Columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
102,906
in JavaScript: function getTopCustomersOfTheYear(howManyCustomers, whichYear) { // Some code here. } getTopCustomersOfTheYear(50, 2010); in C#: public List<Customer> GetTopCustomersOfTheYear(int howManyCustomers, int whichYear) { // Some code here } List<Customer> customers = GetTopCustomersOfTheYear(50, 2010); in PHP: public function getTopCustomersOfTheYear($howManyCustomers, $whichYear) { // Some code here } $customers = getTopCustomersOfTheYear(50, 2010); Is there any language out there which supports this syntax: function GetTop(x)CustomersOfTheYear(y) { // Some code here } returnValue = GetTop(50)CustomersOfTheYear(2010); Isn't it a more semantic, more readable way of writing a function? Update: The reason I'm asking is that I'm writing an article about a new syntax for a new language. I thought that having such a syntax for declaring methods could be nicer and friendlier to developers and would flatten the language's learning curve, because it is closer to natural language. I just wanted to know whether this feature has already been considered elsewhere.
Yes, and yes. Yes, there's such a language, and yes, many people find it more readable once they get used to it. In Objective-C, the method would be: - (NSArray*)getTop:(int)count customersOfTheYear:(Year)year; That's actually a pretty contrived example that doesn't read very well, so here's a better one from actual code: + (UIColor *)colorWithRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue alpha:(CGFloat)alpha; That's the prototype for a method that returns a new UIColor instance using the red, green, blue, and alpha values. You'd call it like this: UIColor *violet = [UIColor colorWithRed:0.8 green:0.0 blue:0.7 alpha:1.0]; Read more about message names with interspersed parameters in The Objective-C Programming Language.
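Not part of the original answer, but a hedged aside for comparison: C# cannot intersperse parameters into a method name, yet named arguments (available since C# 4.0) recover some of the call-site readability the question is after. The sketch below reuses the Customer type and method signature from the question and omits usings.

```csharp
// Sketch only: the method itself is the one from the question.
public List<Customer> GetTopCustomersOfTheYear(int howManyCustomers, int whichYear)
{
    // Some code here
    return new List<Customer>();
}

// With named arguments the call site reads closer to the proposed syntax:
List<Customer> customers = GetTopCustomersOfTheYear(howManyCustomers: 50, whichYear: 2010);
```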
{ "source": [ "https://softwareengineering.stackexchange.com/questions/102906", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
102,958
At my place of employment, we've had some serious growing pains. We went from a development team of 3 to 10, and the company itself has grown 30% in the past year. By most measurements, we're doing well. Unfortunately, the quality of our software has suffered. In a meeting today with my division's manager, I proposed a project team meeting a day or two after each product has launched. We could discuss budget concerns, scope, what went wrong, and what went right, ideally learning from our mistakes. We build sites/apps for other people, so our time is either billable or non-billable. A meeting like this would fall under the latter. My manager shot it down almost immediately: "That time isn't billable. It'll make us get behind on another project because we waste time at the end of that one talking about it." I was so caught off guard by this logic that I didn't even bother fighting him on it. So my question: I see the value in post-project meetings, but he doesn't. Is there documented proof of post-project meetings helping save time and money in the long (or short) run? Intuitively I think they would, but he is clearly more worried about a small amount of non-billable time from the 5 people who would need to be there.
Looking Back, Looking Ahead is about as close to documented proof of the idea as you'll find. The Project Post-Mortem: A Valuable Tool for Continuous Improvement is a blog post about it. The Art Of The Post-Mortem makes this point about the idea: The origins of the Post-Mortem are with the military, who routinely use this kind of process to debrief people on the front lines. But its management application is essential to any high performing, learning organization.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/102958", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35062/" ] }
102,999
The version of Python which I am using is 2.6, and there are also 2.7 and 3.x. Usually I use Python for some trivial program/snippet. I realize there are some major differences between 2.x and 3.x. I would really like to know: if I am going to start a bigger project with Python, which version should I use? Should I upgrade to 2.7, go to 3.x, or stay with 2.6? The decision should be based on these criteria: Number of users on the internet as a community. More users mean more open-source packages and more help from them. Functionality. Support from the official development team. Compatibility with existing modules/packages. Thanks!
I would suggest Python 2.7 myself. It's the latest release in the Python 2.x series. Most Python modules are made to work with Python 2.x. There is a movement to try to move to Python 3, but many of the Python 3 modules are written for both 2 and 3. Remember not to use old features which are no longer available in Python 3, so that you can just run 2to3 on your code to make it run on Python 3. If you go with Python 3, you're one of the early adopters and you will likely have to tell others to download Python 3 (a lot of computers will just have Python 2). On the other hand, new features are only going to come to Python 3. Python 2 is permanently in maintenance, so I would not suggest still using Python 2 in 10 years. If you want to keep an eye on Python packages and their compatibility with Python 3, you can keep an eye on this site: http://python3wos.appspot.com/
{ "source": [ "https://softwareengineering.stackexchange.com/questions/102999", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15309/" ] }
103,031
I'm working on a medium sized (100k lines) code base, it's all relatively recent code (less than a year old) and has good unit test coverage. I keep coming across methods which are either no longer used anywhere or only referenced in unit tests which only test that specific method. Should I remove this code if I'm certain that it's no longer needed? Reasons to remove it: Less code, less bugs Less code is easier for others to digest It's still under source control Reasons to keep it: Can be used as reference It may be useful sometime It may have been written to 'round-out' the functionality for a class
Most of your reasons to keep it are utterly irrelevant, put simply. If the code isn't used, throw it away; any benefit involved in keeping it can be trivially derived from source control. At most, leave a comment saying which revision to find it in. Quite simply, the sooner you cut the code, the sooner you don't have to waste time maintaining it, compiling it, and testing it. Those advantages massively outweigh the trivial benefits you have outlined, all of which can be derived from source control anyway.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103031", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35054/" ] }
103,233
Quite simply, why would I want to write code that works for all cases and scalable data when all I need to do is repeat the same process a few times with a few minor tweaks? I'm unlikely to need to edit this again any time soon. It looks like a lot less work to just go... function doStuff1(){/*.a.*/} function doStuff2(){/*.b.*/} function doStuff3(){/*.c.*/} And if I ever need to add something... function doStuff4(){/*.d.*/} And if I need to remove it, I remove it. It's harder to figure out how to turn all of those into one straightforward pattern that I can just feed data into to handle all the cases, and then make a bunch of changes I don't feel I'm ever going to have to make. Why be DRY when it looks like a quick cut+paste is going to be so much less work?
If you repeat yourself, you can create maintainability issues. If doStuff1-3 all have similarly structured code and you fix a problem in one, you could easily forget to fix the problem in the other places. Also, if you have to add a new case to handle, you can simply pass different parameters into one function rather than copy-pasting all over the place. However, DRY is often taken to an extreme by clever programmers. Sometimes, to avoid repeating yourself, you have to create abstractions so obtuse that your teammates cannot follow them. Sometimes the structure of two things is only vaguely similar, and different enough. If doStuff1-4 are different enough that refactoring them to not repeat yourself forces you to write unnatural code or perform clever coding backflips that will make your team glare at you, then it may be OK to repeat yourself. I've bent over backwards to not repeat myself a couple of times in unnatural ways and regretted the end product. I always err on the side of DRY, repeating myself only in the rare case when I think the benefits in readability are worth the risk of someone forgetting to fix a bug in multiple places. Taking that advice into account: since your case sounds like "repeat the same process a few times with a few minor tweaks", I would definitely work hard to not repeat myself. Assuming minimal "tweaks", they can be handled with different parameters that change the behavior, or perhaps dependency-injected pieces that perform different subtasks. Why be DRY when it looks like a quick cut+paste is going to be so much less work? Famous last words. You will regret thinking that when a junior engineer tweaks/fixes/refactors one doStuff and doesn't even realize the others exist. Hilarity ensues. No, mostly heartburn ensues. Every line of code costs more. How many code paths must you test with so many repeated functions? With one function, you just have to test one main path with a few behavioral modifications. If copy-pasted, you have to test every doStuff separately. Odds are you'll miss one, a customer will hit an unwelcome bug, and you'll have some unwelcome emails in your inbox.
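To make the parameter-driven alternative concrete, here is a minimal sketch, written in C# since the doStuff functions above are only pseudocode; StuffKind and the tweak comments are invented for illustration.

```csharp
// One function with the "tweak" passed in as data, instead of doStuff1..doStuff3.
enum StuffKind { A, B, C }              // hypothetical: one value per minor tweak

void DoStuff(StuffKind kind)
{
    // shared work that used to be copy-pasted into every doStuffN ...
    switch (kind)
    {
        case StuffKind.A: /* tweak a */ break;
        case StuffKind.B: /* tweak b */ break;
        case StuffKind.C: /* tweak c */ break;
    }
    // more shared work ...
}

// "doStuff4" becomes a new enum value plus one new case,
// and a bug in the shared part gets fixed exactly once.
```

When the tweak is behaviour rather than data, the same shape works with an Action or Func parameter instead of the enum.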
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103233", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1525/" ] }
103,263
I'm starting with SCRUM and I have a problem understanding one thing. How does SCRUM handle backlog items that take longer than one sprint?
Such items are either called an Epic, which must be divided into smaller user stories that are shorter than a single sprint and can therefore be planned, or a Theme, which is divided into Epics and those in turn into ordinary stories. Epics and Themes share one main characteristic: a high level of uncertainty. They cannot be properly estimated (the estimate is usually very high, and because of that they do not fit into a single sprint). So it is fine to start with such stories, but you cannot plan them until the product owner breaks them down into smaller, specific stories. These stories are used only to note a bigger requested feature (Epic) or a whole feature set (Theme); breaking them down makes the feature specific. This also follows the iceberg structure of the product backlog.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103263", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30960/" ] }
103,273
Can I use MIT/X11-licensed plug-ins in my commercial application? If yes, what precautions should I take?
With an MIT/X11-licensed product, you CAN: re-use the code freely for your own use; re-use the code freely for non-commercial AND commercial re-distribution, whether in source or binary form. You CANNOT: claim authorship of the software; thus you cannot attack the original author for using or publishing his original version. So, yes, you CAN use MIT/X11-licensed plug-ins in your commercial application. MIT/X11 is basically a simple contract that says: Person or company X created Y. Y belongs to X, but X is granting you the right to use it and do whatever you want with it. X cannot be held accountable for anything that goes wrong with what you do with Y.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103273", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15798/" ] }
103,385
I work with a code base that is over 500K lines of code. It is in serious need of refactoring. Refactoring efforts have been identified that will take longer than the normal two-week sprint, and they can't be broken up into smaller tasks as I have seen suggested in other answers on this site. The product needs to work at the end of the iteration, and a partial refactoring will leave the system in an unusable state because the dependencies between items are horrible. So what would be the best way to approach this hurdle? Again, breaking it down into smaller pieces is not an option; that has already been done. Update: People seem to need an explanation of why this can't fit into a two-week sprint. There is more involved in a sprint than just writing code. We have a policy of no code without tests. That policy did not always exist, and a large portion of the codebase does not have them. Also, some of our integration tests are still manual tests. The issue is not that the refactoring itself is so large; it is that small changes affect many parts of the system, and we need to ensure that those parts still operate correctly. We can't put off or extend a sprint because we have monthly hotfixes, so this change extending past a sprint cannot stop other work from being added to the hotfix. Refactoring vs Redesign: Just because our development process is not efficient enough to handle this refactoring in a two-week cycle does not warrant renaming it a redesign. I would like to believe that in the future we could accomplish the exact same task within a two-week cycle as our process improves. The code in question has not had to change in a very long time and is quite stable. Now, as the direction of the company is becoming more adaptable to change, we want this portion of the code base to be as adaptable as the rest, which requires refactoring it. Based on the answers here, it is becoming apparent that scaffolding is missing that would be necessary to make this refactoring work in the time frame of normal sprints. Answer: I am going to do the branch-and-merge approach that Corbin March suggested the first time, so we can learn more about these problem areas and how to identify the missing tests. I think moving forward we should take the approach that Buhb suggested of identifying the areas that are missing tests and implementing those first, then doing the refactoring. That will allow us to keep to our normal two-week sprint cycle, just as many here have said should always be the case for refactoring.
My suggestion: Create a branch Merge daily from trunk to your branch and resolve conflicts. Work until it's done. Your branch may be outside core development for several sprints. Merge back to trunk. There's no getting around the fact that it will probably get ugly. I don't envy you. In my experience, when you drastically change a project, it's easier to merge ongoing development into the new paradigm versus somehow merging the new paradigm into a now-changed trunk after everything is finished. Still, it's going to hurt.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103385", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23377/" ] }
103,501
I'll be working as a development lead for a startup, and I've suggested that we use VMs for development. I'm not talking about each developer having a desktop with VMs for testing/development; I mean having a server rack where all VMs are managed, and having the developers work from a micro PC (ChromeOS, anyone?) locally, or even remotely from their home computer. To me, the benefits are that it's extremely scalable, cheaper in the long run, easier to manage, and that we utilize the hardware to its maximum potential. As for cons, I can't think of any particular showstoppers other than that we'll need someone to set up and maintain the setup. I was hoping that some of you might have had a similar setup at your place of employment and would be able to weigh in with your opinions. Thanks.
What are you hoping to save, as a fraction of the development budget? It seems to me that you are worrying about an epsilon. The cost of machines for developers is less than 5% of the total cost to keep a developer on staff. Therefore the only important question is "will it save developers time?" It could, if they don't have to spend time installing and upgrading development software. Or it could cost time, if the network goes down, or the server goes down, or, most likely, if the responsiveness across the net is the least bit lacking. Modern development depends on keystroke-by-keystroke interaction with an IDE, or at least a very intelligent editor. Delaying that interaction by even a few tens of milliseconds destroys developer productivity. There is also the cost for developers to learn this new way of working. If that takes even one day per developer, you have already spent more in labor than the cost of a new desktop. These are not objections to VMs, but potential objections to remote development.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103501", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35213/" ] }
103,508
Today I saw this article, which describes the relevance of the SOLID principles in F# development: F# and Design principles – SOLID. While addressing the last one, the "Dependency inversion principle", the author said: From a functional point of view, these containers and injection concepts can be solved with a simple higher order function, or hole-in-the-middle type pattern which are built right into the language. But he didn't explain it further. So, my question is: how is dependency inversion related to higher-order functions?
Dependency Inversion in OOP means that you code against an interface which is then provided by an implementation in an object. Languages that support higher-order functions can often solve simple dependency inversion problems by passing behaviour as a function instead of an object which implements an interface in the OO sense. In such languages, the function's signature can become the interface, and a function is passed in instead of a traditional object to provide the desired behaviour. The hole-in-the-middle pattern is a good example of this. It lets you achieve the same result with less code and more expressiveness, as you don't need to implement a whole class that conforms to an (OOP) interface to provide the desired behaviour for the caller. Instead, you can just pass a simple function definition. In short: code is often easier to maintain, more expressive and more flexible when one uses higher-order functions. An example in C#. Traditional approach: public IEnumerable<Customer> FilterCustomers(IFilter<Customer> filter, IEnumerable<Customer> customers) { foreach(var customer in customers) { if(filter.Matches(customer)) { yield return customer; } } } //now you've got to implement all these filters class CustomerNameFilter : IFilter<Customer> /*...*/ class CustomerBirthdayFilter : IFilter<Customer> /*...*/ //the invocation looks like this var filteredDataByName = FilterCustomers(new CustomerNameFilter("SomeName"), customers); var filteredDataByBirthday = FilterCustomers(new CustomerBirthdayFilter(SomeDate), customers); With higher-order functions: public IEnumerable<Customer> FilterCustomers(Func<Customer, bool> filter, IEnumerable<Customer> customers) { foreach(var customer in customers) { if(filter(customer)) { yield return customer; } } } Now the implementation and invocation become less cumbersome. We don't need to supply an IFilter implementation anymore. We don't need to implement classes for the filters anymore. var filteredDataByName = FilterCustomers(x => x.Name.Equals("CustomerName"), customers); var filteredDataByBirthday = FilterCustomers(x => x.Birthday == SomeDateTime, customers); Of course, this can already be done with LINQ in C#. I just used this example to illustrate that it's easier and more flexible to use higher-order functions instead of objects which implement an interface.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103508", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/963/" ] }
103,567
I find myself repeatedly annoyed by having to teach freshmen about special language rules (like array-to-pointer decay) that have absolutely nothing to do with programming in itself. So I wondered: What is the programming language with the smallest number of special language rules, where everything is first class and can be composed without annoying technical restrictions? Wouldn't such a language be the perfect teaching language? Moderator Note We're looking for long answers that provide some explanation and context. Don't just list a language: please explain why you think the language answers the question. Answers that don't explain anything will be deleted. See Good Subjective, Bad Subjective for more information.
When it comes to 'very few rules', I would argue Lisp or Smalltalk would win. The bare syntax can be written on one beer tab. But in my experience, the simplicity of Lisp and Smalltalk does not mean they are simple to understand and easy to teach. While not the 'pure' way, in my experience the to-do-list style of imperative languages is the easiest for newbies to grasp. Therefore, I would suggest Python, Ruby or something at a similar level of abstraction: you find (nearly) every basic concept in them (OK, no pointers), but you don't need to understand everything from the start to make something work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103567", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3684/" ] }
103,659
I am considering building an application which, at its core, would consist of thousands of if...then...else statements. The purpose of the application is to be able to predict how cows move around in any landscape. They are affected by things like the sun, wind, food sources, sudden events, etc. How can such an application be managed? I imagine that after a few hundred IF statements, it would be as good as unpredictable how the program would react, and debugging what led to a certain reaction would mean having to traverse the whole IF-statement tree every time. I have read a bit about rules engines, but I do not see how they would get around this complexity.
It sounds like all these conditional statements that you're talking about should really be data that configures your program, rather than part of your program itself. If you can treat them that way, then you'll be free to modify the way your program works by just changing its configuration, instead of having to modify your code and recompile every time you want to improve your model. There are a lot of different ways to model the real world, depending on the nature of your problem. Your various conditions might become rules or constraints that are applied to the simulation. Instead of having code that looks like: if (sunLevel > 0.75) { foreach (var cow in cows) { cow.desireForShade += 0.5; } } if (precipitation > 0.2) { foreach (var cow in cows) { cow.desireForShelter += 0.8; } } you can instead have code that looks like: foreach (var rule in rules) { foreach (var cow in cows) { cow.apply(rule); } } Or, if you can develop a linear program that models cow behavior given a number of inputs, each constraint might become a line in a system of equations. You might then turn that into a Markov model that you can iterate. It's hard to say what the right approach is for your situation, but I think you'll have a much easier time of it if you consider your constraints to be inputs to your program and not code.
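A hedged sketch of what "rules as data" could look like in C# terms; the World and Cow types, their property names, and the thresholds are invented for illustration and are not from the answer.

```csharp
// Hypothetical types: World carries the inputs (sun, rain, ...); Cow carries the drives.
public record Rule(Func<World, bool> Applies, Action<Cow> Effect);

public static class CowSimulation
{
    // The "thousands of if statements" become entries in a list (or rows in a config file).
    public static readonly List<Rule> DefaultRules = new()
    {
        new(w => w.SunLevel > 0.75,     cow => cow.DesireForShade   += 0.5),
        new(w => w.Precipitation > 0.2, cow => cow.DesireForShelter += 0.8),
    };

    public static void ApplyRules(IEnumerable<Rule> rules, World world, IEnumerable<Cow> cows)
    {
        foreach (var rule in rules)
            if (rule.Applies(world))        // condition checked once against the environment
                foreach (var cow in cows)
                    rule.Effect(cow);       // effect applied to each cow
    }
}
```

Debugging then means asking which rules fired for a given world state, rather than stepping through a tree of nested ifs.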
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103659", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5094/" ] }
103,720
It is always difficult for me to choose between singular and plural forms for class names: CustomerRepository vs. CustomersRepository, CustomerService vs. CustomersService, CustomerController vs. CustomersController. And for composite names it is even more difficult: OrderCustomerRepository vs. OrderCustomersRepository vs. OrdersCustomersRepository. Which approach do you prefer, and why?
The only thing I pluralize is collections. foreach (var customer in customers) { // do something with customer } All of your examples are individual objects, so they are not pluralized. Yes, the names refer to objects that might have multiple instances, but all you need to know in the name is the object entity (i.e. customer). So in all of your examples, the singular is the correct form. Makes life much easier.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103720", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7369/" ] }
103,807
I was reading the Wikipedia article on Douglas McIlroy and found a quote that mentions "The real hero of programming is the one who writes negative code." What does that mean?
It means reducing lines of code, by removing redundancies or using more concise constructs. See for example this famous anecdote from the original Apple Lisa developer team: When the Lisa team was pushing to finalize their software in 1982, project managers started requiring programmers to submit weekly forms reporting on the number of lines of code they had written. Bill Atkinson thought that was silly. For the week in which he had rewritten QuickDraw’s region calculation routines to be six times faster and 2000 lines shorter, he put "-2000" on the form. After a few more weeks the managers stopped asking him to fill out the form, and he gladly complied.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103807", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
103,840
After reading the book The Pragmatic Programmer, one of the ideas I found most interesting was "write code that writes code". I tried searching the net for some more explanations or articles about it, and while I found some good articles on the subject, I still haven't found any specific code implementation or good examples. I feel it's still not a very common topic; it lacks documentation and isn't embraced by many people, and I would like to know more about it. What do you think about the subject? Is it something that will really increase your productivity? What are some good resources on the subject, among books, blogs, slideshows, etc.? Some code examples would be greatly appreciated to help me better understand its implementation. Here's the wiki page on the subject with various relevant programming techniques, like Meta Programming, Generative Programming and Code Generation.
In the Lisp world, it is quite common to see code which writes code which writes code (and so on). So, any decently sized Lisp or Scheme project will serve as a good code example. I'd recommend looking at the Racket compiler and runtime sources, as well as Bigloo; their libraries are just brilliant. As for productivity: I'm using metaprogramming as a dominant technique in almost all of my development work, and it clearly helps a lot, both reducing the code size and increasing its readability. The key is in using Domain Specific Languages, and metaprogramming is one of the most efficient ways of implementing them.
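For a taste of the idea without Lisp macros, here is a deliberately trivial C# sketch (the class name, status names and output path are all invented): a program whose output is more source code.

```csharp
// A minimal "program that writes a program": generate a constants class
// from a list of names instead of maintaining it by hand.
using System.IO;
using System.Linq;

class GenerateConstants
{
    static void Main()
    {
        var names = new[] { "Pending", "Active", "Archived" };   // could come from a schema or config

        var source =
            "// <auto-generated> do not edit by hand\n" +
            "public static class StatusNames\n" +
            "{\n" +
            string.Concat(names.Select(n => $"    public const string {n} = \"{n}\";\n")) +
            "}\n";

        File.WriteAllText("StatusNames.generated.cs", source);   // hypothetical output path
    }
}
```

Real metaprogramming systems go much further, but the payoff is the same: the repetitive code is derived from one description instead of typed by hand.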
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32325/" ] }
103,893
Consider these methods: public List<Employee> GetAllEmployees() { using (Entities entities = new Entities()) { return entities.Employees.ToList(); } } public List<Job> GetAllJobs() { using (Entities entities = new Entities()) { return entities.Jobs.ToList(); } } public List<Task> GetAllTasksOfTheJob(Job job) { using (Entities entities = new Entities()) { return entities.Tasks.Where(t => t.JobId == job.Id).ToList(); } } The using block is the same and has been repeated 3 times here (of course, more than 100 times in the real application). How is it possible to apply the DRY (Don't Repeat Yourself) principle to the using block? Is it considered a breach of the DRY principle at all? Update: I'm not talking about what has been implemented inside the using block. What I actually mean here is the using (Entities entities = new Entities()) line. This line is repeated 100 times or more.
One idea would be to wrap it with a function that takes a Func. Something like this: public K UsingT<T,K>(Func<T,K> f) where T:IDisposable,new() { using (T t = new T()) { return f(t); } } Then your code above becomes: public List<Employee> GetAllEmployees() { return UsingT<Entities,List<Employee>>(e=>e.Employees.ToList()); } public List<Job> GetAllJobs() { return UsingT<Entities,List<Job>>(e=>e.Jobs.ToList()); } public List<Task> GetAllTasksOfTheJob(Job job) { return UsingT<Entities,List<Task>>(e=>e.Tasks.Where(t => t.JobId == job.Id).ToList()); } I made Entities a type parameter too, because I'm assuming you have more than one type you're doing this with. If you're not, you could remove it and just use the type parameter for the return type. To be honest, though, this sort of code doesn't help readability at all; in my experience junior co-workers have a really tough time with it as well. Update: Some additional variations on helpers you might consider: //forget the Entities type param public T UsingEntities<T>(Func<Entities,T> f) { using (Entities e = new Entities()) { return f(e); } } //forget the Entities type param, and return an IList public IList<T> ListFromEntities<T>(Func<Entities,IEnumerable<T>> f) { using (Entities e = new Entities()) { return f(e).ToList(); } } //doing the .ToList() forces the results to enumerate before `e` gets disposed.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103893", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
103,897
I am a C# programmer, and most of my development is for websites along with a few Windows applications. As far as C goes, I haven't used it in a long time, as there was no need to. It came to me as a surprise when one of my friends said that she needs to learn C for testing jobs, while I was helping her learn C#. I figured that someone would only learn C for testing if there is development being done in C. From my knowledge, all development related to COM and hardware design is also done in C++. Therefore, learning C doesn't make sense if you need to use C++. I also don't believe in historic significance, so why waste time and money in learning C? Is C still used in any kind of new software development or anything else?
C has the advantage that it is a relatively small language, which makes it easy to implement a C compiler (whereas a C++ compiler is a monster to write) and makes the language easier to learn. Also see the TIOBE index, according to which C is slightly ahead of C++. In (IMO) decreasing order of justification, C is still used a lot for: Embedded stuff. It's way easier to port a C compiler to a small platform than it is to port a C++ compiler. Also, C advocates claim that C++ "does too much behind their backs"; however, IMO that's FUD. Systems programming. Again, that's usually due to claims that it is easier to "know what the compiler is doing". However, many embedded programs would benefit from, e.g., templates and other key C++ features. Open source software. That's mostly an attitude problem, though: OSS has always preferred C over C++ (whereas it's the opposite in large parts of the industry). Torvalds' irrational hatred might actually be the most important reason for this on Linux.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103897", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32803/" ] }
103,899
I'm suffering a crisis of confidence in my ability as a computer programmer. Yesterday I tried to come up with my own shortest path algorithm for a graph and after some hours I simply threw in the towel and learned Dijkstra's algorithm. Is this the kind of thing a good programmer should be able to "reinvent" in a couple of hours or am I being unrealistic? Oh well, at least I was able to reinvent bubble sort :D
A good programmer should realize that a great algorithm has already been written to solve the problem and shouldn't waste time reinventing the wheel. I doubt Dijkstra came up with the shortest-path algorithm in a few hours, so that seems like a really high standard to use for determining whether someone is a 'good programmer'.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103899", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35341/" ] }
103,914
The static keyword on a member in many languages means that you shouldn't need to create an instance of that class to be able to access that member. However, I don't see any justification for making an entire class static. Why and when should I make a class static? What benefits do I get from making a class static? I mean, after declaring a static class, one should still declare all members which he/she wants to access without instantiation as static too. This means that, for example, the Math class could be declared normal (not static) without affecting how developers code. In other words, making a class static or normal is kind of transparent to developers.
It makes it obvious to users how the class is used. For instance, it would be complete nonsense to write the following code: Math m = new Math(); C# doesn’t have to forbid this but since it serves no purpose, might as well tell the user that. Certain people (including me) adhere to the philosophy that programming languages (and APIs …) should be as restrictive as possible to make them hard to use wrong: the only allowed operations are then those that are meaningful and (hopefully) correct.
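A small hedged sketch of the point in C# (the converter class is invented for illustration; it is not the Math class from the question):

```csharp
// Marking the class static documents that it is pure utility code,
// and the compiler enforces it: no instances, no instance members.
public static class TemperatureConverter          // hypothetical example class
{
    public static double CelsiusToFahrenheit(double c) => c * 9.0 / 5.0 + 32.0;
}

// var t = new TemperatureConverter();            // compile-time error, just like "new Math()"
double f = TemperatureConverter.CelsiusToFahrenheit(21.5);   // the only way to use it
```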
{ "source": [ "https://softwareengineering.stackexchange.com/questions/103914", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
104,048
My professor keeps referring to this Java example when he speaks of "robust" code: if (var == true) { ... } else if (var == false) { ... } else { ... } He claims that "robust code" means that your program takes into account all possibilities, and that there is no such thing as an error - all situations are handled by the code and result in valid state, hence the "else". I am doubtful, however. If the variable is a boolean, what is the point of checking a third state when a third state is logically impossible? "Having no such thing as an error" seems ridiculous as well; even Google applications show errors directly to the user instead of swallowing them up silently or somehow considering them as valid state. And it's good - I like knowing when something goes wrong. And it seems quite the claim to say an application would never have any errors. So what is the actual definition of "robust code"?
what is the point of checking a third state when a third state is logically impossible? What about a Boolean that allows for a NULL state, one that is neither true nor false? Now what should the software do? Some software has to be highly crash-resistant, like pacemakers. Ever seen someone add a Boolean column to a database and initialize the existing data to NULL? I know I've seen it. Here are a few links that discuss what it means to be robust in terms of software: Robust Programming; Robust Definition; Robustness, the forgotten code quality; How to write robust code. If you think there is one universally agreed-upon definition of "robust" here, good luck. There can be some synonyms, like bomb-proof or idiot-proof. The Duct Tape Programmer would be an example of someone who usually writes robust code, at least in my understanding of the terms.
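A hedged C# illustration of that third state; C#'s bool? plays the role a boxed Boolean plays in the question's Java, and ReadFlagFromDatabase is a made-up stand-in, not a real API.

```csharp
// The "impossible" third state becomes very real with a nullable flag.
bool? isActive = ReadFlagFromDatabase();          // hypothetical lookup; the column may be NULL

if (isActive == true)       { /* normal "true" path */ }
else if (isActive == false) { /* normal "false" path */ }
else
{
    // The flag was never set. Robust code decides here whether to
    // fail safe, log the anomaly, or fall back to a default.
}

static bool? ReadFlagFromDatabase() => null;      // stand-in so the sketch is self-contained
```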
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104048", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6994/" ] }
104,096
I am working at a private bank, a leading mid-size bank in the local market. We are going to create our core banking solution. The existing solution has been developed in Java using IBM Visual Age 4.0. It is important to discuss the architecture first: we currently have more than 350 branches working in standalone mode, which means they operate in a self-contained environment. They have their own database server (IBM DB2 9.7) and they communicate with other branches via sockets to send and receive data. Having more than 5 years of experience with .NET, I am trying to convince my superiors to choose the .NET platform, but they are reluctant and unwilling. It is my job to encourage them to choose the best available platform to create a large-scale enterprise application. In simple words, we are going to create a very large-scale enterprise financial application, centralized and integrated, which connects all branch networks, with a scalable, solid architecture that can easily evolve over time. I want professional people to comment on the above scenario. Which platform should we choose, .NET or Java? All our resources are currently working in Java, and we have a homogeneous environment (no Linux, no Mac and no UNIX). Any ideas, thoughts, or points, technical or non-technical (i.e. from an administrative or management point of view), will be really appreciated.
Let's talk about costs: You state that everything has been done in Java so far. Why change, then? You might use parts of the old system or create a reusable domain model, and integration will be easier. The developers are probably used to Java, so why would you spend money to train them on .NET? There's no reason for this, as .NET has no outstanding advantage over Java in your scenario. Development costs are most likely the biggest lot, followed by maintenance costs. Why would you want to increase these for a personal preference and little to no infrastructural/architectural gain? So if I were your boss, you'd better prove that in the long run it's cheaper (while maintaining quality) to migrate to .NET. But I doubt you can prove that. Let's talk about the strategic decisions: So I have a Java environment and I don't have to pay huge license fees. Most of the software I use is open source, and Java's portability is great. Why should I lock myself into a rather unportable, one-vendor system? There had better be a reason for this! Better support for the systems? Better scalability and distribution? Not really. Please take my advice: in your situation I wouldn't dare switch from Java to .NET. There are no obvious reasons to do it. The primary strength of .NET is still rich GUIs, quite in contrast to Java. Maybe frontend/client software can be written in .NET, but for your backend, I'd stick with Java and I wouldn't try to start the grand rewrite in .NET. Please note that as a developer, I, too, prefer .NET to Java. But as always, such decisions depend on various other factors. From a manager's perspective, I can see no reason to change platforms.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104096", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35419/" ] }
104,332
I am looking for some best-practice strategies for unit testing code written for embedded systems. By embedded systems, I mean code such as device drivers, ISR handlers, etc., stuff that is pretty close to the metal. Most of the unit tests are not possible without testing on the hardware with the aid of an ICE. Sometimes, the embedded unit also needs to be hooked up to other stimuli such as mechanical switches, stepper motors and light bulbs. This usually occurs in a manual fashion; automation would be great, but hard and expensive to achieve. Update: I came across a C testing framework that seems to be quite successful in testing embedded projects. It uses the idea of mocking hardware. Check out Unity, CMock, and possibly Ceedling. Update 06Jul2016: Came across cmocka, which seems to be more actively worked on.
I would abstract away the hardware dependencies at the earliest possible step, and build the system on software emulation/test harnesses, enabling all sorts of test frameworks. Often my development PC was used to test as much as 95% or more of the complete system. The cost of the extra overhead (another layer of abstraction) was easily won back by the cleaner code generated as a result of that abstraction. Testing the truly bare-metal parts of an embedded system is usually a separate application (a unit test?) that hammers the firmware well beyond what the applications can ever hope to achieve. Automation can be done, at a cost, but it is not typical; unless, that is, you have the budget to build a unit-test hardware harness including a full ICE. That is absolutely fine, as generally the functional tests are small.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104332", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20579/" ] }
104,379
This is not an opening gambit for RoR bashing - honest! I'm learning Ruby and the Rails framework. Prima facie it appears to be pretty cool, and a wonderful experience compared to PHP. (In fact, it's reminding me of happier days with C# and .NET.) However, going into this, I have no experience with this framework or language, and I'm curious: what are the current downsides, or things you wish you'd known when you were beginning? (Maybe this should be made a community wiki?)
This is from experience learning, continuing to learn, and writing a relatively simple application in Rails. 1) Learning Curve. Rails is deceptively simple. The tutorials, videos, and books all demonstrate how quickly you can get a working (if ugly) application, but these really just scratch the surface. They tend to rely heavily on code generation and "scaffolding", which admittedly is a good tool when learning but quickly outlives its usefulness. Make no mistake, Rails is hard to master. Once you get past the very basics (more on this later) you will run headlong into a wall if you need to do more than the extremely simplistic "demo app" functionality that you see touted. You can get by with a basic knowledge of Ruby while learning, but you quickly need to pick up Ruby properly or you'll be left high and dry (and not the good kind of DRY) if you need to go outside the Rails constraints. Rails is, as I like to call it in a loving way, paint-by-numbers programming. If you stick 100% to the conventions (i.e. stay within the lines and use the colors you're told to use) you can make decent applications quickly and easily. If and when you have to deviate, though, Rails can go from your best friend to your worst enemy. 2) When All You Have Is a Hammer... Rails does simplistic CRUD applications very well. The problem comes when your app has to do more than just read/write from a database. Now, for the record, the last Rails version I used was 2.3.4, so things may have changed since then, but I ran into major issues when business requirements changed so the application had to have a small workflow system built into it and integrate with a legacy PHP application. The Rails convention of "one form, one model" works fine for trivial apps and data-entry applications, but not so much when you need to do processing logic, or have workflows, or anything that isn't the typical "User enters data into a few text fields, hits Submit" type of thing. It can be done, but it's by no means "easy", or rather it wasn't when I last used Rails. Also, Rails does not like to play well with other applications that aren't using its preferred methods of data access; if you have to interface with an application that doesn't have a "Web 2.0" style API, you have to work around Rails instead of with it. Again, I speak from experience here, as this is what happened to me. 3) It's New. Finally, Rails is still the "new kid on the block" in many areas. This doesn't matter for personal use or "I think it's cool and want to learn it" type scenarios, but speaking as someone who would prefer to use Rails at my day job, if you aren't in a location where Rails is widespread, it can be very difficult to find full-time work as a Rails developer. It's still largely the domain of "hip, new startups" and not a major player in most metropolitan areas. Your mileage may vary in this regard, but I know in my area (Tampa) Rails is essentially nonexistent. 4) Fire and Motion. Rails is ever-changing. This is both a good and a bad thing; it's good because the community evolves and embraces new concepts. It's bad because the community evolves and embraces new concepts. It can be very overwhelming for a Rails newbie because typically, when you run into an issue and look around, you'll see people either recommending such-and-such gem to fix it, or saying that way is bad anyway, you shouldn't use it, and here's a better way... and you end up having a laundry list of additional tools to learn along with Rails to keep up with the Rails cognoscenti. Things like Git, BDD/RSpec, Cucumber, Haml/Sass, and a cornucopia of other things all float around and get pushed as the "right way to do things" in Rails-land, and speaking from experience you may end up being swamped trying to learn a dozen or more technologies in addition to Rails, because using the standard Rails toolkit feels "wrong". This is now compounded even more by Rails 3.1 making Sass and CoffeeScript, of all things, the default, so a total Rails newbie not only has to learn Ruby and Rails but Sass (arguably simple if you know CSS) and CoffeeScript (not crazy difficult but certainly different enough from raw JavaScript) at a bare minimum to get started, plus, it can be assumed, Git. Even without factoring in RSpec and friends, and the dozen or more gems that you'll typically end up with, that's 4 different things you have to learn before you can seriously begin to write Rails applications. Compare this to a language like C#, or Java, or even PHP, where your HTML/CSS/JavaScript/SQL knowledge isn't going to change and you just have to learn the language itself and perhaps the framework nuances.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104379", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28114/" ] }
104,386
I have developed a habit of writing comments in my code by putting the comment on the same line as the opening brace, after the brace. I've found that this saves vertical space. It also leaves a hint about why something was done, but I'm wondering whether it's readable for others. Example:

void DoSomeInterestingImageManipulation(char *pImage)
{//This will convert the image to formatABC which allows x% space savings for storage
    if(pImage && pImage[0] == 0xFF)
    {//Process the extra case where image internal format needs decompression
        ++pImage;
        //...
        //...
        //...
    }
    //Proceed normally
    *pResult = Foo(pImage);
}

Do you consider it easier to read or harder to read?
I would not recommend that style, as it makes it difficult to scan for braces.

void DoSomeInterestingImageManipulation(char *pImage)
{//This will convert the image to formatABC which allows x% space savings for storage
    if(pImage && pImage[0] == 0xFF)
    {//Process the extra case where image internal format needs decompression
        ++pImage;
        //...
        //...
        //...
    }
    //Proceed normally
    *pResult = Foo(pImage);
}

Look at the end braces and tell me where the block begins. Now do the same for this one:

void DoSomeInterestingImageManipulation(char *pImage)
{
    //This will convert the image to formatABC which allows x% space savings for storage
    if(pImage && pImage[0] == 0xFF)
    {
        //Process the extra case where image internal format needs decompression
        ++pImage;
        //...
        //...
        //...
    }
    //Proceed normally
    *pResult = Foo(pImage);
}

Can you tell where the block begins more easily?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104386", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8029/" ] }
104,406
In different design books that I have read, a big emphasis is sometimes put on the number of methods that a class should have (considering an OO language such as Java or C#). Often the examples reported in those books are very neat and simple, but they rarely cover a "serious" or complex case. However, the range seems to be between 5 and 8. In a project I developed a class "Note", with its attributes as properties: Title, Description, CreateDate, etc. Then some basic methods like getRelations (if the note is assigned to different documents), getExpiryDate, etc. However, as development of the application proceeded, more functionality was required and, therefore, more methods. I know that the fewer methods a class has, the more loosely coupled it is. That is indeed a good advantage in terms of modularity and reusability, plus it is easier to edit. By the way, if in our context there is no need (or even sense) to create subclasses and all the needed functions are related to that class, how many more methods can we attach? I agree that with more than 15 methods a little redesign might be required. But even in that case, if deleting some of the methods or using inheritance is not an option, what would be the proper way?
Have as many methods as you need. I would try to keep the number of public methods down to that 5-8 rule if possible. Honestly, most people have the opposite problem, with crazy super-methods that need to be broken out more, not less. It really does not matter how many private helper methods you have. In fact, if you had to stay below 8 methods in Java you could hit the limit with a class that only had a constructor, a toString, and the getters/setters for 3 properties... which is not exactly a robust class. The bottom line is: do not worry about how many methods your class has. Worry about making sure your class does not take on unrelated concerns, and that you have a reasonable public interface with easy-to-understand methods.
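A small sketch of the shape being recommended, in C# with invented names (Invoice and the helpers are hypothetical): a small public surface backed by as many private helpers as readability needs.

```csharp
public class InvoicePrinter                        // hypothetical class
{
    public string Print(Invoice invoice)           // the one thing callers care about
    {
        return BuildHeader(invoice) + BuildLines(invoice) + BuildFooter(invoice);
    }

    // Private helpers don't count against the feel of the public interface.
    private string BuildHeader(Invoice invoice) { /* ... */ return ""; }
    private string BuildLines(Invoice invoice)  { /* ... */ return ""; }
    private string BuildFooter(Invoice invoice) { /* ... */ return ""; }
}
```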
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104406", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29557/" ] }
104,407
So, I have this big project which is in the process of being refactored by me. I am changing a lot of stuff, so there is no chance of getting it to compile any time soon. I am living in a special git branch which I named cleanup (and which is going to get merged into master eventually, of course). The problem is that we have a policy of never committing non-compiling code (ideally it should also work, but it must compile and link, at the very least). So, until I am finished with this huge task, I am unable to commit anything (for review or for bookkeeping). This is not the way I like to work (I believe most people commit at least once a day or so). What do you think? Is there a solution I am overlooking? Can I later tell git to aggregate commits or something? I could live with non-compiling commits as long as they stay in the cleanup branch. Edit: On the subject of pushing vs. committing: I am aware that there is a huge difference, but later there will be broken revisions when I merge my stuff into master. So if you browse through the history (or git bisect...), the "local" revisions will be world-accessible. So only committing locally and not pushing is not the best solution, because it will cause trouble later on (when the subject is closed and forgotten for some time). In short: local commits will be pushed eventually, and the global history should not show non-compiling commits.
The git merge --squash command allows you to create a single commit on top of the current branch whose effect is the same as merging another branch. The command updates the working tree and stages the changes in the index, so all you have to do next is commit: git checkout master git merge --squash cleanup git commit -m "Merge cleanup branch" The git rebase -i command can also squash commits but requires more work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104407", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8977/" ] }
104,459
In web development, the terms "server" and "client" are often referred to when discussing how pages are requested on the web. They are also used extensively when talking about PCs, phones, tablets, etc. The question that came to my mind was: what qualifies a computer to be considered a client? I took a look at this article from Wikipedia and noticed that clients are considered service requesters whereas servers are considered service providers. If this is true, couldn't all computers that request a service, in a sense, be considered clients? The stereotype I've noticed is that a client computer is usually synonymous with a "consumer" computer, but according to Wikipedia this is not entirely accurate. Any ideas on how to draw the line in this common scenario?
Clients request data from servers. Does this help? Also, clients and servers are processes (in this context), not computers. If you point your browser to http://localhost and serve a webpage from your machine, your computer acts as both client and server. The client is your web browser; the server is Apache, NginX (or IIS), which serves the page.
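A minimal hedged sketch of "client and server are processes" in C# (the URL and program are illustrative only): the client below is just a process making a request; pointed at localhost, the process that answers it runs on the same computer.

```csharp
// The client is a process that requests data; the server answering it
// (Apache, Nginx, IIS, ...) is simply another process on the same machine.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ClientDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();                              // this process plays the client role
        string page = await client.GetStringAsync("http://localhost/");   // the local web server process responds
        Console.WriteLine($"Received {page.Length} characters from the server process.");
    }
}
```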
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104459", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22218/" ] }
104,598
I'm a dev at one of the big-name tech companies. I like the job for many reasons: I do interesting work on a cool product I solve challenging problems and use a lot of high-level skills (quantitative, creative, writing, presenting) It pays well The problem is that I feel I need a more relaxed atmosphere (shorter hours, less performance pressure, and more flexibility), in order to free up time for other pursuits and reduce stress. The ideal would be a job that's around 30-35 hours a week, where there is flexibility to work more or less in a given week. Can anyone suggest where to look for a job like this, where I wouldn't have to sacrifice too much on the above points? (Obviously I would have to sacrifice pay.) My employer does not generally offer part-time employment. The closest thing I can think of is when I did summer internships at my university's CS department. The work was very intellectually challenging, but if I needed to go home a couple hours early or get flexibility on a due date, nobody batted an eyelash. However, I'd like to find out if there are alternatives to academia since from what I've seen the pay there is a gigantic drop from what I'm currently making. I've done freelance development before, but I do like that as an employee of a large company I have a lot of things taken care of for me (e.g. benefits and guaranteed stable employment).
I've been doing 30hrs/week jobs for more than a decade now. In my experience you will not find a niche in the industry where part-time jobs are waiting for you to grab one. Instead, you will have to carve such a job out of the common job market. That's not easy, because many candidates only bargain for money when they interview, so companies are not used to employees wanting to work less, but it's not impossible. I have found the following important when looking for a part-time job: Be good at what you do. When you are good, they will want you, and they are prepared to pay for it. Some candidates will want more money, some will want more holidays, a few will want fewer working hours. In an interview, explicitly ask about the company's overtime policy. Is overtime something normal at the shop, done by everyone regularly? Unpaid? If so, you will be unlikely to really be working less than 40hrs, no matter what contract you sign. If you have the feeling they might be hard to convince to let you work 30hrs, start out offering less (20hrs?) and then let yourself be "persuaded" to work 30hrs. :) I did this with my first part-time job. Don't expect too much pay or other benefits when you first do this. Under these conditions you are closer to a junior job than you used to be. Once you can show excellent references for two or three such jobs over the last decade, it will be easier to convince employers that your special needs are worth the hassle. Once you have such a job, be sure to follow these rules: I usually explain upfront, right in the interview, that I never work for free. I clock every hour I work for the company, certainly including any overtime I put in, and I expect to take the same number of hours off in compensation. (I have, twice over more than a decade, accepted money instead. But that was me accepting it, rather than them pressing me to do it.) Do not let them press you into doing more hours without compensation. You might have relinquished other benefits (like money) for working 30hrs/week; there is no point in relinquishing what you got for that. (The others wouldn't give up that money even if pressed hard, right?) Remind everyone that you only have 75% of the time others have. Make sure that your team leader, when planning resources, remembers that. If they have never worked with such an employee, this will need constant reminding in the beginning. We all know that in this industry crunch time is a common phenomenon. When others work overtime, you might have to do that, too. However, be sure to make it absolutely clear that for you, 40hrs/week already is overtime.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104598", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35584/" ] }
104,900
When you arrive in the morning, you find that your software does not work anymore, even though it did when you left yesterday evening. What do you do? What do you check first? What do you do to stop being angry and start working on your problem? Do you blame your colleagues and go directly to them? What can be done to avoid being in such a situation?
The usual suspects are: You thought it worked yesterday, but after a full day of work you were too blind to realize that it didn't work. This morning you can no longer rely on what was in the IDE's cache memory yesterday. The workstation rebooted last night, or a nightly maintenance operation cleared /tmp directories. Something has changed in the code base: check whether someone (possibly yourself) has committed changes between your last compile of yesterday and your last compile of today. Something has changed in the support libraries: check whether those libraries have been recompiled or upgraded. The cause may be inside the project for specific libraries, or outside it if a new version of an apparently independent package has been deployed. Something has changed in the testing environment: a new version of a virtual machine, a stub that has been modified, changes in a remote database server... Something has changed in the compilation chain: changes in Makefiles, a new version of the IDE, of the compiler, of the standard libraries...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104900", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16868/" ] }
104,928
Background of my working environment My manager has no background or understanding of computers or software whatsoever. It is highly likely he hasn't seen code in any form (not even from a physical distance of 10 feet or less) in his life. There is no one who understands the complexity of what I am asked to implement. To the point that if I semi-hardcode, no one would know. On Joel's test we score an unbelievable 0. The problems The manager and, at times, other "seniors" keep changing the requirement specification. Changes which, if good engineering is to be done and not patchy "fixes", require changes in the underlying design. There is absolutely no one who looks at code (probably because no one knows how to, or even whether it should be done), which means no one will ever be able to: Appreciate the complexity of the problem or the elegance of the solution. Suggest improvements to the approach. Appreciate the quality of the code. Point out where the code can be improved. A lot of jargon is used which makes sense grammatically but fails to make any sense any other way. It doesn't feel, behave or work like a software company. The question What should be done? Especially regarding there being no one who would point out improvements in my code. Update To answer HLGEM's (and possibly others') question about what I've done to try and fix it: I offered to set up Redmine and introduce source control to everyone. I said I would recommend distributed (git or mercurial) but would also talk about centralized ones and let the team decide. The response was that things are being done and will be done within weeks. I haven't seen that, nor am I aware whether other parts of the company use it.
The short version : Run. The somewhat longer version : If the manager doesn't know how to run a project, and if the senior goes along with it, then you have next to no chance of fixing things. In order to manage software projects, a manager does need to understand something about software. If managers don't, they need to learn first. What are your chances you could persuade your management and your senior(s) that they got it all wrong? What are the chances you will teach them something? I have been in a similar situation once (only there was no senior). I quit after a terrible year, and never looked back (except in disgust).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/104928", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5940/" ] }
105,130
Is it a bad sign if users submit bug reports for things that are by design? Does it typically mean that the application is confusing or unclear, or should I just chalk it up to a one-off user mistake unless specifically stated? (I don't actually have any such reports. This is a purely hypothetical question about whether or not the existence of by-design "bugs" is a bad thing.)
Is it a bad sign? I think it's a warning that's worth looking into, but I also think it's bound to happen. When people submit any kind of feedback to me, I try to filter it into three buckets: Bugs Feature Requests Mis-communication Bugs Bugs are when something obviously doesn't work the way you would expect, nor the way the user would expect. Like, it asked me for my name, I entered "Scott", hit enter, and it said, "Hi Joe!" Feature Requests This is like "I know we never talked about this, but can the program infer from my mouse gestures that I'm left-handed and move the OK button to the left side of the screen?" This is when the current behaviour matches both your and the user's expectations, but they want to change the expectation. Mis-communication This is when you would expect one outcome from a scenario, but the user expects a different outcome. Sometimes this becomes a feature request, if they just haven't communicated their expectations, but they thought they did. Sometimes this becomes a bug if your expectation is proven to be wrong. However, many times you have knowledge that the user doesn't have. What if they said, "On this screen, I can add a record for myself twice with the same first and last name! That's obviously a bug!" Your response might be, "There are lots of people in the world with the same first and last name, so we don't require that combination to be unique. We have a cleanup task that runs at night and emails a Possible Duplicates Report to customer service when it thinks it detects a duplicate with a similar name and address, and asks them to check it manually." So you should read every bug report, but most complex systems are going to have bug reports that are really just feature requests, or possibly a mis-communication of the requirements. Not understanding the underlying complexity of the real world is probably the biggest source of these issues.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105130", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8810/" ] }
105,158
Not a trial, but a truly free version that they offer users to download.... I was wondering in case I somehow missed the option on their website. Any version of Visual Studio will do. If not, are there any open-source alternatives (that have similar functionality, or possibly degraded equivalent) of Visual Studio? I am running on Windows 7 if that makes any difference...
Visual Studio Express is a set of freeware integrated development environments (IDE) developed by Microsoft that are lightweight versions of the Microsoft Visual Studio product line. A comparison is available here . If you are a student you may want to take a look at DreamSpark .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105158", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35740/" ] }
105,191
Is it appropriate to store the image files in the database? Or would it be better to store only the path of the file in the database, while keeping the file itself on the server? Are there any other methods for doing this right?
I strongly advise you to store the images in the filesystem and not in the database. Storing images in the database has several disadvantages: The database might grow unexpectedly large. Sometimes space is an issue. For example with SQLServer express you have a 4GB limit. Data migrations can become a pain, for example if you switch from SQLServer to Oracle Queries can become very slow and you'll have a high database load Interoperability with other applications is better if the images are on the filesystem and other applications use a different database. You can also access them directly and do not need database tools. Worse performance in general You'll probably have to create temporary files when retrieving the images from the database anyway. That's unnecessary. These disadvantages far outweigh the cost of keeping the paths to the images stored in the database synchronized with the filesystem. There're only few special cases in which it's better to store the images in the database.
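To make the recommended approach concrete, here is a rough sketch in Java (the table name, column names and directory layout are invented for illustration, and real code would need validation, error handling and cleanup of orphaned files): write the bytes to the filesystem and store only a small path string in the database.

```java
import java.nio.file.*;
import java.sql.*;
import java.util.UUID;

public class ImageStore {
    private final Path baseDir;          // e.g. an uploads directory outside the web root
    private final Connection connection; // any JDBC connection

    public ImageStore(Path baseDir, Connection connection) {
        this.baseDir = baseDir;
        this.connection = connection;
    }

    /** Writes the image to the filesystem and records only its relative path in the database. */
    public String save(long ownerId, byte[] imageBytes, String extension) throws Exception {
        String relativePath = ownerId + "/" + UUID.randomUUID() + "." + extension;
        Path target = baseDir.resolve(relativePath);
        Files.createDirectories(target.getParent());
        Files.write(target, imageBytes);                     // the binary data lives on disk

        // Only the short path string goes into the (hypothetical) "user_image" table.
        try (PreparedStatement stmt = connection.prepareStatement(
                "INSERT INTO user_image (owner_id, image_path) VALUES (?, ?)")) {
            stmt.setLong(1, ownerId);
            stmt.setString(2, relativePath);
            stmt.executeUpdate();
        }
        return relativePath;
    }

    /** Reads an image back by resolving the stored path against the base directory. */
    public byte[] load(String relativePath) throws Exception {
        return Files.readAllBytes(baseDir.resolve(relativePath));
    }
}
```

Serving the file can then bypass the database entirely (the web server reads it straight from disk), which is where most of the performance benefit comes from.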
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105191", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/738/" ] }
105,313
Stroustrup claims that Cfront, the first C++ compiler, was written in C++ ( Stroustrup FAQ ). However, how is it even possible that the first C++ compiler be written in C++? The code that makes up the compiler needs to be compiled too, and thus the first C++ compiler couldn't have been written in C++, could it?
The key is right here: The first C++ compiler (Cfront) was written in C++. To build that, I first used C to write a "C with Classes"-to-C preprocessor. "C with Classes" was a C dialect that became the immediate ancestor to C++. That preprocessor translated "C with Classes" constructs (such as classes and constructors) into C. It was a traditional preprocessor that didn't understand all of the language, left most of the type checking for the C compiler to do, and translated individual constructs without complete knowledge. I then wrote the first version of Cfront in "C with Classes". So the first version of Cfront wasn't written in C++, but rather in the intermediate language. The ability to create C compilers and preprocessors directly in C led to many of the innovations (and massive security holes) in C. So you write your new preprocessor that turns your "C with Classes" code into straight C (because straight C can do anything), then you use "C with Classes" to write a C++ compiler (not that you couldn't do it in C, it would just take a while), and then you use that C++ compiler to write a more efficient/complete compiler in C++. Got it?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105313", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24257/" ] }
105,344
The Open Source Initiative lists 9 different licenses in their list of "License that are popular and widely used or with strong communities" . I want to license my project as open-source. Unfortunately, I do not speak legalese. Is there some chart I could consult that will help me make the right choice, or at least point me in the right direction? For example a table summarizing the differences between the licenses, or perhaps a flow-graph using my requirements to guide me into the correct license for me? I also intend to meet a lawyer, but any information to start with will help.
Jeff Atwood has done a pretty good job explaining the differences among the multitude of Open Source software licenses in plain English here: http://www.codinghorror.com/blog/2007/04/pick-a-license-any-license.html The most important consideration affecting your decision will be your redistribution terms. That is, will you allow your code to be used in commercial applications and if so, will you require such applications to open-source their own code? This is where the GPL is notable: If you license your code under the GPL, anyone using your code must also license their code under the GPL. Since the GPL requires that all of your code be open-source, this pretty much excludes its use in companies that wish to keep their code proprietary. Note that the GPL does allow you to use GPL'd code for in-house business applications, so long as you do not redistribute those applications to a third party. See Also http://haacked.com/archive/2007/04/04/there-are-only-four-software-licenses.aspx
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105344", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8331/" ] }
105,352
It seems everyone doing web applications nowadays wants to use MVC for everything. I find it hard to convince myself to use this pattern, however. I understand the general idea is to separate the backend logic from the frontend that represents the program. Generally, it seems that the views always depend on the controller to some extent, which ends up depending on the model. I don't see what advantage adding the controller gets me. I've read a lot of hype about "this is the way applications should be designed", but maybe I still don't understand what is supposed to go where. Whenever I talk to others about MVC it seems everyone has a different idea of what belongs in what category. So, why should I use MVC? What do I gain by using MVC over just separating the frontend from the backend logic? (Most "advantages" I see of this pattern are gained just by separating interface from implementation, and fail to explain the purpose of having a separate "controller")
Heh. Martin Fowler agrees with your confusion about MVC: I don't find it terribly useful to think of MVC as a pattern because it contains quite a few different ideas. Different people reading about MVC in different places take different ideas from it and describe these as 'MVC'. If this doesn't cause enough confusion you then get the effect of misunderstandings of MVC that develop through a system of Chinese whispers. However, he goes on to give one of the more cogent explanations of what motivates MVC: At the heart of MVC is what I call Separated Presentation. The idea behind Separated Presentation is to make a clear division between domain objects that model our perception of the real world, and presentation objects that are the GUI elements we see on the screen. Domain objects should be completely self contained and work without reference to the presentation, they should also be able to support multiple presentations, possibly simultaneously. You can read Fowler's entire article here .
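To make "Separated Presentation" a bit more concrete, here is a minimal sketch (Java is used only for illustration and all names are invented): the domain object knows nothing about any UI, the view only renders what it is handed, and a thin controller wires user input to the model and pushes results back to the view.

```java
// Domain object: pure business data and rules, no reference to any presentation code.
class Account {
    private long balanceInCents;

    void deposit(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
        balanceInCents += cents;
    }

    long balanceInCents() { return balanceInCents; }
}

// View: only knows how to present what it is given.
interface AccountView {
    void showBalance(String formattedBalance);
    void showError(String message);
}

// Controller: translates user input into model calls and hands results to the view.
class AccountController {
    private final Account model;
    private final AccountView view;

    AccountController(Account model, AccountView view) {
        this.model = model;
        this.view = view;
    }

    void onDepositRequested(String rawAmount) {
        try {
            model.deposit(Long.parseLong(rawAmount));
            view.showBalance(String.format("%.2f", model.balanceInCents() / 100.0));
        } catch (IllegalArgumentException e) {   // NumberFormatException is a subclass
            view.showError("Invalid amount: " + rawAmount);
        }
    }
}
```

The payoff is that Account can be unit-tested and reused with a web page, a desktop window or a test double as the view, which is the "multiple presentations, possibly simultaneously" point in the quote above.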
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105352", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/886/" ] }
105,412
With a team of 3 other web developers, I have had the title of lead web developer for a year now. This is my first job as a lead. I'm pretty set on what my roles are from management. I'm curious what other senior-level developers do. I'm primarily curious as to what other people's responsibilities are as the lead/senior developer in other organizations, as I've only ever worked in a small/medium company. (a) What would one expect out of a senior / lead web developer of an organization (regardless of size)? (b) Is there a difference between a web development leader and a senior web developer? I reviewed some threads and there was just one that discussed when you should call yourself a senior developer, but it doesn't comprehensively discuss the roles of what a senior developer should do with his/her team.
Project manager's point of view You are the single (or default) point of contact for anything related to the technical side of things. You are expected to keep the work of the other developers moving by sheer force, leading by example, or whatever your method is. Not-lead developer's point of view You are the role model. Expect the less-experienced developers to look up to you and to ask you technical questions when they're stuck. Think If you're really asking the internet to define your role/job, stop. Talk to management for a real answer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105412", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28477/" ] }
105,591
You know them, those errors that make NO sense. Where it seems like a gremlin just jumped deep inside your chips and messed up something. Do you take a walk, write stuff, call an uncle?
Quit. No, not your job! Just get up and go home. You're done for the day or the weekend. 19 times out of 20 when you come back to the problem next, the solution will present itself within an hour.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105591", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29032/" ] }
105,645
I just started an AI & Data Mining class, and the book, AI Application Programming, starts off with an overview of the history of AI. The first chapter deals with the history of AI from the 1940s to the present. One particular statement stuck out at me: [In the 60s] AI engineers overpromised and underdelivered... What was the reason for the overconfidence? Was it because of mathematical prediction models showing that a breakthrough was around the corner, or due to the ever-increasing hardware capability that was there to take advantage of?
My personal opinion is that it was due to hubris . There were some mighty big egos walking the halls of MIT, Stanford, etc. back in the 60s and 70s and they just knew they had cracked this problem. Right. Although I wasn't part of that universe in those days, in the mid-to-late 80s I was working with similarity searching. Our work was initially based on research done by Gerard Salton at Cornell in the 60s, which used weighted attribute vectors to represent documents and queries. It actually was a useable approach, but when neural nets went down in flames (at least until they discovered back propagation ), Salton's work was included with it because of similarities (pun intended) to neural nets. He was trying to do something different, but there were several years where he was lumped in with the rest. Every time someone comes up with a solution for the Current Brick Wall™ they get very excited and declare AI to be a solved problem. Only it's not. Because behind that brick wall is another one. This cycle has repeated over, and over, and over again, and not just in AI. I firmly believe that all prospective computer scientists and engineers should be required to take a semester-long class in the History of Computing, with special emphasis on the number of Next Big Things™ that went up like rockets ... and then made a very large crater in the valley floor. Addendum: I spent the Labor Day weekend with an old friend and we talked a little about this. Context — figuring out what that means, how to represent it, and then how to use it — emerged as possibly the single biggest hurdle to be cleared. And the longer you look at it, the bigger a hurdle it becomes. Humans are capable of amazing, near-instantaneous partial-pattern matching of "what is happening" against a vast store of "what has happened before," and then combining that knowledge of the past with the present situation to create a context in which understanding can lead to action. For example, we can use it as a powerful filter of "things we can/can't ignore" as we whiz down the Waldo Grade at 60 MPH with traffic 4 lanes abreast and separated by only 3 or 4 feet (or less!). On the spectrum of stuff > data > information > knowledge > understanding > judgement we are still straining to get to the information/knowledge steps, and even that is limited to highly constrained domains of discourse .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105645", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7073/" ] }
105,786
I have multiple applications, some of which use data from the same sources. Is it best practice (or what are the pros/cons) to: leave the data in databases shared by multiple applications (saves space as only one database is needed; complicates indexing as different applications have different querying needs), or import data daily into per-app databases (uses more space as duplicated data exists in per-app databases; easier indexing as each app can focus on its individual needs)? I may have left out other advantages/disadvantages, so please list any, and also: how is this done at your workplace?
Space is cheap these days, so I'd advise using one database per application. Sharing one database amongst multiple applications has some serious disadvantages: The more applications use the same database, the more likely it is that you hit performance bottlenecks and that you can't easily scale the load as desired. SQL databases don't really scale. You can buy bigger machines but they do not scale well in clusters! Maintenance and development costs can increase: Development is harder if an application needs to use database structures which aren't suited for the task at hand but have to be used as they are already present. It's also likely that adjustments of one application will have side effects on other applications ("why is there such an unnecessary trigger??!"/"We don't need that data anymore!"). It's already hard with one database for a single application, when the developers don't/can't know all the use-cases. Administration becomes harder: Which object belongs to which application? Chaos rising. Where do I have to look for my data? Which user is allowed to interact with which objects? What can I grant whom? Upgrading: You'll need a version that is the lowest common denominator for all applications using it. That means that certain applications won't be able to use powerful features. You'll have to stick with older versions. It also increases development costs a bit. Concurrency: Can you really be sure that there are no chronological dependencies between processes? What if one application modifies data that is outdated or should've been altered by another application first? What about different applications working on the same tables concurrently? Compared to that, data imports/ETL processes are almost always pretty straightforward and simple. Load the data as often as you need to; space is cheap. You can account for scalability for each application independently, adjust and tweak the structures as you need them, and there won't be concurrency issues. Side effects can be traced much more easily, too. Edit: I'd like to point out, though, that as @Saeed mentioned, if you can encapsulate data manipulations in a service which is commonly available, then it's easier to share one database with multiple applications. As long as you don't need raw access that is a very good approach.
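As a minimal sketch of the "import data daily into a per-app database" option the question mentions (Java/JDBC here; the table and column names are invented, and a real ETL job would add incremental loads, logging and scheduling):

```java
import java.sql.*;

/** Nightly job: copy the rows this application needs from the shared source database
 *  into the app's own database, so day-to-day queries never touch the shared one. */
public class NightlyCustomerImport {

    public void run(Connection source, Connection target) throws SQLException {
        target.setAutoCommit(false);                 // load the snapshot atomically
        try (Statement read = source.createStatement();
             ResultSet rs = read.executeQuery("SELECT id, name, email FROM customer");
             Statement clear = target.createStatement();
             PreparedStatement write = target.prepareStatement(
                     "INSERT INTO app_customer (id, name, email) VALUES (?, ?, ?)")) {

            clear.executeUpdate("DELETE FROM app_customer");   // full refresh, kept simple
            while (rs.next()) {
                write.setLong(1, rs.getLong("id"));
                write.setString(2, rs.getString("name"));
                write.setString(3, rs.getString("email"));
                write.addBatch();
            }
            write.executeBatch();
            target.commit();                         // readers see either the old or the new snapshot
        } catch (SQLException e) {
            target.rollback();                       // keep yesterday's copy if anything fails
            throw e;
        }
    }
}
```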
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105786", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5511/" ] }
105,827
Well, I love C++, and I have been using it for a while: I like all the libraries (Allegro, SDL, QT, Ogre, etc.), but I have a problem: I don't understand pointers. Do I really need them? I just program for fun, but I want to study it some day. Thanks.
Yes, definitely. They are a fundamental concept of programming, no matter if you program in a language that supports direct pointer management or not, but even more so if you do.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105827", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30815/" ] }
105,851
One of the items in Joshua Bloch's Effective Java is the notion that classes should allow mutation of instances as little as possible, and preferably not at all. Oftentimes, the data of an object is persisted to a database of some form. This has led me to thinking about the idea of immutability within a database, especially for those tables that represent a single entity within a larger system. Something I have been experimenting with recently is the idea of trying to minimize the updates I do to table rows representing these objects, and trying to perform inserts instead as much as I can. A concrete example of something I was experimenting with recently. If I know I might append a record with additional data later on, I'll create another table to represent that, sort of like the following two table definitions: create table myObj (id integer, ...other_data... not null); create table myObjSuppliment (id integer, myObjId integer, ...more_data... not null); It is hopefully obvious that these names are not verbatim, but just to demonstrate the idea. Is this a reasonable approach to data persistence modeling? Is it worth trying to limit updates performed on a table, especially for filling in nulls for data that might not exist when the record is originally created? Are there times when an approach like this might cause severe pain later on?
The primary purpose of immutability is to ensure that there's no instant in time when the data in memory is in an invalid state. (The other is because mathematical notations are mostly static, and so immutable things are easier to conceptualize and model mathematically.) In memory, if another thread tries to read or write data while it's being worked with, it might end up going corrupt, or it might itself be in a corrupt state. If you have multiple assignment operations to an object's fields, in a multithreaded application, another thread might try to work with it sometime in between -- which could be bad. Immutability remedies this by first writing all the changes to a new place in memory, and then doing the final assignment as one fell-swoop step of rewriting the pointer to the object to point to the new object -- which on all CPUs is an atomic operation. Databases do the same thing using atomic transactions : when you start a transaction, it writes all the new updates to a new place on disk. When you finish the transaction, it changes the pointer on disk to where the new updates are -- which it does in a short instant during which other processes can't touch it. This is also the exact same thing as your idea of creating new tables, except more automatic and more flexible. So to answer your question, yes, immutability is good in databases, but no, you don't need to make separate tables just for that purpose; you can just use whatever atomic transaction commands are available for your database system.
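As a small illustration of that "one fell-swoop" step from application code (Java/JDBC, with invented table and column names): group the writes in one transaction, and the database exposes them all at once on commit, or none of them on rollback.

```java
import java.sql.*;

public class TransferService {
    /** Either both rows change or neither does; other readers never see a half-done state. */
    public void transfer(Connection conn, long fromId, long toId, long cents) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);                   // start collecting changes instead of applying them one by one
        try (PreparedStatement debit = conn.prepareStatement(
                 "UPDATE account SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = conn.prepareStatement(
                 "UPDATE account SET balance = balance + ? WHERE id = ?")) {

            debit.setLong(1, cents);
            debit.setLong(2, fromId);
            debit.executeUpdate();

            credit.setLong(1, cents);
            credit.setLong(2, toId);
            credit.executeUpdate();

            conn.commit();                           // the "pointer swap": both updates become visible together
        } catch (SQLException e) {
            conn.rollback();                         // discard the partial work; the old state stays untouched
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}
```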
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105851", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32219/" ] }
105,902
Why did the C master Dennis Ritchie introduce pointers in C? And why did other programming languages like VB.NET or Java or C# eliminate them? I have found some points on Google, and I want to hear your comments too. Why are they eliminating pointer concepts in modern languages? People say C is the basic language and pointers are the concept that makes C powerful and outstanding and still able to compete with more modern languages. Then why did they eliminate pointers in more modern languages? Do you think knowledge of pointers is still important for new programmers? People are using VB.NET or Java these days, which support more advanced features than C (and do not use any pointer concepts), and many people, as I see now (my friends), choose these languages, ignoring C, as they support advanced features. I tell them to start with C. They say it's a waste to learn the concepts of pointers when you're doing the advanced things in VB.NET or Java which are not possible in C. What do you think? Updated: The comments I read on Google are: The earlier computers were too slow and not optimized. Using pointers makes it possible to access an address directly, and this saves time instead of making a copy of it in function calls. Security is significantly worse using pointers, and that's why Java and C# did not include them. These are some of the points I found. I still need some valuable answers. That would be greatly appreciated.
Back in those days, developers were working much closer to the metal. C was essentially a higher level replacement for assembly, which is almost as close to the hardware as you can get, so it was natural you needed pointers to be efficient in solving coding problems. However, pointers are sharp tools, which can cause great damage if used carelessly. Also, direct use of pointers open up the possibility to many security problems, which weren't an issue back then (in 1970, the internet consisted of about a few dozen machines across a couple of universities, and it was not even called like that...), but became more and more important since. So nowadays higher level languages are consciously designed to avoid raw memory pointers. Saying that "advanced things done in VB.Net or Java are not possible in C" shows a very limited point of view, to say the least :-) First of all, all of these languages (even assembly) are Turing complete so in theory whatever is possible in one language, is possible in all. Just think about what happens when a piece of VB.Net or Java code is compiled and executed: eventually, it is translated into (or mapped to) machine code, because that is the only thing which the machine understands. In compiled languages like C and C++, you can actually get the full body of machine code equivalent to the original higher level source code, as one or more executable files/libraries. In VM based languages, it is more tricky (and may not even be possible) to get the entire equivalent machine code representation of your program, but still eventually it is there somewhere, within the deep recesses of the runtime system and the JIT. Now, of course, it is an entirely different question whether some solution is feasible in a specific language. No sensible developer would start writing a web app in assembly :-) But it is useful to bear in mind that most or all of those higher level languages are built on top of a huge amount of runtime and class library code, a large chunk of which is implemented in a lower level language, typically in C. So to get to the question, Do you think knowledge on pointers to the young people [...] is important? The concept behind pointers is indirection . This is a very important concept and IMHO every good programmer should grasp it on a certain level. Even if someone is working solely with higher level languages, indirection and references are still important. Failing to understand this means being unable to use a whole class of very potent tools, seriously limiting one's problem solving ability in the long run. So my answer is yes, if you want to become a truly good programmer, you must understand pointers too (as well as recursion - this is the other typical stumbling block for budding developers). You may not need to start with it - I don't think C is optimal as a first language nowadays. But at some point one should get familiar with indirection. Without it, we can never understand how the tools, libraries and frameworks we are using actually work. And a craftsman who doesn't understand how his/her tools work is a very limited one. Fair enough, one may get a grasp of it in higher level programming languages too. One good litmus test is correctly implementing a doubly linked list - if you can do it in your favourite language, you can claim you understand indirection well enough. 
But if for nothing else, we should do it to learn respect for the programmers of old who managed to build unbelievable things using the ridiculously simple tools they had (compared to what we have now). We are all standing on the shoulders of giants, and it does us good to acknowledge this, rather than pretending we are the giants ourselves.
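For reference, here is a minimal sketch of that litmus test in Java, where object references play the role that raw pointers play in C; it is only an illustration, not a production container.

```java
/** Minimal doubly linked list; each prev/next reference is the indirection a C pointer would provide. */
public class DoublyLinkedList<T> {
    private static final class Node<T> {
        T value;
        Node<T> prev;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head;
    private Node<T> tail;

    public void addLast(T value) {
        Node<T> node = new Node<>(value);
        if (tail == null) {              // empty list: the new node is both ends
            head = node;
            tail = node;
        } else {
            node.prev = tail;            // re-wire references; no element data is copied
            tail.next = node;
            tail = node;
        }
    }

    /** Unlinks the first node holding the given value by pointing its neighbours at each other. */
    public boolean removeFirst(T value) {
        for (Node<T> n = head; n != null; n = n.next) {
            if (n.value.equals(value)) {
                if (n.prev == null) head = n.next; else n.prev.next = n.next;
                if (n.next == null) tail = n.prev; else n.next.prev = n.prev;
                return true;
            }
        }
        return false;
    }
}
```

If the four re-assignments in removeFirst feel obvious to you, you already understand indirection, whether or not you ever write a line of C.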
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105902", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35064/" ] }
105,912
Is it possible to alter the code of the Chili plugin, which had its latest release in July 2008 and is licensed under the MIT license, and then license it under the GPL? As far as I can see, there is no restriction about the new code being licensed under the same license. Is it really so, or is there a minimum number of changes? In my case, I would change the jQuery plugin into normal JavaScript code that is executed in a CMS. This essentially means that, among other things: The code will not use the "ChiliBook" namespace. The function will not be invoked as $($element).chili(), but as GlobalObject.ChiliHighlighter.process($jquery_element), where "GlobalObject" is a JavaScript object used from the CMS. The code will allow other modules to alter the GlobalObject.ChiliHighlighter object to add functions that are optionally called from GlobalObject.ChiliHighlighter.process() when they are defined. As an alternative, as the repository I am using allows me to include code not licensed under the GPL 2 or a higher license when the code is not maintained anymore, could the plugin be considered not maintained anymore, as its last version was released three years ago?
It's technically legal. The MIT (Expat) license places a few restrictions on you. These are a subset of the GPL license. Therefore, if you relicense the code under the GPL, and keep the MIT notice, then you've satisfied the terms of the MIT license and may legally redistribute the code. Note that you may not claim copyright ownership; you'll have to acknowledge the original copyright. [edit] Some people don't seem to understand how F/OSS works in conjunction with copyright and license law. Everything starts with copyright, if only because that's the default. Under the copyright doctrine, the author gets the right to make copies of source code. Under the MIT license, that right is granted to me, as well as the right to recursively grant it to others. Note that the MIT license explicitly includes the right to sublicense. Quoting: "the rights to use, copy, modify, merge, publish,distribute, sublicense, and/or sell" When I sublicense code, I cannot grant rights that I didn't originally have. In the case of the GPL, I am explicitly forbidden to sublicense only some rights. But neither in law nor in the MIT license do I have an obligation to sublicense all rights as a whole. Therefore, the MIT license grants me the explicit right to sublicense rights, and neither the law nor the MIT license prohibits me to sublicense only some rights. Also, neither restricts the form in which I do. Therefore, I have the undeniable right to grant a GPL sublicense on that code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/105912", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44/" ] }
106,004
tl;dr: What is a less extreme (but still noticeable) alternative to the word "fluent", when saying e.g. "I am fluent in C++/Python/whatever?" I think I can call myself "fluent" in C#, because I know the language and runtime very well, and I'm very familiar with the .NET framework's APIs and classes, etc. I would like to claim the same thing for Python and C++. But while I can program in Python (I did so for an entire summer, making a website with Django), for example, I would not call myself fluent because my code isn't always "Pythonic" (e.g. using map / filter vs. list comprehensions), and I'm not too intimate with some aspects of the language and standard library yet (e.g. the introspection API, etc.). Is there a word or phrase I can use on e.g. a resume to describe what I know? I can think of "very familiar with", but is there a better word/phrase I can use?
Would "proficient" be useful, if not that, "competent". Both words suggesting a comfort with tasks given within a certain range.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106004", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11833/" ] }
106,031
Lately I've been developing a web-based management system for a gym. Their previous app was developed in Visual Basic. For the new app, all the front-end scripting uses jQuery, the server is running PHP & MySQL... you know, the typical el-cheapo Linux-based stack. Anyways, I was wondering why JavaScript's prompt, confirm and alert message dialogs are so underused nowadays. There are a lot of jQuery plugins for modal windows and alert messages that try to mimic something that the language already offers and every browser is forced to support properly. Using hand-made or plugin-based solutions is ok for complex forms and special needs, but if you just need to ask for a single value or confirm the deletion of a record, why would you add such overhead to your web app? Furthermore, iOS and Android offer nicely adapted message dialogs when you use prompt, confirm and alert.
1. UX: message boxes are mostly evil Alert boxes are bad in all cases from the UX point of view. In desktop apps. In web apps as alerts or inline JavaScript messages. Everywhere. You can read About Face 3 by Alan Cooper¹ if you want to know why; it explains very well how this interrupts the workflow and annoys the user, and how nearly every alert box which exists in current software is deeply wrong. On page 542, "The dialog that cried “Wolf!”" explains that alert boxes are dismissed routinely, so their model is completely broken. On page 543 of the book are listed three major design principles: Do, don't ask. Make all actions reversible. Provide modeless feedback to help users avoid mistakes. Then, the authors tell us how to replace the alert boxes with the correct design approach. Prompt messages are slightly different. And still, they break the user experience of your app. If you want the user to enter something, consider using a textbox or a textarea, decorating it with JavaScript when needed. Don't be lazy; provide a rich interface in an era of RIA and AJAX-enabled apps; in all cases, if JavaScript is disabled, your prompt will not be displayed. In web pages, both alert boxes and prompts are mostly annoying. Some examples: Some forums let you create lists by prompting infinitely for the list items. It means that while creating the list, you cannot use the page itself, including copy-paste. You also have a single tiny field. What about long text? What about bold and italics? "If you continue, the photo will be removed definitively from your profile. Are you sure?" Of course I am sure! Would I click "Remove photo from my profile" otherwise? Why is your web app supposing I'm so stupid? Actually, Google applications such as GMail show the correct approach. You can remove, delete, destroy whatever you want, and when you do so, the app displays a small "Undo" link. "Do you want to take our greatest survey?" Well, actually I was there to visit your website, but since you bother me with your annoying messages, I would rather go somewhere else. "The right click is disabled on this website in order to protect copyrighted photos." Well, actually I right-clicked to change the language of the spell checker before sending my comment. Sure, I'll send it without checking the spelling. Conclusion: from the user experience point of view, applications use message boxes wrongly most of the time. But wait! Many low-quality websites replace annoying alert boxes with annoying jQuery messages with a semi-transparent background which covers the whole page. So the drawbacks remain. Well, there are other reasons not to use message boxes in web applications: 2. Design: alert boxes have their own design You can't design an alert box at all. You can't change its color, its size, its font. This makes it even more annoying for the user: you were working with a web app, and your workflow is broken by a message which seems to come from nowhere and does not even match the visual aspect of the app. Not counting that the language of the buttons also matches the OS/browser language, not the web application one. For designers, JavaScript messages are much more powerful than the alert boxes. They are also much more extensive. You can add bold and italic, you can choose your own buttons (what about: "We apologize but the password you entered is invalid. [Reset my password] [Try another one] [Cancel]"?)² 3. JavaScript: application flow stops When displaying an alert box, JavaScript stops executing until the user clicks.
On a website, it might be ok. With a web app, it often becomes a problem. 4. Sandbox: don't force the user to reboot his computer Remember the crappy websites which show you an infinite number of message boxes? The only way for users without enough technical background to be able to continue working was in fact to reboot their computer. This brings us to a problem: alert boxes are out of the scope of the website or web application. You are not authorized to prevent the user from accessing other tabs of the browser³. The same problem forced the browsers to solve it in different ways. Firefox, for example, permits access to other tabs when displaying an alert on your tab. Chrome, on the other hand, allows you to check that you don't want to get any alert boxes from a page any longer, but still blocks access to other tabs. While Firefox's approach is perfectly valid, Chrome's can be criticized (since it still blocks every tab), and causes a problem: what if the user was severely annoyed by several message boxes issued by your app and checked the box, and then you tried to show something really important? Right, the user will never see it. The fact remains the same: most users will be annoyed by alert boxes, so they are still not very user friendly, and may severely block a user without enough technical background. Inline JavaScript messages may block the page, but not the browser itself. Since the web app model is a sort of sandboxing, where you can't, for example, access the user's keyboard or reboot the computer or read files from the hard disk or go full-screen or use two monitors, alert boxes with their blocking effect severely break this sandboxing model. Last but not least, what if the user was on another tab when your application decided to show the alert box? What if the user was doing something important, and does not want to interact with your app right now?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106031", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19937/" ] }
106,065
We say that we indent code. I'm writing a string builder which can add/remove tabs to indent code. Sample: builder.Add("<ul>"); builder.Indent(); builder.Add("<li></li>"); builder.Dedent(); // <-- what should this be named. builder.Add("</ul>"); What should I name the method?
Out is the opposite of in, everybody knows this. Why not use Indent() and Outdent() ?
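As a rough illustration of how the suggested name reads in use (sketched in Java rather than the question's C#-style code; details such as using tab characters are just assumptions):

```java
public class CodeBuilder {
    private final StringBuilder out = new StringBuilder();
    private int level = 0;

    public CodeBuilder add(String line) {
        out.append("\t".repeat(level)).append(line).append('\n');
        return this;
    }

    public CodeBuilder indent()  { level++; return this; }

    public CodeBuilder outdent() {            // the counterpart to indent()
        if (level > 0) level--;               // never go past the left margin
        return this;
    }

    @Override
    public String toString() { return out.toString(); }
}

// Usage, mirroring the question's example:
//   new CodeBuilder().add("<ul>").indent().add("<li></li>").outdent().add("</ul>").toString();
```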
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106065", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12629/" ] }
106,095
I've spent a lot of time reading different books about "good design", "design patterns", etc. I'm a big fan of the SOLID approach and every time I need to write a simple piece of code, I think about the future. So, if implementing a new feature or a bug fix requires just adding three lines of code like this: if(xxx) { doSomething(); } It doesn't mean I'll do it this way. If I feel like this piece of code is likely to become larger in the nearest future, I'll think of adding abstractions, moving this functionality somewhere else and so on. The goal I'm pursuing is keeping average complexity the same as it was before my changes. I believe, that from the code standpoint, it's quite a good idea - my code is never long enough, and it's quite easy to understand the meanings for different entities, like classes, methods, and relations between classes and objects. The problem is, it takes too much time, and I often feel like it would be better if I just implemented that feature "as is". It's just about "three lines of code" vs. "new interface + two classes to implement that interface". From a product standpoint (when we're talking about the result ), the things I do are quite senseless. I know that if we're going to work on the next version, having good code is really great. But on the other side, the time you've spent to make your code "good" may have been spent for implementing a couple of useful features. I often feel very unsatisfied with my results - good code that only can do A is worse than bad code that can do A, B, C, and D. Is this approach likely to result in a positive net gain for a software project, or is it a waste of time?
good code that only can do A is worse than bad code that can do A, B, C, D. This smells to me like speculative generality . Without knowing (or at least being reasonably sure) that your clients are gonna need features B, C and D, you are just unnecessarily overcomplicating your design. More complex code is harder to understand and maintain in the long run. The extra complexity is justified only by useful extra features. But we are typically very bad at predicting the future. Most of the features we think may be needed in the future will never ever be requested in real life. So: Good code that can only do A (but it is doing that one thing simply and cleanly) is BETTER than bad code that can do A, B, C, D (some of which might be needed sometime in the future).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106095", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31856/" ] }
106,202
I have been specifically asked to give line by line (or as appropriate - for example, image by image, etc.) explanation or commentary which my boss wants to be able to read and follow. Since he is not a programmer, he can not follow the code so wants it all translated into English. Has anyone been asked to do this before? I have commented on all of the source code and used JSDoc to generate full documentation of all functions, variables, etc... and included an implementation example, and full working demos with comments throughout. Is there anything else I can do to comment the code for non-programmers? This isn't a reasonable request, is it? UPDATE In the end, I managed to explain why it was not a good use of time to do what he was asking. He is a reasonable guy, and just did not have an understanding of what my job involves. Once he saw this post, I think he quickly understood that it was not a normal request. I did provide documentation that is suitable for another programmer to follow (JSDoc and inline comments - as well as some extra notes on technical issues), and a very broad flow chart diagram of the main logic of the program for my boss to follow. In the end, all parties were satisfied and we have moved on.
No , it is not a reasonable request! TALK HIM OUT OF IT , or have someone else talk him out of it, by all means. That is an irrational idea, which although doable is so expensive to do it should never actually be done. An overview of functions and subroutines is reasonable, but to "explain" every code line is not. It would be more effective for him to learn to read the language in hand, than to do that. The next thing he will be asking for is to translate mathematical formulas, or whatnot into English text. Although certainly possible that introduces much room for error and misinterpretation , and should never be done. Just like "translating" code to English.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106202", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20893/" ] }
106,362
I had a discussion with a coworker today about whether usage of the Java operator instanceof is a kind of reflection. And the discussion quickly evolved into what actually defines reflection. So, what is the definition of reflection? And is the usage of instanceof considered "using reflection"? And in addition, if instanceof is considered reflection, then is polymorphism not also "using reflection"? If not, what is the difference?
This is the definition of reflection according to wikipedia: In computer science, reflection is the process by which a computer program can observe (do type introspection) and modify its own structure and behavior at runtime. I couldn't have said it better myself and highlighted the important part for your question. That said, yes, instanceof is considered using reflection. The program observes its structure and conducts type introspection .
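A small Java sketch of that spectrum, from the simple type check instanceof performs to the fuller structure inspection and manipulation the reflection API offers (the printed output is only indicative):

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        Object value = "hello";

        // Type introspection in its simplest form: ask whether the object is a String.
        if (value instanceof String) {
            System.out.println("length = " + ((String) value).length());
        }

        // Fuller reflection: discover the runtime class and its methods without
        // naming the concrete type anywhere in the source.
        Class<?> runtimeClass = value.getClass();
        System.out.println("class = " + runtimeClass.getName());
        for (Method m : runtimeClass.getDeclaredMethods()) {
            System.out.println("declares method: " + m.getName());
        }

        // Reflection can also act on what it observed, e.g. invoke a method chosen by name at runtime.
        Method toUpper = runtimeClass.getMethod("toUpperCase");
        System.out.println(toUpper.invoke(value));   // prints HELLO
    }
}
```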
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106362", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3666/" ] }
106,473
The dictionary defines artifact as: artefact, artifact [ˈɑːtɪˌfækt] n something made or given shape by man, such as a tool or a work of art, esp an object of archaeological interest anything man-made, such as a spurious experimental result (Life Sciences & Allied Applications / Biology) Cytology a structure seen in tissue after death, fixation, staining, etc., that is not normally present in the living tissue The word artifact often appears in software development, software development cycles, effort estimation, etc. But the above definition doesn't make sense to me in that context. Could someone please explain this word by giving some specific examples from software industry?
In software development life cycle (SDLC), artifact usually refers to "things" that are produced by people involved in the process. Examples would be design documents, data models, workflow diagrams, test matrices and plans, setup scripts, ... like an archaeological site, any thing that is created could be an artifact. In most software development cycles, there's usually a list of specific required artifacts that someone must produce and put on a shared drive or document repository for other people to view and share.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106473", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11216/" ] }
106,501
I feel weird when I'm editing code in an IDE that does not have line numbers in the text editor. The questions I have are: Are line numbers visually excessive, particularly when a find by line function exists in your IDE of choice? What are the uses of showing line numbers?
Displayed line numbers are essential for paired-programming. There is no faster way to direct your pair's eyes to the code you are thinking about. By extension, line-numbers are also extremely useful for code-reviews, both formal and informal.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106501", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6827/" ] }
106,576
When asking an interview candidate to write a program on the whiteboard, do you expect the candidate to write code that is syntactically correct? I had two candidates, one of which wrote a syntactically correct program but the logic was not up to the mark, and the other had the logic better written but the syntax was crap. I favor the first candidate.
I would favor the person who was able to reason through the problem, come up with a good solution, and then explain their solution to me. Even if their logic wasn't 100%, if they were on the right track and were reasoning through the problem, asking the right questions, and going down the right path, that would be my winner. When you are developing code on the job, you have many tools - IDEs, compilers, static analysis, unit tests, integration test, and acceptance test procedures - to find mistakes in syntax and logic. If you're writing on a white board, you don't have these tools and you're bound to make mistakes in syntax (forgetting a method name, a semicolon, a brace), and I can forgive that. My only question to you: Why are you having your candidates write actual code on the whiteboard, instead of focusing on algorithms, design strategies, and logical thinking? Programming languages change, problem solving doesn't.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106576", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35719/" ] }
106,597
There are advantages in holding a daily scrum, like: the team gets coordinated with each other, everyone knows how much of the work has been done, the burndown chart gets more and more complete, the task board is updated, and it doesn't last that long (15 minutes won't kill anybody). However, recently (after 6 months of implementing and using scrum), I feel that our developers don't like the daily scrum that much anymore. People just update the task board, without explaining enough, and it seems that they're bored of it. I see that when, for any reason, we don't hold it, they kind of become extra happy. I just don't know what could be wrong with this. Are there any reasons mentioned somewhere for disadvantages the "daily scrum" can have for a team? What could be the reasons for developers getting tired of the daily scrum?
I had experience participating in a "SCRUM" team with several employers. It appears to me that the managers take out the "daily scrum meeting" as the main point of SCRUM, and set it as the goal, instead of having it for what it is: a mean to achieve more effective development cycle . Very quickly the 15 minutes meetings became 45 minutes meetings, the updates were ineffective because people would be busy yawning and thinking "when can we go already" instead of listening to others, and it would also break people's routines (I, for example, am an owl person, and getting to work at 9AM for this stupid meeting every day is a good enough reason for me to quit the job). When managers take some idea which may be good if applied correctly, and take it to the extreme - they get the exact opposite of the results they expected. I personally think that the more meetings I participate in - the less work I'm doing. I have 2 regular meetings a week in my calendar and I usually skip one of them. Meetings are for managers, leave the developers to do their jobs. I'm sure there will be plenty of SCRUM enthusiasts that will say "But it's so wonderful" - well, save it, I've heard it all.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106597", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
106,601
I have to explain to some students the use of abstract classes and interfaces. As I have a very technical background, I would like to know if you could help me define an easy explanation for juniors. Simple definition: What are the purposes of abstract classes and interfaces, and what is the difference between the two? When is it appropriate to use one instead of the other? Thanks.
I'll see if I can do this with generic terminology without too much hand-waving. An interface is like a contract. It says that a class which implements the interface agrees to implement all of the functions declared (as signatures only; no function definition) by that interface. The class may do so in any way it chooses, and provide any other functionality, as long as it implements each one of the declared functions. An interface is useful when you want to be able to use some common functionality of otherwise unrelated classes- they share no implementation details, only the function signatures. In C#, function declarations within an interface are implicitly pure virtual. An abstract class is a partially defined class that cannot be instantiated . It (usually) includes some implementation, but leaves some functions as pure virtual- declared only by their signature. Pure virtual functions are not defined in the class that declares them, so they must be implemented by a subclass (unless it too is an abstract class). Only a subclass which defines all of the pure virtual functions can be instantiated. The purpose of an abstract class is to define some common behavior that can be inherited by multiple subclasses, without implementing the entire class. In C#, the abstract keyword designates both an abstract class and a pure virtual method. In practical terms, the difference between the two is that an interface defines only pure virtual functions, while an abstract class may also include concrete functions, members, or any other aspect of a class.
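A compact sketch of the same two ideas in Java (the concepts map almost one-to-one to the C# terms used above; the example domain is invented):

```java
// Interface: a pure contract -- only signatures, no state, no implementation
// (ignoring Java 8+ default methods for the sake of the comparison).
interface Payable {
    double amountDue();
}

// Abstract class: partially implemented. It owns common state and behaviour,
// leaves one piece for subclasses to define, and cannot be instantiated itself.
abstract class Employee implements Payable {
    private final String name;

    protected Employee(String name) { this.name = name; }

    public String name() { return name; }

    // The "pure virtual" part: declared here, defined only in concrete subclasses.
    @Override
    public abstract double amountDue();
}

// Concrete subclass: supplies the missing definition, so it can be instantiated.
class SalariedEmployee extends Employee {
    private final double monthlySalary;

    SalariedEmployee(String name, double monthlySalary) {
        super(name);
        this.monthlySalary = monthlySalary;
    }

    @Override
    public double amountDue() { return monthlySalary; }
}

// A completely unrelated class can still honour the same contract,
// which is exactly what an interface is for.
class SupplierInvoice implements Payable {
    private final double total;

    SupplierInvoice(double total) { this.total = total; }

    @Override
    public double amountDue() { return total; }
}
```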
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106601", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30714/" ] }
106,815
What is the difference between idiom and design-pattern? It seems that these terminologies overlap somewhere; where exactly, I don't know. Are they interchangeable? When should I use what? Here is a list of C++ Idioms. Can I call them design patterns? Wikipedia defines, Programming Idiom as a low-level Design Pattern What does it mean? What does "low-level" mean here? This question is inspired from another question : https://stackoverflow.com/questions/7343531/are-the-some-design-patterns-language-dependent
An idiom is an idea to work around the quirks of a language. Some examples that come to mind are any of the C++ idioms you linked in the original question. They solve a common problem in that language in a canned way. A design pattern is similar, in that it solves a common problem. But the ideal design pattern is based on common language features, and thus is language agnostic. There is a continuum between idioms and design patterns, though, just as there is from low-level to high-level languages. The Visitor pattern is a good example; if there were only one language that only supported single dynamic-dispatch, then we might consider the Visitor pattern an idiom of that language. But there are whole hordes of languages that don't directly support multiple-dispatch. Hence, the Visitor pattern was born. The Observer pattern also comes to mind - C# directly supports it, so it doesn't need the common work-around form of the pattern. An example going the other direction is OO features (inheritance, polymorphism, etc). C doesn't directly support them. If more languages were like C, then we might develop design patterns to implement v-tables, type-safety, etc. Since plenty of languages support those feature, we'd call any common solution in C an idiom, rather than calling the generalized solution a design pattern.
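To illustrate the Observer remark above, here is a small, hedged C# sketch of the language's built-in event mechanism; the Button and Clicked names are made up for the example and not taken from any particular framework:

using System;

public class Button
{
    // The event is the "subject"; subscribers are the "observers".
    public event EventHandler Clicked;

    public void SimulateClick()
    {
        // Notify all registered observers, if any.
        var handler = Clicked;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}

public static class Program
{
    public static void Main()
    {
        var button = new Button();

        // Subscribing with += replaces the Attach/Detach/Notify plumbing
        // that a hand-rolled Observer implementation would need.
        button.Clicked += (sender, args) => Console.WriteLine("Button was clicked.");

        button.SimulateClick();
    }
}

In a language without this support, that plumbing is exactly what the generalized Observer pattern supplies.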
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106815", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9020/" ] }
106,884
Most of the logic for my webservice involves talking to our supplier's webservices (checking availability, ordering etc.) They don't have a test environment and the majority of calls can't be run arbitrarily (for example a cease would be run once and would actually stop a service). Is it feasible to run unit tests in this environment? I could simulate typical responses but I'm worried that hardcoding supplier responses would undermine the point of unit tests.
No, it won't. The point of unit tests is precisely to test your code in isolation , independent of the external world. Testing your whole system interacting with external parties like web services etc. is integration/system testing . This is also needed in most to all real world projects, but it is a different level than unit tests. Actually it sounds like in your situation, since you have difficulties in integration testing, you need unit tests even more than usual . As a long term goal, you may consider educating and/or pestering said suppliers to set up a test environment for themselves and their clients. You may need to invoke your management's support for this to succeed though, so get prepared with hard facts and figures to convince them about the business value of a test environment.
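As a rough, hypothetical C# sketch of what "testing in isolation" can look like here (ISupplierService, FakeSupplierService, and OrderProcessor are invented names, standing in for whatever abstraction wraps the real supplier calls):

// Abstraction over the real supplier web service.
public interface ISupplierService
{
    bool IsAvailable(string productCode);
}

// Test double used by unit tests; it never touches the network.
public class FakeSupplierService : ISupplierService
{
    private readonly bool available;

    public FakeSupplierService(bool available)
    {
        this.available = available;
    }

    public bool IsAvailable(string productCode)
    {
        return available;
    }
}

// The logic under test depends only on the abstraction.
public class OrderProcessor
{
    private readonly ISupplierService supplier;

    public OrderProcessor(ISupplierService supplier)
    {
        this.supplier = supplier;
    }

    public string PlaceOrder(string productCode)
    {
        return supplier.IsAvailable(productCode) ? "ordered" : "rejected";
    }
}

A unit test constructs OrderProcessor with the fake and asserts on the result; only integration tests would use an implementation that actually calls the suppliers.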
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106884", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29270/" ] }
106,903
A few years ago I immigrated to another country, and early last year I managed to get my master's degree. Back then I was desperately looking for a job and I was fortunate to be offered a job in a small software development company. For the record, back in my home country, I was a well-known developer with a very good track record, and I used to work as a senior developer and team leader for a software development company. Now, after more than 1 year, for both technical and non-technical reasons I want to leave my current job. I can tolerate some of the technical problems we have, but our manager and team leader constantly disrespect me and communicate with me in an offensive and belittling manner, and time and time again this has made me feel depressed, anxious, stressed, and so on. Now my question is, how should a person in my situation respond to "why are you leaving your current job?". Obviously I cannot say "because my manager is a psycho who is disrespecting me and offending me all the time and I have had enough". How do you think I should respond to questions like this?
You could say that your current job is not challenging enough and you feel that you could do more than your job allows. Therefore, you are looking for new oportunities.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106903", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36301/" ] }
106,909
Possible Duplicate: When do you not give help to less experienced programmers? Currently, I am finding a lot of my day is taken up by people asking domain knowledge/system questions. This impacts personal productivity. Should developers learn to say "no" more to ensure they get asked less questions and be more productive, or should the developers help each other out? Stuff does get documented, but it is often easier to ask a "quick question". How do you handle this sort of scenario?
I've been in this situation before. First of all, don't say no . Never refuse to provide people with answers to their questions. Instead try to guide them into an "appropriate" way of getting those answers. This is what worked for me: If you're not the right person for the question, redirect them instead of answering yourself. This prevents you becoming the "go-to" guy on all questions just because you always answer them. All non-urgent questions must be asked via mail. Refuse to answer people at your desk and tell them to send you an email. When questioned on this policy, explain that this helps you organize your day and improve the quality of the answers to questions. Ensure that documents or wiki entries exist for questions that are asked more than once. When someone asks you a question whose answer is in a document, mail them the document while they're standing right there at your desk. This dovetails nicely with the questions-by-mail policy because you can just forward a previous answer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106909", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36303/" ] }
106,980
I intend on hiring 2-3 junior programmers right out of college. Aside from cash, what is the most important perk for a young programmer? Is it games at work? I want to be creative... I want some good ideas
In my experience, good programmers want to program with as few distractions as possible. Some of these are more relevant to big companies, and I'm not sure where you work, but here are some examples: Casual dress code : Young programmers in particular will have a tough time avoiding resentment of a strict dress code. "I'm just going to sit at my desk all day--why do I need to wear slacks/polos/other uncomfortable business clothes?" In my opinion, this is half rebellion and half honest productivity-seeking: It really is much easier to program in jeans and a t-shirt than slacks and a formal button-down. The question you probably need to ask yourself is if the potential productivity gain and morale boost is worth the potential loss of "professional" atmosphere. It all depends on your situation... there are startups and Fortune 500 companies out there which allow jeans & t-shirts. Few meetings : Almost nothing is more distracting than a constant stream of meetings. Try to avoid team-wide "status meetings" that could be carried out via individual e-mails or conversations. Programmers like it when their employer lets them program. Experienced coworkers : Good programmers want to improve. If any of your other employees have contributed to big open source projects, or have worked individually on some particularly successful internal projects, let your prospectives know! Private offices : This is rarely practical anywhere but venture-capitalized startups, but if you can offer candidates their own offices, they'll leave the interview with hearts in their eyes. Programming is so much easier when you aren't distracted by foot traffic and people singing happy-birthday one cube over. Cool stuff : If you can afford it, subsidize games for lunch breaks and post-work hang out sessions. Best practices : This will ensnare good programmers and intimidate less experienced ones: Show that your candidates will be working with reliable, sane version control, and that there are coding standards about unit tests or inheritance or anything. Organization is important. Don't nickel-and-dime : If you can be flexible with hours, do it! No one likes having to clock out every time they go to the restroom; it feels like you're not being valued as an employee. Dual monitors : Instant win for almost any programmer who's worked with dual monitors before.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/106980", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
107,288
Could spending time (and actively participating) on Programmers.SE and Stack Overflow help me improve my programming skills anywhere close to how much reading a book like Code Complete 2 (which would otherwise be next on my reading list) would? OK, maybe the answer to this question for someone who is just beginning with programming would be a straight no, but I'd like to add that I'm asking this question in the context of a person who is familiar with programming languages but wants to improve his programming skills. I was reading this question on SO, and this book has also been recommended by many others (including Jeff and Joel ). To be more specific, I'd also add that even though I program in C, Java, Python, etc., I'm still not happy with my coding skills, and reading the review of CC2 I realized I still need to improve a lot. So, basically, I want to know what's the best way for me to improve my programming skills - spend more time here/on SO, or continue with CC2 and maybe come here as and when time permits?
No, it is not a substitute, but a perfect complement . I feel a combination of the two holds a lot of power. Why is it that a good lecture teaches you more than just reading a book? Interaction and the ability to ask questions. By just reading a book, some questions might pop up to which you can't find any answers. Look for those questions here, or ask them if they haven't been discussed before.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107288", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14872/" ] }
107,326
A lot of answers on here suggest linking to your Stack Overflow account on your CV. I only have 300 some points over on Stack Overflow, so I don't think I'll be mentioning it just yet. Still, I'm curious to know how much reputation I should get before "boasting" about it on my CV.
In some post on careers.SE, Joel pointed out (edit: see @Atul's answer) that actual rep is not so much what matters. What you should do is add links to answers (or questions) that you think really show your knowledge and skill. In fact, rep has two major problems: It follows some weird group dynamics. There are a lot of uneven amplification effects that distort it even as a measure of the "absolute" value of an answer. Using it as a measure of a whole person basically dumbs it down to measuring something very individual and complex on a linear scale. However, your profile as a whole tells a story. One can see whether you're a specialist or a generalist. One can quickly see your top answers and questions, to see what you master and what intrigues you. Your style of writing reflects on your personality, and so on. I guess the point where you should use your SE profile as part of your CV is the point when you like the story it tells.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107326", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35035/" ] }
107,338
Most projects I am involved with use several open-source components. As a general principle, is it a good idea always to avoid binding all components of the code to the third-party libraries and instead go via an encapsulating wrapper to avoid the pain of change? As an example, most of our PHP projects directly use log4php as a logging framework, i.e. they instantiate via \Logger::getLogger(), they use ->info() or ->warn() methods, etc. In the future, however, a hypothetical logging framework may appear which is better in some way. As it stands, all the projects which closely couple to the log4php method signatures would have to change, in dozens of places, in order to fit the new signatures. This would obviously have a wide impact on the codebase and any change is a potential problem. To future-proof new codebases from this kind of scenario, I often consider (and sometimes implement) a wrapper class to encapsulate the logging functionality and make it easier, though not foolproof, to alter the way in which logging works in future with minimal change; the code calls the wrapper, the wrapper passes the call to the logging framework du jour . Bearing in mind that there are more complicated examples with other libraries, am I over-engineering or is this a wise precaution in most cases? EDIT: More considerations - using dependency injection and test doubles practically requires that we abstract out most APIs anyway ("I want to check my code executes and updates its state, but not write a log comment/access a real database"). Isn't this a decider?
If you only use a small subset of the third party API, it makes sense to write a wrapper - this helps with encapsulation and information hiding, ensuring you don't expose a possibly huge API to your own code. It can also help with making sure that any functionality you don't want to use is "hidden". Another good reason for a wrapper is if you expect to change the third party library. If this is a piece of infrastructure you know you will not change, do not write a wrapper for it.
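The question happens to concern PHP and log4php, but the shape of such a wrapper is language-agnostic. Here is a hedged C# sketch; IAppLogger and ConsoleLogger are invented names, and a real adapter would delegate to whichever logging framework is actually in use rather than to the console:

using System;

// The narrow logging surface the rest of the codebase is allowed to see.
public interface IAppLogger
{
    void Info(string message);
    void Warn(string message);
}

// One concrete adapter; swapping frameworks later means writing another
// class like this one, not touching every call site.
public class ConsoleLogger : IAppLogger
{
    public void Info(string message)
    {
        Console.WriteLine("INFO: " + message);
    }

    public void Warn(string message)
    {
        Console.WriteLine("WARN: " + message);
    }
}

Calling code depends only on IAppLogger, so replacing the underlying framework is confined to a single adapter class.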
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107338", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10737/" ] }
107,361
Suppose I have a class (forgive the contrived example and the bad design of it): class MyProfit { public decimal GetNewYorkRevenue(); public decimal GetNewYorkExpenses(); public decimal GetNewYorkProfit(); public decimal GetMiamiRevenue(); public decimal GetMiamiExpenses(); public decimal GetMiamiProfit(); public bool BothCitiesProfitable(); } (Note the GetxxxRevenue() and GetxxxExpenses() methods have dependencies which are stubbed out) Now I'm unit testing BothCitiesProfitable() which depends on GetNewYorkProfit() and GetMiamiProfit(). Is it okay to stub GetNewYorkProfit() and GetMiamiProfit()? It seems like if I don't then I'm simultaneously testing GetNewYorkProfit() and GetMiamiProfit() along with BothCitiesProfitable(). I'd have to make sure I setup the stubbing for GetxxxRevenue() and GetxxxExpenses() so that the GetxxxProfit() methods return the correct values. So far I've only seen example of stubbing dependencies on external classes not internal methods. And if it is okay, is there a particular pattern I should use to do this? UPDATE I'm concerned that we might be missing the core issue and that is probably the fault of my poor example. The fundamental question is: if a method in a class has a dependency on another publicly exposed method in that same class is it okay (or even recommended) to stub that other method out? Maybe I am missing something, but I'm not sure splitting up the class always makes sense. Perhaps another minimally better example would be: class Person { public string FirstName() public string LastName() public string FullName() } where full name is defined as: public string FullName() { return FirstName() + " " + LastName(); } Is it okay to stub FirstName() and LastName() when testing FullName()?
You should break up the class under question. Each class should do some simple task. If you task is too complicated to test, then the task the class does is too big. Ignoring the goofiness of this design: class NewYork { decimal GetRevenue(); decimal GetExpenses(); decimal GetProfit(); } class Miami { decimal GetRevenue(); decimal GetExpenses(); decimal GetProfit(); } class MyProfit { MyProfit(NewYork new_york, Miami miami); boolean bothProfitable(); } UPDATE The problem with stubbing methods in a class is that you are violating the encapsulation. Your test should be checking to see whether or not the external behaviour of the object matches the specifications. Whatever happens inside the object is none of its business. The fact that FullName uses FirstName and LastName is an implementation detail. Nothing outside of the class should care that that is true. By mocking the public methods in order to test the object, you are making an assumption about that object is implemented. At some point in the future, that assumption may cease to be correct. Perhaps all of the name logic will be relocated to a Name object which Person simply calls. Perhaps FullName will directly access member variables first_name and last_name rather then calling FirstName and LastName. The second question is why you feel the need to do so. After all your person class could be tested something like: Person person = new Person("John", "Doe"); Test.AssertEquals(person.FullName(), "John Doe"); You shouldn't feel the need stub anything for this example. If you do then you are stub-happy and well... stop it! There is no benefit to mocking the methods there because you've got control over what is in them anyways. The only case where it would seem to make sense for the methods FullName uses to be mocked is if somehow FirstName() and LastName() were non-trivial operations. Maybe you are writing one of those random name generators, or FirstName and LastName query the database for answer, or something. But if that's what happening it suggest that object is doing something which doesn't belong in the Person class. Putting it another way, mocking the methods is taking the object and breaking into two pieces. One piece is being mocked while the other piece is being tested. What you are doing is essentially an ad-hoc breaking up of the object. If that's the case, just break up the object already. If your class is simple, you shouldn't feel the need to mock out pieces of it during a test. If your class is complex enough that you feel the need to mock, then you should break the class up into simpler pieces. UPDATE AGAIN The way I see it, an object has external and internal behavior. External behavior includes returns values calls into other objects etc. Obviously, anything in that category should be tested. (otherwise what would you test?) But internal behavior shouldn't really be tested. Now the internal behavior is tested, because it is what results in the external behavior. But I don't write tests directly on the internal behavior, only indirectly through the external behavior. If I want to test something, I figure it should be moved so that it becomes external behavior. That's why I think if you want to mock something, you should split the object so that the thing you want to mock is now in the external behavior of the objects in question. But, what difference does it make? If FirstName() and LastName() are members of another object does it really change the issue of FullName()? 
If we decide that it is necessary to mock FirstName and LastName, does it actually help for them to be on another object? I think if you use your mocking approach, then you create a seam in the object. You have functions like FirstName() and LastName() which directly communicate with an external data source. You also have FullName() which does not. But since they are all in the same class, that is not apparent. Some pieces aren't supposed to directly access the data source and others are. Your code will be clearer if you just break out those two groups. EDIT Let's take a step back and ask: why do we mock objects when we test? Make the tests run consistently (avoid accessing things which change from run to run) Avoid accessing expensive resources (don't hit third-party services, etc.) Simplify the system under test Make it easier to test all possible scenarios (i.e. things like simulating failure, etc.) Avoid depending on the details of other pieces of code so that changes in those other pieces of code won't break this test. Now, I think reasons 1-4 don't apply to this scenario. Mocking the external source when testing FullName takes care of all of those reasons for mocking. The only piece not handled there is simplicity, but it seems the object is simple enough that that's not a concern. I think your concern is reason number 5. The concern is that at some point in the future, changing the implementation of FirstName and LastName will break the test. In the future FirstName and LastName may get the names from a different location or source. But FullName will probably always be FirstName() + " " + LastName() . That's why you want to test FullName by mocking FirstName and LastName. What you have, then, is some subset of the person object which is more likely to change than the others. The rest of the object uses this subset. That subset currently fetches its data using one source, but may fetch that data in a completely different way at a later date. But to me it sounds like that subset is a distinct object trying to get out. It seems to me that if you mock the object's methods you are splitting the object up. But you are doing so in an ad-hoc manner. Your code does not make it clear that there are two distinct pieces inside your Person object. So simply split that object in your actual code, so that it is clear from reading your code what is going on. Pick the actual split of the object that makes sense and don't try to split the object up differently for each test. I suspect you may object to splitting up your object, but why? EDIT I was wrong. You should split objects rather than introducing ad-hoc splits by mocking individual methods. However, I was overly focused on one method of splitting objects; OO provides multiple methods of splitting an object. What I'd propose (in C#): abstract class PersonBase { public abstract string FirstName(); public abstract string LastName(); public string FullName() { return FirstName() + " " + LastName(); } } class Person : PersonBase { public override string FirstName() { /* fetch from the real data source */ } public override string LastName() { /* fetch from the real data source */ } } class FakePerson : PersonBase { private string firstName; private string lastName; public void SetFirstName(string value) { firstName = value; } public void SetLastName(string value) { lastName = value; } public override string FirstName() { return firstName; } public override string LastName() { return lastName; } } Maybe that is what you were doing all along. But I don't think this method will have the problems I saw with mocking methods, because we've clearly delineated which side each method is on. And by using inheritance, we avoid the awkwardness that would arise if we used an additional wrapper object.
This does introduce some complexity, and for only a couple of utility functions I'd probably just test them by mocking the underlying 3rd-party source. Sure, they are at an increased risk of breaking, but it's not worth rearranging. If you've got a complex enough object that you need to split it, then I think something like this is a good idea.
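For completeness, a hypothetical test using the FakePerson class from the sketch above might look like the following; plain C# assertions are used rather than committing to any particular test framework:

using System;

public static class FullNameTest
{
    public static void Main()
    {
        // Arrange: a fake Person that never touches the real data source.
        var person = new FakePerson();
        person.SetFirstName("John");
        person.SetLastName("Doe");

        // Act and assert: FullName() is exercised through the shared base class code.
        if (person.FullName() != "John Doe")
        {
            throw new Exception("FullName did not combine the first and last name as expected.");
        }

        Console.WriteLine("FullName test passed.");
    }
}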
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107361", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16411/" ] }
107,368
IronScheme is mentioned on Wikipedia as a successor to a failed project called IronLisp, bringing Lisp to CLR and .NET, the way Clojure does for the JVM. Does anyone have experience with this language? It looks fairly complete (99%) but I'm not sure how to judge whether it's worth my time to fiddle with getting it set up or not. By stable or complete, I mean using it for actual projects rather than just fiddling with tools and Project Euler style problems.
I am the author of IronScheme. I am not really sure how to answer your question, but will try :) IronScheme firstly tries to implement Scheme (R6RS specifically), with the secondary objective being CLR interoperability. Compared to Clojure (focusing on the their bad points), IronScheme won't: give you CLR runtime exceptions; IronScheme uses Scheme's exception handling give you 'infinite' stacktraces; IronScheme is properly tail recursive be hard to setup; just extract to directory and go take long to start up; IronScheme (when ngen'd) only takes 0.1 seconds to start the REPL be ambiguous; IronScheme implements a standardized specification Unfortunately where Clojure wins is: Documentation Frameworks and libraries User community This is worrying for IronScheme, as the last 3 mentioned is very much a chicken-egg scenario. Personally, I tend to only create libraries when I need them, and with a very tiny user community, there is not much contribution from users besides bug reports. I would love a bigger user community. As for support, I normally help users as fast I can. This evidence can be seen from my response times on the IronScheme discussion boards. Also, bugs are normally fixed as soon as they have been identified. As for stability, the codebase is pretty mature, and currently only bug fixes and optimizations are the only code additions. As for usability, if you are familiar with the .NET framework, you can do pretty much anything with IronScheme as you can with any other .NET language; it may be harder or easier depending on how much you are willing to abstract into more Scheme-like idioms. Things are very easy to write in IronScheme; for example my entire MVC framework is barely 400 lines of Scheme code, thanks to tapping into ASP.NET (I certainly do not like re-inventing the wheel). Feel free to ask for clarifications if the answer is not enough. Demian does make good points in terms of maintainability too. Regards leppie
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107368", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
107,416
Newer systems such as OpenCL are being made so that we can run more and more code on our graphics processors, which makes sense, because we should be able to utilise as much of the power in our systems as possible. However, with all of these new systems, it seems as if GPUs are better than CPUs in every way . Because GPUs can do parallel calculation, multi-core GPUs actually seem like they'd be much better than multi-core CPUs; you'd be able to do many calculations at once and really improve speed. Are there still certain cases where serial processing is still better, faster, and/or more efficient than parallel?
However, with all of these new systems, it seems as if GPUs are better than CPUs in every way. This is a fundamental misunderstanding. Present GPU cores are still limited compared to current top-line CPUs. I think NVIDIA's Fermi architecture is the most powerful GPU currently available. It has only 32-bit registers for integer arithmetic, and less capability for branch prediction and speculative execution than a current commodity Intel processor. Intel i7 chips provide three levels of caching, while Fermi cores only have two, and each cache on the Fermi is smaller than the corresponding cache on the i7. Interprocess communication between the GPU cores is fairly limited, and your calculations have to be structured to accommodate that limitation (the cores are ganged into blocks, and communication between cores in a block is relatively fast, but communication between blocks is slow). A significant limitation of current GPUs is that the cores all have to be running the same code. Unlike the cores in your CPU, you can't tell one GPU core to run your email client, and another core to run your web server. You give the GPU the function to invert a matrix, and all the cores run that function on different bits of data. The processors on the GPU live in an isolated world. They can control the display, but they have no access to the disk, the network, or the keyboard. Access to the GPU system has substantial overhead costs. The GPU has its own memory, so your calculations will be limited to the amount of memory on the GPU card. Transferring data between the GPU memory and main memory is relatively expensive. Pragmatically, this means that there is no benefit in handing a handful of short calculations from the CPU to the GPU, because the setup and teardown costs will swamp the time required to do the calculation. The bottom line is that GPUs are useful when you have many (as in hundreds or thousands) of copies of a long calculation that can be calculated in parallel. Typical tasks for which this is common are scientific computing, video encoding, and image rendering. For an application like a text editor, the only function where a GPU might be useful is in rendering the type on the screen.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107416", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36436/" ] }
107,450
Well, I basically understand how to use pointers, but not how best to use them in order to do better programming. What are good projects or problems to resolve involving the use of pointers so I can understand them better?
Manipulating large amounts of data in memory is where pointers really shine. Passing a large object by reference is equivalent to just passing along a plain old number. You can manipulate the needed parts directly as opposed to copying an object, altering it, then passing back the copy to be put in place of the original.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107450", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30815/" ] }
107,503
At a web startup, is it more common to have an engineer working the front-end AND back-end of the feature (basically in charge of the whole feature)? Or have engineers separated between the back-end and front-end? Which ones are more beneficial and for what situations? The downside, I've noticed, regarding having one engineer in charge of the whole feature is that the person might only be particularly strong in either frontend or backend development but not both, thus sometimes having a decrease in speed and quality. Having frontend and backend developers on one feature increase the speed of the feature and the quality AND it encourages collaboration. But I am concerned about having 2 engineers working on one feature which may be a poor use of resources since the 1 engineer can be placed on another feature to work on. What is the common/best practice for allocation backend/frontend engineering resources at a small early-stage startup? And then how will it change as it grows?
Here is my wisdom from 14 years of experience: If you have a startup, don't assign roles. Better to hope that you assembled a good self-organizing team. If everybody knows each other, everybody knows who does what best. A project manager will only stand in the way. Later on, the distinction between front-end and back-end makes sense. In the back-end, quality is priority one. Code has to be performant, secure, and transaction-safe. In the front-end, implementation time matters. And you must be able to rely on a good back-end. The different goals of front-end and back-end don't work well together. The back-end should already exist before the front-end coder starts to work. Otherwise, the front-end coder will be slowed down too much. The back-end has to be able to react quickly to front-end requirements in order not to slow them down.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107503", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36465/" ] }
107,508
I will be mentoring a team of high school students for the FIRST Robotics Competition; most teams here develop their robot software using C++. For many of the students on the team this will be their first introduction to programming. I wouldn't have chosen C++ for teaching programming to high schoolers (e.g. Python or Javascript would have been easier, I think), but the choice is set. I want to teach them proper C++ (i.e. avoid a mixed C/C++ dialect, i.e. C+) but I don't want to scare them with needless complexity either. For that matter: Should I start using the STL from day one, esp. vector , or just stick with standard arrays? Arrays are easier to introduce but the pointer errors may be harder to catch. For I/O, should I stick to cout , etc. or do you think printf would be easier to learn? Are there any online resources for C++ that are suitable to use for such young learners? Thanks! EDIT : Thanks for so many excellent answers. In addition to Accelerated C++ , which is suggested by many people, I have found that C++ For Everyone is an excellent text.
I think you should start with the data types that the language has built in like arrays and pointers, and when your students comprehend those, move on to classes and OO, then the STL. The reason is that you can teach people to understand arrays without understanding much else besides variables and the underlying computer architecture, but you can't teach them to understand vector without teaching them classes first. If you use the STL from the get go, your students will have to just live with not having a clue about how vector works exactly. And then when you get to that point, they won't have a good enough grasp of pointers and arrays and things that you get from doing stuff like writing your own vector class, writing your own linked list class, etc. that will be necessary to appreciate and exploit its features. It annoys me when students say "what's that?" and teachers say "just ignore it, you'll learn it later." And as Demian pointed out in the comments, deciphering the relatively cryptic messages you get from template errors is significantly harder than understanding the errors you may get from arrays/non-template constructs. I do not feel the same way about cout and printf . Neither is any lower-level than the other, except that cout uses operator overloading. This may seem stupid, but I am absolutely fanatic about having people understand the very basic building blocks of everything before moving on to the abstractions. You shouldn't use smart pointers until you're proficient with raw pointers, no vectors before arrays, that sort of thing. I often say this, but I'll say it again: it's better to teach students long division first and then let them use a calculator than to let them use a calculator then teach them long division afterwards. As for books to teach beginners, see the master list of good C++ books .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107508", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36469/" ] }
107,572
We have been working with a shopping cart for DotNetNuke, and have had endless problems with the developer's releases of their product. Every release fixes one thing but new bugs pop up elsewhere. I know that bugs are inevitable and that we cannot squash all of them at the time, but can someone please tell me what percentage of bugs should be stamped out before a product can be accepted as a stable release?
I don’t think it’s a matter of percentage. Each bug has to be evaluated on its own to decide whether it’s a show-stopper for the particular project in question, based on the likely cost of the bug if not fixed before release, and the likely cost of delaying release until the bug can be fixed. (For example, Stack Overflow spent quite a while with a notification icon that didn’t display properly in Chrome. The costs of that bug were so low that it could happily be left for several weeks, as the team had bigger issues to focus on.) And remember that “bugs” is just shorthand for “known bugs”. You could fix 100% of your known bugs and still not be in shape for launch, because you haven’t tested well enough.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107572", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36497/" ] }
107,669
Consider a parameterless ( edit: not necessarily) function that performs a single line of code, and is called only once in the program (though it is not impossible that it'll be needed again in the future). It could perform a query, check some values, do something involving regex... anything obscure or "hacky". The rationale behind this would be to avoid hardly-readable evaluations: if (getCondition()) { // do stuff } where getCondition() is the one-line function. My question is simply: is this a good practice? It seems alright to me but I don't know about the long term...
Depends on that one line. If the line is readable and concise by itself, the function may not be needed. Simplistic example: void printNewLine() { System.out.println(); } OTOH, if the function gives a good name to a line of code containing e.g. a complex, hard to read expression, it is perfectly justified (to me). Contrived example (broken into multiple lines for readability here): boolean isTaxPayerEligibleForTaxRefund() { return taxPayer.isFemale() && (taxPayer.getNumberOfChildren() > 2 || (taxPayer.getAge() > 50 && taxPayer.getEmployer().isNonProfit())); }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107669", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24071/" ] }
107,687
I've been messing around with functional programming languages for a few years, and I keep encountering this phrase. For example, it is a chapter of "The Little Schemer, which certainly predates the blog by this name. (No, the chapter doesn't help answer my question.) I understand what lambda means, the idea of an anonymous function is both simple and powerful, but I fail to understand what "the ultimate" means in this context. Places that I've seen this phrase: The title of chapter 8 of The Little Schemer A blog: http://lambda-the-ultimate.org/ A series of "Lambda the ultimate X" papers: http://library.readscheme.org/page1.html ( archived ) I feel like I'm missing a reference here, can anyone help?
Yes, it's simply a recurring phrase in the title of several papers, starting from a couple in the 70s, in which Sussman and Steele demonstrate the use of lambda calculus for programming, by means of a minimalist Lisp dialect named " Scheme " they devised for the purpose. You can find the papers themselves here ; they're interesting and surprisingly relevant. I'm not sure if this is ever explicitly stated, but it's clear (from context, having read the papers, and knowing the general background and research interests of the authors) that the phrase is simply a catchy slogan for their contention that lambda abstractions, as a computational primitive, are not only universal in the formal sense (of being able to encode any program in some fashion, however awkward), but universal in a practical sense that any and every construct present in other languages, even those that are baked-in from the ground up, can be reimplemented in a lambda-based language in a way that is both effective and natural to use. The repeated phrase leads to the obvious generalized form "for all X, lambda is the ultimate X", which is the sense I've generally taken "Lambda the Ultimate" to mean as the blog name, noting that LtU is concerned with programming language design and theory. Ironically, LtU would probably also be one of the best places to find someone who could tell you about something for which lambda is not the ultimate implementation. :] Note also that Sussman is one of the authors of SICP , a very influential textbook that also uses the Scheme language and spends a fair amount of time introducing lambda abstractions as a concept.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107687", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ] }
107,723
I just discovered some lovely code in our company's app that uses Try-Catch blocks as logical operators. Meaning, "do some code; if that throws this error, do this code, but if that throws that error, do this third thing instead". It appears to use "Finally" as the "else" statement. I know that this is inherently wrong, but before I go picking a fight I was hoping for some well-thought-out arguments. And hey, if you have arguments FOR the use of Try-Catch in this manner, please do tell. For any who are wondering, the language is C#, and the code in question is about 30+ lines and looks for specific exceptions; it is not handling ALL exceptions.
Exception handling tends to be an expensive way to handle flow control (certainly for C# and Java). The runtime does quite a lot of work when an exception object is constructed - getting the stack trace together, figuring out where the exception is handled, and more. All this costs memory and CPU resources that do not need to be expended if flow control statements are used for flow control. Additionally, there is a semantic issue. Exceptions are for exceptional situations, not for normal flow control. One should use exception handling for handling unanticipated/exceptional situations, not as normal program flow, because otherwise, an uncaught exception will tell you much less. Apart from these two, there is the matter of others reading the code. Using exceptions in such a manner is not something most programmers will expect, so readability and comprehensibility suffer. When one sees "Exception", one thinks - something bad happened, something that is not supposed to happen normally. So, using exceptions in this manner is just plain confusing.
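To make the cost and readability points concrete, here is a small illustrative C# comparison; the parsing scenario is arbitrary and only stands in for the kind of branching the question describes:

using System;

public static class ParsingExample
{
    // Flow control via exceptions: it works, but it is slower and hides the intent.
    public static int ParseWithExceptions(string input)
    {
        try
        {
            return int.Parse(input);
        }
        catch (FormatException)
        {
            return 0; // an "else" branch disguised as error handling
        }
    }

    // Ordinary flow control: the "is this valid?" question is explicit.
    public static int ParseWithConditional(string input)
    {
        int value;
        return int.TryParse(input, out value) ? value : 0;
    }
}

The second version states its intent directly and never pays for stack-trace construction on the expected "invalid input" path.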
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107723", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7274/" ] }
107,800
In another question , I asked about why developers might not like the daily scrum . We talked to the developers and decided not to hold the daily scrum for a while (to give our customized scrum a try in our first attempt). This is the result of consulting with the developers directly. On the other hand, we don't want to lose the good parts of the daily scrum, like getting a chance to coordinate developers every day, or watching the work progress like a Key Performance Indicator, so we can take action early. As an alternative to the daily scrum, we're thinking about asking developers to provide daily reports with the following conditions: No need to follow any specific format. Each and every format is accepted. Even if the work is not done, we want to hear the amount of progress. There is no need to mention the time spent on each task. Development obstacles and coordination requirements should be mentioned. There is no need to be obsessed with daily reports. It's not taken that strictly. Do you think that this can decrease their productivity? Have you had any experience with daily reports? Do you have any suggestions for us, so that we can make sure that we're not micromanaging ?
As an alternative to daily scrum, we're thinking about asking developers to provide daily reports with the following conditions: What a terrible idea. Do you think that this can decrease their productivity? Yes. Why? A verbal presentation at a meeting combines writing and n people "reading" the report into one concurrent activity. Talking plus Listening. Over and done with. Questions answered right away. Writing a report is a waste of time because there will be questions and you'll have to review the report with folks who (a) have questions and (b) didn't really read it. Daily reports, won't get read. They rapidly devolve to in-box-noise. "There is no need to be obsessed with daily reports". In which case, why do them? Do you have any suggestion for us, so that we can get sure that we're not micromanaging? Yes. Have a daily stand-up. It takes a few minutes and you're done. If your daily stand-up takes more than a few (15?) minutes, you're sharing way too much detail and need to schedule separate meetings for those details. Daily stand-ups are easy to do. After a 2-minute summary, everything else is probably details, not for the whole team, and needs to be pushed into a follow-up meeting. The meeting moves on to the next person's focus for the day.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107800", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
107,841
Almost every article I can find about recursion includes the examples of factorial or Fibonacci Numbers, which are: Math Useless in real life Are there some interesting non-math code examples to teach recursion? I'm thinking divide-and-conquer algorithms but they usually involve complex data structures.
Directory / File structures are the best example of a use for recursion, because everyone understands them before you start, but anything involving tree-like structures will do. void GetAllFilePaths(DirectoryInfo dir, List<string> paths) { foreach (FileInfo file in dir.GetFiles()) { paths.Add(file.FullName); } foreach (DirectoryInfo subdir in dir.GetDirectories()) { GetAllFilePaths(subdir, paths); } } List<string> GetAllFilePaths(DirectoryInfo dir) { List<string> paths = new List<string>(); GetAllFilePaths(dir, paths); return paths; }
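A hedged, self-contained usage sketch of the same idea follows; the starting path is arbitrary, and on some directories the walk may hit access-denied errors, which the sketch does not handle:

using System;
using System.Collections.Generic;
using System.IO;

public static class FilePathDemo
{
    public static void Main()
    {
        // Any starting directory will do; "." (the current directory) is only an example.
        var root = new DirectoryInfo(".");

        foreach (string path in GetAllFilePaths(root))
        {
            Console.WriteLine(path);
        }
    }

    // Same recursive walk as in the answer above, repeated here so the sample compiles on its own.
    private static void GetAllFilePaths(DirectoryInfo dir, List<string> paths)
    {
        foreach (FileInfo file in dir.GetFiles())
        {
            paths.Add(file.FullName);
        }
        foreach (DirectoryInfo subdir in dir.GetDirectories())
        {
            GetAllFilePaths(subdir, paths);
        }
    }

    private static List<string> GetAllFilePaths(DirectoryInfo dir)
    {
        var paths = new List<string>();
        GetAllFilePaths(dir, paths);
        return paths;
    }
}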
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107841", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25962/" ] }
107,883
AGPL is a fairly new license that was meant to go GPL-over-networks. However, not being a lawyer, and actually not having read the whole license, I can't understand what exactly you can do freely and what not with AGPL. My uncertainty is fed by this post about MongoDB (which is AGPL) and even more by the comments below. If we follow the comments it turns out that you can use AGPL libraries with your closed-source, commercial server-side software, as long as you don't modify the library. Is that the case? Or you have to distribute your entire application when you use an AGPL licensed library? The case with MongoDB is that it uses Apache license for the client code, which poses another question. What happens if you use AGPL software, but deploy it as a different application that your closed-source commercial one? For example, take iText - it is an AGPL library: if you use it and modify it, do you have to open-source your entire application or you have to redistribute only the changes in iText? if you use it and don't modify it, do you have to open-source your entire application? If you wrap iText in another application that you start as a separate process, but use it from your main application, should you open-source everything, or just the wrapper application? (The wrapper application will be HTTP-based API that will take pdf files and will return the results of using iText as JSON). Can this be used to circumvent the AGPL license? Note: The question is about AGPLv3
The AGPL is based on the GPL, not the LGPL. It does not contain any linking exceptions, and any work using AGPL code (linked or otherwise, modified or not) must also be AGPL licensed and distributed. Using separate processes can circumvent the (A)GPL, but this is murky ground. If your end application depends on the external process, such that it wouldn't function properly without it, then it would be considered a derived work of the AGPL software. In most cases where people use separate GPL applications in closed source programs, they provide the GPL work as an optional extension, or an alternative back-end to some other piece of code etc. The (A)GPL work cannot be distributed alongside the final application even as a separate app (eg, putting them into the same archive or repository), although it's fine to provide instructions on where to find the GPL work and how to use it with your app.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107883", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36596/" ] }
107,884
Till recently my development workflow was the following: Get the feature from product owner Make a branch (if feature is more than 1 day) Implement it in a branch Merge changes from main branch to my branch (to reduce conflicts during backward merging) Merge my branch back to main branch Sometimes there were problems with merging, but in general I liked it. But recently I see more and more followers of idea to not make branches as it makes more difficult to practice continuous integration, continuous delivery, etc. And it sounds especially funny from people with distributed VCS background who were talking so much about great merging implementations of Git, Mercurial, etc. So the question is should we use branches nowadays?
Unless you are all working out of the same working tree, you are using branches, whether you call them that or not. Every time a developer checks out into his working tree, he creates a separate local branch of development, and every time he checks in he does a merge. For most teams, the question isn't if you use branches, the questions are how many and for what purpose ? The only way to do truly "continuous" integration is for everyone to work out of the same working tree. That way, you immediately know if your changes adversely impact someone else's. Obviously, that's untenable. You need a certain degree of isolation in a branch in order to accomplish anything, even if that "branch" is just your local working directory. What's needed is a proper balance of integration and isolation. In my experience, using more branches improves the degree of integration, because the integration is done with precisely the people it needs to be done, and everyone else can more easily isolate non-related problems as required. For example, I spent the last day tracking down three recently introduced integration-related bugs in our build that were blocking my "real" work. Having done my due diligence in reporting these bugs to the people who need to fix them, am I now just supposed to wait until they are finished to continue my work? Of course not. I created a temporary local branch that reverts those changes so I can have a stable baseline to work against while still receiving the latest changes from upstream. Without the ability to make a new branch for that purpose, I would be reduced to one of three options: either revert the changes in the central repo, manually maintain the patches that revert them in my working tree and try not to accidentally check them in, or back out to a version before those bugs were introduced. The first option is likely to break some other dependency. The second option is a lot of work, so most people choose the third option, which essentially prevents you from doing more integration work until the previously found bugs are fixed. My example used a private local branch, but the same principle applies to shared branches. If I share my branch, then maybe 5 other people are able to continue on with their primary tasks instead of performing redundant integration work, thus in aggregate more useful integration work is performed. The issue with branching and continuous integration isn't how many branches you have, it's how frequently you merge them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107884", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7369/" ] }
107,889
When should you put Key/Value type of data in it's own class instead of using a pre-built generic structure, such as a KeyValuePair or a Tuple ? For example, most ComboBoxes I create contain a DisplayName and a Value. This is the kind of data I am trying to decide when to put in a new class, and when to just use a KeyValuePair. I am currently working on something that uses iCalendar , and the selected user's data ultimately gets combined into a key1=value1;key2=value2; type of string. I started out by putting the data in a KeyValuePair<string,string> , but now I am wondering if that should be it's own class instead. Overall, I am interested in finding out what guidelines are used when deciding to use an existing structure/class like a KeyValuePair over a 2-property object, and in what kind of situations you would use one over another.
I'd generally use an object rather than a KeyValuePair or Tuple in most cases. First, when you come in 6 months later to make changes, it is a lot easier to figure out what your intent was earlier rather than wondering what Tuple t is and why it has those funny values. Second, as things grow and change you can easily give your simple data transfer objects behavior as required. Need to have two name formats? Easy, just add appropriate ToString() overloads. Need it to implement some interface? No problem. Finally, there really is nearly zero overhead to creating a simple object, especially with automatic properties and code completion. Bonus pro tip: if you want to keep these objects from polluting your namespace, making private classes nested inside other classes is a great way to keep things under wraps and prevent strange dependencies.
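As a small illustration of the points above (the ComboItem name and its two properties are made up for this example):

// A tiny purpose-built class instead of KeyValuePair<string, string>.
public class ComboItem
{
    public string DisplayName { get; set; }
    public string Value { get; set; }

    // Many UI controls call ToString() to render list items,
    // so the display format lives in exactly one place.
    public override string ToString()
    {
        return DisplayName;
    }
}

Six months later, ComboItem still says what it is, while a KeyValuePair<string, string> would leave the reader guessing which half is which.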
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107889", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1130/" ] }
107,917
In your experience, how long should a Sprint Planning meeting (Scrum) last? 8 hours? Or should it be shorter (succinct) and further discussions should be planned as part of the sprint? Our Sprints are 10 days long.
According to the Scrum Guide : The Sprint Planning Meeting is time-boxed to eight hours for a one-month Sprint. For shorter Sprints, the event is proportionately shorter. For example, two-week Sprints have four-hour Sprint Planning Meetings. That generally works for me.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107917", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32892/" ] }
107,976
What are the Big and the O in Big O notation? I've read the definitions, and they don't tell what the O, pronounced as 'oh', is. For example, I understand that O (n) is the complexity of a linear algorithm, where n could be the number of operations. But what is the O ?
Well, my guess would be order, which coincides with Wikipedia . Edit: my own (any improvements appreciated) translation from the German Wikipedia article: The capital letter O (actually a capital omicron at the time), as a symbol for the order of (German: "Ordnung von"), was first used by the German number theorist Paul Bachmann in the second volume of his book on analytic number theory, which appeared in 1894. The notation gained popularity due to the work of Edmund Landau, another German number theorist, with whom this nomenclature is widely associated today, especially in German terminology.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/107976", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36620/" ] }
108,019
There is a new developer in our team. An agile methodology is in use at our company. But the developer has another experience: he considers that particular parts of the code must be assigned to particular developers. So if one developer had created a program procedure or module it would be considered normal that all changes of the procedure/module would be made by him only. On the plus side, supposedly with the proposed approach we save common development time, because each developer knows his part of the code well and makes fixes fast. The downside is that developers don't know the system entirely. Do you think the approach will work well for a medium size system (development of a social network site)?
It's an awful idea . It may be quicker in the short term, but it encourages badly documented hard-to-understand code as only the coder who wrote it is responsible for maintaining it. When someone leaves the company or goes on holiday the whole plan becomes mucked up. It also makes it very hard to allocate workloads; what happens when two urgent bugs come up with the code one coder "owns"? You should code as a team . People will naturally be allocated tasks and focus on certain areas but sharing workload and working together should be encouraged, not discouraged.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108019", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26264/" ] }
108,086
My friend has 15 years of programming experience and a Ph.D. in mathematics. He also has cerebral palsy with a speech impairment. Because of his handicap, he chose to become a software developer after his Ph.D. As far as I can see, he is still an excellent C# developer. Nowadays, however, he has a hard time finding a job because most developer jobs require good communication skills. Looking at him struggle so much, do I have to advise him that the software industry is not suitable for him any more? It would be extremely difficult for me to say that to my friend, but I think it would be better than letting him waste his time. What do you think? Update: Thanks a lot for your excellent answers. I can see most answers recommend against my advice, and I really, really hope you are right. In reality, however, he has been rejected in 100 or so phone interviews. That's why I would rather be a potentially bad adviser than a politically correct friend.
What do you think? I think that any team can take on a person with a speech impairment with a positive net effect. I would consider the inability to accommodate one such person in a team as either a management or an ethical failure. Just take a look around. Both PhD and college graduates, good and mediocre programmers alike, have problems communicating their thoughts. But a lot of what we call "communication problems" arises from the inefficiency of development processes. If a person has to "communicate" (i.e. "chat") with his peers a lot to keep the software development process moving forward without failures, then there's something wrong with the process. Efficient communication is not eloquent blah-blah-blah or an ability to make small talk easily. It's conveying information to others in a precise and concise manner. How well a person pronounces words is not the most important part of it; more important is how well he or she thinks. Moreover, I'm sure that a person with a speech impairment knows the price of each word, and wouldn't toss out a lot of them. Isn't that what they call "communication skills"? No, I can't actually give you any advice (edited). Personally, I would prefer a badly speaking peer to a less clever one, because I'm sure that the effort put into setting up a way to communicate with fewer words would pay the team back. But I have never had to make a hiring decision in my life, and the world may just be unnecessarily harsh sometimes...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108086", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32800/" ] }
108,124
For about 4 years now I have been using, hearing, talking about, and implementing hash tables and hash functions, but I never really understood why it's called hash. I remember that in my first days of programming this term was a kind of cumbersome piece of terminology to me. I never figured out what it is, based on its name. I just experimentally understood what it does and why and when we should use it. However, I still sometimes try to figure out why it's called hash. I have no problem with table or function, and to be honest they are pretty self-explanatory, rational terms. However, I think better words could be used instead of hash, like key or uniqueness; wouldn't key table or uniqueness table make more sense? According to my dictionary, hash means: a fried dish of potato and meat (highly irrelevant); the # symbol (AKA number sign, pound sign, etc.) (still irrelevant, maybe just a misnomer); to apply an algorithm to a character string (still has nothing to do with uniqueness, which is the most important feature of a hash table); to cut food; another term for hashish. Does anyone know why it's called hash?
According to wikipedia, it refers to the hash function . If you want to go a step further, the wiki page for hash function says that the use of the word "hash" in hash function originated like so: The term "hash" comes by way of analogy with its non-technical meaning, to "chop and mix". Indeed, typical hash functions, like the mod operation, "chop" the input domain into many sub-domains that get "mixed" into the output range to improve the uniformity of the key distribution.
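As a rough illustration of that "chop and mix" idea, here is a toy hash sketch in C#; it is not any particular library's hash function, just a minimal example of mixing the characters and then chopping the result into a table's range with mod.

    using System;

    public static class ToyHash
    {
        // Deliberately simple: "mix" the characters of the key together,
        // then "chop" the mixed value into one of `buckets` slots.
        public static int Hash(string key, int buckets)
        {
            unchecked
            {
                int h = 17;
                foreach (char c in key)
                {
                    h = h * 31 + c;        // mix
                }
                int slot = h % buckets;    // chop into the table's range
                return slot < 0 ? slot + buckets : slot;
            }
        }

        public static void Main()
        {
            Console.WriteLine(Hash("hello", 16)); // some slot between 0 and 15
        }
    }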
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108124", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
108,133
I personally stay awake late at night, coding and enjoying working on personal projects. My other colleagues also feel the same and like coding at night. However, it's not about being passionate about personal hobbies, rather, I really feel that I'm more productive at night. I think that there is something about night, maybe its darkness, maybe its silence, maybe another attribute that makes developers become more productive. Is there some truth to this? Why do some developers believe that they are more productive at night? Is there any scientific proof to justify this proposition? Maybe something like "in night, monitor light is less harmful" or "the natural air in night has more oxygen, thus is more suitable for thinking process", or anything like that. Moderator Note: The question is asking for scientific proof and otherwise cited information on this subject. Answers that do not provide supporting references will be removed. This is not a poll where you should share when you wake up and what parts of the day you personally are productive.
As pointed out in a comment by SK-Logic, there is some scientific evidence to back this up. From wikipedia's article on night owls : Researchers have found that 'differences in a fundamental property of the circadian timing system , its intrinsic period, will determine whether someone is an early bird, who awakens before dawn or a night owl, who tends to stay up late at night but sleeps in late'. This is an indication that some people would prefer to work at night. This interesting paper studies the productivity of a programmer over time . The sequence of phases is: euphoric, productive, irreplaceable, resentful, bored, and unproductive. Overall productivity is characterized by an initial six month period of intense interest , at which time productivity rates are often an order of magnitude higher than the oft-quoted 500 LOC/month average. After a short period of volatility, the programmer then enters a prolonged phase of steadily dwindling interest, resulting in productivity rates that mimic the average. Taking this into account, and considering that a programmer usually works on individual projects at night, a simple reason could be that it's this 'euphoric' drive for short-term individual projects that makes them productive, causing the desire to stay awake and continue work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108133", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
108,240
I have been studying and coding in C# for some time now, but I still can't figure out the usefulness of interfaces. They bring too little to the table. Other than providing function signatures, they do nothing. If I can remember the names and signatures of the functions that need to be implemented, there is no need for them. They are there just to make sure that the said functions (in the interface) are implemented in the inheriting class. C# is a great language, but sometimes it gives you the feeling that first Microsoft creates the problem (not allowing multiple inheritance) and then provides the solution, which is rather a tedious one. That's my understanding, which is based on limited coding experience. What's your take on interfaces? How often do you make use of them, and what makes you do so?
They are there just to make sure that the said functions (in the interface) are implemented in the inheriting class. Correct. That's a sufficiently awesome benefit to justify the feature. As others have said, an interface is a contractual obligation to implement certain methods, properties and events. The compelling benefit of a statically typed language is that the compiler can verify that a contract which your code relies upon is actually met. That said, interfaces are a fairly weak way to represent contractual obligations. If you want a stronger and more flexible way to represent contractual obligations, look into the Code Contracts feature that shipped with the last version of Visual Studio. C# is a great language, but sometime it gives you the feeling that first Microsoft creates the problem(not allowing multiple inheritance) and then provides the solution, which is rather a tedious one. Well I'm glad you like it. All complex software designs are a result of weighing conflicting features against each other, and trying to find the "sweet spot" that gives large benefits for small costs. We've learned through painful experience that languages that permit multiple inheritance for the purposes of implementation sharing have relatively small benefits and relatively large costs. Permitting multiple inheritance only on interfaces, which do not share implementation details, gives many of the benefits of multiple inheritance without most of the costs.
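A minimal C# sketch of that contract idea follows; the interface and class names are invented for illustration. The point is that the compiler refuses to build the class until every member of the contract is implemented, and callers can depend on the contract rather than on any particular implementation.

    using System;
    using System.Collections.Generic;

    // The contract: anything claiming to be a repository must provide these members.
    public interface IRepository<T>
    {
        T Find(int id);
        void Save(T item);
    }

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The compiler enforces the contract: omit Find or Save and this will not build.
    public class InMemoryCustomerRepository : IRepository<Customer>
    {
        private readonly Dictionary<int, Customer> store = new Dictionary<int, Customer>();

        public Customer Find(int id) => store[id];
        public void Save(Customer item) => store[item.Id] = item;
    }

    public static class Program
    {
        public static void Main()
        {
            // Callers depend on the contract, not on a particular implementation.
            IRepository<Customer> repo = new InMemoryCustomerRepository();
            repo.Save(new Customer { Id = 1, Name = "Ada" });
            Console.WriteLine(repo.Find(1).Name);
        }
    }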
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108240", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32803/" ] }
108,424
I've been given a non-disclosure agreement (NDA) to sign by my current employer that I do not want to sign. It is incredibly open ended and I feel that it should have been a condition of my initial employment agreement, which I signed three weeks ago. The document contains very many definitions in the form of "including but not limited to," and "directly or indirectly." As well, it states that: I agree that any breach of the contract would inflict irreparable harm on the company (I agree that a breach may result in harm to the company but not necessarily irreparable harm). Should the document be amended any time in the future and I refuse to sign the amendment that I would be in violation of its terms. Everything I develop while under the company's employment is its property (neglecting to say whether what I develop on my own time, distinctly from my work, is my own). After my employment ends at the company I would be required to continue my duties there to assist in the perfection of its software, that I would not be allowed to perform any duties directly or indirectly related to my duties there for one year after my employment at the company ends (would I not be allowed to engineer software for a year?). Should the company decide to prosecute me for breach of contract that I agree not to defend myself, and that all terms in the document would be transferred to any company which purchases the one I work for without my consent. In short, they would legally own me for life and could absolutely destroy me for any reason they deemed fit. Are there any legal arguments that I could use to defend myself against signing the contract? For example, the fact that the NDA was not part of my initial employment agreement or that the document is intentionally abstract and vague to allow them to fill in the blanks any way they please? The company hosts the entirety of its source code on a site which employs a publicly accessible SCM and greatly embraces open source software — the chances that I would ever come in contact with information that could legally be considered a "trade secret" or "confidential information" is very slim, so why would I need to sign the NDA? I do not believe that many of the employees there actually took the time to understand the NDA before they signed it, and know for a fact that a few of them did not. Are the terms of this agreement commonplace among the software engineering community?
The clauses you mention come from several different standard contracts. An NDA basically covers "anything we tell you, you can't tell to anyone else, no matter what". There are some standard exceptions to this (which should be explicitly listed in the NDA). These standard exceptions are: Publicly available knowledge from some other source. Things that you have learned independently from some other source. Anything the company gives you permission in writing to talk about. The NDA covers things such as company secrets, know-how, source code, and other bits of knowledge. And a company absolutely can suffer irreparable harm if some of these things are made public. No company will employ you without having you sign one, and you generally won't be able to negotiate any of the points on this -- your choices will probably be to either sign it, or not work for the company. Now, you bring up a few other clauses, notably an Intellectual Property Assignment device, which assigns rights to what you develop to the company. Most (but not all) companies include a clause in that which states "which relates to the business of the Company". If that's there, and your home project doesn't have anything to do with the work done by the company, then you're pretty much okay. If not, then you may be able to negotiate to amend the contract; companies are usually far more willing to amend this part of the contract, than the NDA. (I've had success at modifying exactly this clause at my last two workplaces). But do be aware that this isn't a cut and dry issue. Lots more insightful commentary on the ownership of side-projects here: If I'm working at a company, do they have intellectual property rights to the stuff I do in my spare time? Finally, you mention a non-compete clause, with a one-year duration. This is a standard clause, but with an absurdly long duration -- more usually it's a one or two month duration, but asking for a year is completely abusive. You should definitely not sign the contract in that form. Further discussion on this issue here: http://www.joelonsoftware.com/articles/fog0000000071.html To directly answer your question: Yes, these terms are very common within the software engineering community. Though all the clauses you mentioned are standard, some of them of them sound a bit more severe than usual. The important thing is to be aware that a contract is a meeting of minds and a negotiation. You don't have to sign something if you're not happy with it, and you can absolutely propose alterations to the contract before you sign it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108424", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36692/" ] }
108,425
I would like to run SVN server on my local machine along with WAMP. All SVN server binaries include Apache. Whereas I don't need Apache as I'm already running WAMP. Please let me know if there are any installs available that includes only the SVN server. If there are none available, shall I go ahead with what's available? And how would I do that without breaking anything. PS: I tried CollabNet and VisualSVNServer. Both includes Apache.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108425", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28239/" ] }
108,447
I am a developer of an open source project which is hosted in SourceForge. It started out as a little app then after some releases, it got more and more popular and it started consuming more time and responsibility from me. So I have enabled the donation option in SourceForge. I'm passionate to continue developing it for free but if (ever) any money comes in, how should I split it with my team? Should I split the amount equally among the number of team members? (50-50 as it is two-member team now) Number of classes, commits or any other valuable submissions by team members? Any other idea? What would you do in such situation? Please give your opinions. I hope this question will be useful for others.
I recommend not distributing it to project members at all. Appoint or elect a treasurer, open an account and deposit the money to earn simple interest. If you distribute donated funds between developers, at least one will become disenfranchised as the project grows. Instead, consider the other possibilities for the funds: Swag. Print up some T shirts to send to people who have made more than trivial contributions to the code. Bounties. Put a reward on the feature the community really wants but nobody really feels like implementing. This is also a great way to get new long term contributors. Or, 'sweeten the pot' a bit for fixing a really perplexing bug. The bounties don't have to be cash, especially if you have swag to give. It also doesn't have to be swag, a Pi in hand is worth 10 in the oven. Hardware. Buy stuff that the community owns that all developers can use. This could be servers, or gadgets that are shared through the mail. Tools/Licensing. You might need to pay for software, even being an open source project. You might need to buy a copy of Acme Widgets to study it for the purpose of creating an open alternative, or you might need to help a great contributor upgrade their development environment. Events. Help send your developers to conferences or key events when opportunities present themselves. Or, sponsor your own meetup if enough people would be able to attend. Legal Fees. While (thankfully) still relatively uncommon 1 , you might find yourself in litigation for a number of reasons. It's good to have some money put away. This could be simply enforcing your license or copyright, or defending against something else. There are so many ways that the money could be spread so everyone benefits, it really doesn't make much sense to limit the good it could otherwise do. If you get to the point that donations and community support make hiring some of the most prolific developers to work on the project full time reasonable, it means you should be looking at the project as more of a business than a hobby. 1 Litigation in open source is a lot like plane crashes. You read about the horror stories and drama in the news when something happens, but don't forget the nearly millions of projects that have and will continue to exist without any issue whatsoever
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108447", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36656/" ] }
108,613
The descriptor 'Engine' gets thrown around a lot: graphics engine, RegEx engine, AI engine, etc. but what actually makes a piece of software an Engine? Design, Input/Output, Purpose, Size?
An engine would be something that is "under the hood", so to speak. It is not, or at least very rarely is, visible to the end user. A graphics engine, for instance, drives all the rendering calculations but passes those changes on to the actual environment to be modeled. Input: math. Output: pretty colors. An engine might also have very different working variables than a more high level interface. For instance, in the previous example, it is using raw numerical data to manipulate graphics without worrying about whether something is a shadow or a texture, all of that is abstracted into the equations and matrix operations to be performed by that engine. Think of the Engine as the "Kernel" of a given system while the rest would be more like the "Shell". To use a real world, CS101 analogy, an engine is just like a car engine. It takes two inputs, air and gas. It then passes them into a chamber, whereupon electricity is used to generate the world's smallest use of arc welding. Stuff then explodes. This produces two outputs, exhaust and a pressure wave which drives a piston. The rest is transferred into wheel motion by the various drive shafts and such. So the Engine is the engine and the car itself is the shell. You could use a car engine for a different purpose, say driving a generator for electricity or a mill to grind grain. You could use different inputs if the Engine has the coatings and such to handle things like ethanol or biodiesel. To sum it up, an Engine is a piece of software that is usually not found in isolation. It acts as motive force for that piece of software but typically interacts very little if at all with the outside world. Several engines may work together to produce complimentary outputs or may be pipelined together as needed. An engine does not do things related to user experience in an aesthetic sense directly but drives those experiences none the less by motivating dataflow and being responsive enough to allow for good application performance.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108613", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7256/" ] }
108,671
For websites that need to be highly scalable, such as social networks like Facebook, what's the best way to design the website? Should I have a web service which the site queries to get the data it needs, or should the site query the databases directly? (The latter can be done using built-in language constructs to fill tables automatically, etc.) I would think the web service is the better design, since it gives centralized data access and things like caching become much easier to control, but what do others think?
Wow, this is a simple question, which a huge array of possible answers. The more explicit part of your question asks whether it is more scalable to interface with your database directly or through a web service. That answer is simple: query the database directly. Going through the web service adds a whole bunch of latency that is completely unnecessary for code operating behind a firewall (by and large). A web service for example requires some component to receive a request, deserialize it, query the DB, serialize a response and return it. So if your code is all operating behind a firewall, save yourself the trouble and just query the DB directly. Making a web site scalable however goes way beyond the question you initially posed. So forgive me if I go off on a tangent here, but I thought it might be useful considering that you mentioned Facebook in particular. I would recommend you read up on the work and tools built by Brad Fitzpatrick (founder of LiveJournal and now at Google). When I worked with him at Six Apart, here are some of the things I learned from him, and about LiveJournal's architecture that made it so scalable. Use narrow database tables as opposed to wide ones . What was fascinating about this was learning what motivated this architecture, which was creating a system that was easily and quickly upgraded. If you use wide tables, or tables for which each field or property is a column in the table, when it comes time to upgrade the database schema, for example adding a new column, then the system will need to lock the table while the schema change is implemented. When operating at scale this would mean a simple change to the database schema could result in a large database outage. Which sucks obviously. A narrow table on the other hand simply stores each individual property associated with an object as a single row in the database. Therefore when you want to add a new column to the database all you need to do is INSERT records into a table, which is a non-locking operation. Ok, that is a little background, let's see how this model actually translates in working system like LiveJournal. Let's say you want to load the last 10 journal entries on a person's blog, and let's say each journal entry has ten properties. In a classic wide table layout, each property would correlate to a column on a table. A user would then query the table once to fetch all of the data they need. The query would return 10 rows and each row would have all the data they need (e.g. SELECT * FROM entries ORDER BY date LIMIT 10). In a narrow table layout however things are bit different. In this example there are actually two tables: the first table (table A) stores simple criteria one would want to search by, e.g. the id of the entry, the id of the author, the date of the entry, etc. A second table (table B) then stores all of the properties associated with an entry. This second table has three columns: entry_id, key and value. For every row in table A, there would be 10 rows in table B (one row for each property). Therefore in order to fetch and display the last ten entries, you would need 11 queries. The first query gives you the list of entry IDs, and then the next ten queries would fetch the properties associated with each of the entries returned in the first query. "Holy moly!" you say, "how on Earth can that be more scalable?!" Its totally counter-intuitive right? In the first scenario we just had one database query, but in the second "more scalable" solution we have 11 database queries. 
That makes no sense. The answer to that question relies entirely upon the next bullet. Use memcache liberally. In case you were not aware, memcache is a distributed, stateless, low latency, network based caching system. It is used by Facebook, Google, Yahoo, and just about every popular and scalable web site on the planet. It was invented by Brad Fitzpatrick partially to help offset the database overhead inherent in a narrow table database design. Let's take a look at the same example as discussed in #1 above, but this time, let's introduce memcache. Let's begin when a user first visits a page and nothing is in the cache. You begin by querying table A which returns the IDs of the 10 entries you want to display on the page. For each of those entries you then query the database to retrieve the properties associated with that entry, and then using those properties constitute an object that your code can interface with (e.g. an object). You then stash that object (or a serialized form of that object) in memcache. The second time someone loads the same page, you begin the same way: by querying table A for the list of entry IDs you will display. For each entry you first go to memcache and say, "do you have entry #X in the cache?" If yes, then memcache returns the entry object to you. If not, then you need to query the database again to fetch its properties, constitute the object and stash it in memcache. Most of the time, the second time someone visits the same page there is only one database query, all other data is then pulled straight from memcache. In practice, what ended up happening for most of LiveJournal is that most of the system's data, especially the less volatile data, was cached in memcache and the extra queries to the database needed to support the narrow table schema were all but completely offset. This design made solving the problem associated with assembling a list of posts associated with all of your friends into a stream, or "wall" much, much easier. Next, consider partitioning your database. The model discussed above surfaces yet another problem, and that is your narrow tables will tend to be very large/long. And the more rows those tables have the harder other administrative tasks become. To offset this, it might make sense to manage the size of your tables by partitioning the tables in someway, so that clusters of users are served by one database, and another cluster of users are served by a separate database. This distributes load on the database and keeps queries efficient. Finally, you need awesome indexes. The speed of your queries will depend largely upon how well indexed your database's tables are. I won't spend too much time discussing what an index is, except to say that it is a lot like a giant card catalog system to make finding needles in a haystack more efficient. If you use mysql then I recommend turning on the slow query log to monitor for queries that take a long time to fulfill. When a query pops up on your radar (e.g. because it is slow), then figure out what index you need to add to the table to speed it up. "Thank you for all of this great background, but holy crud, that is a lot of code I will have to write." Not necessarily. Many libraries have been written that make interfacing with memcache really easy. Still other libraries have codified the entire process described above; Data::ObjectDriver in Perl is just such a library. As for other languages, you will need to do your own research. I hope you found this answer helpful. 
What I have found more often than not is that the scalability of a system often comes down less and less to code, and more and more to a sound data storage and management strategy/technical design.
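The cache-aside pattern described above (check memcache first, fall back to the narrow-table queries on a miss, then populate the cache) can be sketched roughly as follows. This is a C# sketch with the cache and storage hidden behind invented interfaces (ICache, IEntryStore, EntryService); it is not a drop-in memcache client, just the shape of the logic.

    using System.Collections.Generic;

    public interface ICache
    {
        bool TryGet(string key, out object value);
        void Set(string key, object value);
    }

    public interface IEntryStore
    {
        IList<int> LatestEntryIds(int count);                      // one query against "table A"
        IDictionary<string, string> LoadProperties(int entryId);   // key/value rows from "table B"
    }

    public class EntryService
    {
        private readonly ICache cache;
        private readonly IEntryStore store;

        public EntryService(ICache cache, IEntryStore store)
        {
            this.cache = cache;
            this.store = store;
        }

        public IList<IDictionary<string, string>> LatestEntries(int count)
        {
            var result = new List<IDictionary<string, string>>();
            foreach (int id in store.LatestEntryIds(count))
            {
                string key = "entry:" + id;
                if (cache.TryGet(key, out object cached))
                {
                    // Cache hit: no narrow-table query needed.
                    result.Add((IDictionary<string, string>)cached);
                }
                else
                {
                    // Cache miss: pay for the extra narrow-table query once,
                    // then store the assembled object for subsequent requests.
                    var props = store.LoadProperties(id);
                    cache.Set(key, props);
                    result.Add(props);
                }
            }
            return result;
        }
    }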
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108671", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36819/" ] }
108,740
I'm sure lots of developers are familiar with XML and JSON, and they've used both of them, so there is no point in explaining what they are and what their purpose is, even briefly. If we try to map their concepts, we can say (correct me if I'm wrong): XML tags are equivalent to JSON {}; XML attributes are equivalent to JSON properties; an XML tag collection is equivalent to JSON []. The only thing I can think of which doesn't exist in JSON is XML Namespaces . The question is: considering this mapping, and considering that JSON is much lighter under this mapping, can we see a world in the future (or at least theoretically imagine one) without XML, with JSON doing everything XML does? Can we use JSON everywhere XML is used? PS: Please note that I've seen this question. It's something entirely different from what I'm asking here, so please don't flag this as a duplicate.
The thing that gives XML its power and a lot of its complexity is mixed content. Stuff like this: <p>A <b>fine</b> mess we're in!</p> Don't even try to do that in JSON, or manipulate it in conventional programming languages. They weren't designed for the job. This kind of question usually comes from people who forget that the M in XML stands for markup. It's a way of taking plain text and adding markup to create structured text. It's quite handy for old-fashioned data too, but that's not what it was designed for or where its main strengths lie. There are plenty of ways of handling simple data, and JSON is one of them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108740", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
108,768
At which point should YAGNI take precedence over good coding practices, and vice versa? I'm working on a project at work and want to slowly introduce good code standards to my co-workers (currently there are none and everything is just kind of hacked together without rhyme or reason), but after creating a series of classes (we don't do TDD, or sadly any kind of unit testing at all) I took a step back and thought it's violating YAGNI, because I pretty much know with certainty that we won't need to extend some of these classes. Here's a concrete example of what I mean: I have a data access layer wrapping a set of stored procedures, which uses a rudimentary Repository-style pattern with basic CRUD functions. Since there are a handful of methods that all my repository classes need, I created a generic interface for my repositories, called IRepository . However, I then created a "marker" interface (i.e. an interface that doesn't add any new functionality) for each type of repository (e.g. ICustomerRepository ), and the concrete class implements that. I've done the same thing with a Factory implementation to build the business objects from the DataReaders/DataSets returned by the stored procedures; the signature of my repository class tends to look something like this: public class CustomerRepository : ICustomerRepository { ICustomerFactory factory = null; public CustomerRepository() : this(new CustomerFactory()) { } public CustomerRepository(ICustomerFactory factory) { this.factory = factory; } public Customer Find(int customerID) { // data access stuff here return factory.Build(ds.Tables[0].Rows[0]); } } My concern here is that I'm violating YAGNI, because I know with 99% certainty that there is never going to be a reason to give anything other than a concrete CustomerFactory to this repository; since we don't have unit tests I don't need a MockCustomerFactory or similar things, and having so many interfaces might confuse my co-workers. On the other hand, using a concrete implementation of the factory seems like a design smell. Is there a good way to come to a compromise between proper software design and not overarchitecting the solution? I'm questioning whether I need all of the "single-implementation interfaces", or whether I could sacrifice a bit of good design and just have, for example, the base interface and then the single concrete, and not worry about programming to the interface if that is the only implementation that will ever be used.
Is there a good way to come to a compromise between proper software design and not overarchitecting the solution? YAGNI. I could sacrifice a bit of good design False assumption. and the base interface and then the single concrete, That's not a "sacrifice". That is good design.
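Read literally, that suggests something along these lines; this is only a sketch of one possible reading, keeping the generic interface plus a single concrete class from the question and dropping the marker interfaces, with the data access itself stubbed out.

    using System.Collections.Generic;

    // One generic contract for all repositories...
    public interface IRepository<T>
    {
        T Find(int id);
        IList<T> FindAll();
    }

    public class Customer
    {
        public int Id { get; set; }
    }

    // ...and a single concrete class per entity. No ICustomerRepository or
    // ICustomerFactory marker interfaces until something actually needs them;
    // they can be extracted later without breaking callers.
    public class CustomerRepository : IRepository<Customer>
    {
        public Customer Find(int customerId)
        {
            // data access code would go here; a stub keeps the sketch compilable
            return new Customer { Id = customerId };
        }

        public IList<Customer> FindAll()
        {
            return new List<Customer>();
        }
    }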
{ "source": [ "https://softwareengineering.stackexchange.com/questions/108768", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/22390/" ] }
109,107
I'm curious to know what the prevailing best practice is. Should git commits be enforced such that the project is in a working state (builds properly, all tests pass etc), or is committing broken code OK? For example, if you waive this requirement you can be more flexible with commits (use them as logical chunks, even though the app is not in a working state etc). However if you enforce it you gain the flexibility of being able to cherry-pick any given commit later on...
This workflow gives good results for most large software projects that follow some version of the Atlassian Git Flow model: Each merge to the branch from which the release is cut must leave the project in a working state. In Git Flow, this is called the master branch, and usually only the release engineers can merge code there. Run all your tests for each merge. Each merge to the mainline development branch (in Git Flow it's called the develop branch, but many use the master branch for that purpose) should leave the project in a working state (and it must build, at the least). Run most of your tests, except the longest-running ones; accept some flakiness; don't run unaffected tests. But sometimes you'll merge two branches that both pass tests, and their merge is broken. This is okay. You'll fix this soon, and you'll make sure your release branches are good. Every other individual commit has a primary goal of explaining why the change is made, what it is for, and what parts of the project it affected. All other goals, such as leaving the project in a working state, are optional.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109107", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37008/" ] }
109,281
I have close to 3 years' experience writing web applications in Java using MVC frameworks (like Struts). I have never written multithreaded code until now, though I have written code for major retail chains. I get a few questions on multithreading during interviews and I usually answer them (they're mostly simple questions). This left me wondering how important multithreading is in the current industry scenario.
It is extremely important. What is more important though is to understand that multithreading is just one way to solve the asynchrony problem. The technical environment in which many people are now writing software differs from the historical software development environment (of monolithic applications performing batch computations) in two key ways: Many-core machines are now common. We can no longer expect clock speeds or transistor densities to increase by orders of magnitude. The price of computation will continue to fall, but it will fall because of lots of parallelism. We're going to have to find a way to take advantage of that power. Computers are now heavily networked and modern applications rely upon being able to fetch rich information from a variety of sources. From a computational standpoint, these two factors essentially boil down to the same core idea: information increasingly will be available in an asynchronous fashion . Whether the information you need is being computed on another chip in your machine or on a chip halfway around the world doesn't really matter. Either way, your processor is sitting there burning billions of cycles a second waiting for information when it could be doing useful work. So what matters now, and what will matter even more in the future, is not multithreading per se, but rather, dealing with asynchrony . Multithreading is just one way to do that -- a complicated, error-prone way that is only going to get more complicated and more error-prone as weak-memory-model chips become more widely used. The challenge for tools vendors is to come up with some way better than multithreading for our customers to deal with the asynchronous infrastructure they'll be using in the future.
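One example of tooling that addresses asynchrony without hand-rolled threads is C#'s async/await, mentioned here only as an illustration of the point above; a minimal sketch follows, with a placeholder URL rather than any real service.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class Program
    {
        public static async Task Main()
        {
            using var client = new HttpClient();

            // The thread is not blocked while the bytes travel halfway around
            // the world; the method is suspended and resumed when the data
            // arrives. No locks, no explicit Thread objects.
            string body = await client.GetStringAsync("https://example.com/");
            Console.WriteLine(body.Length);
        }
    }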
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109281", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31895/" ] }
109,325
I recently had a negative experience where a client bailed on the bill, but my middle-man had already uploaded our software and design to the client's server. The client turned out to be a known criminal, and of course he changed all possible passwords on the server. However, I can still access the admin panel of the CMS. Sadly, it turns out that my software is very secure. I tried SQL injection, faking the image upload, etc., but I cannot hack my own software. Anyway, I am preparing to sue this person, so that is not the problem. I'm just thinking now that maybe there should be some backend self-destruct method, so that if a similar case occurs, I have the option to kill the software. My own idea is to hide some function in the core files and encode it with base64 so it wouldn't be obvious. Something like this: eval(base64_decode('ZWNobyAnSGVsbG8gd29ybGQhJzs=')); // echo 'Hello world!'; And basically make a small script that takes all of the software's files, chmods them to be sure, and then deletes them. My newer versions of the CMS all have file managers that I could use for easier hacking. But what if access to the admin panel is limited? To be very clear, this is meant only for development-stage software, on my personal server or the client's server (the last part being the ethically questionable one), for the case where my client steals my software. This won't be included in commercial software. And to be even more clear, we are talking about those rare freelance jobs. I think it's fairly logical that contract work doesn't need such methods. So we are talking about those jump-risk clients, only in development mode -- when the project is ready, then obviously this would be a very, very unethical backdoor to have inside your software. Ethically, is this a good idea? (Keeping in mind that obviously I will remove it when the project is 100% done and everything is paid for.) Have you ever had to hack your own software because of similar issues with a client? Any recommendations on this idea, code- and method-wise? What may be the possible drawbacks or repercussions of self-destruct scripts? My conclusion on this: It is a little bit sad that all the answers were targeted at the contracted cases. It was really my fault that I didn't make it clearer in my question; I just thought it was fairly clear that there is no point in the kill switch when you are protected by a contract. However, if you are doing contract work, then this should be stated in the contract -- that makes it legal, even on the client's own server. However, having kill switches on my own personal server is really nobody's business (this is what I really wanted to know). I decided to make the kill-switch script for my CMS, mainly because it seems an interesting challenge, but also so that I could use it for my non-contracted work where the client is a friend of a friend of a friend. I probably won't use this on a client's server, but rather for the cases where the client or some middleman has access to my server, my software gets stolen or "moved without my knowledge", and then I don't get paid and they cut my access to the software. I have read through a lot of topics here where people recommend sending a warning and then taking down the page. Well, I saw a problem with that when I'm dealing with a person who will just copy it somewhere else (maybe re-brand it and sell it) and tell me that it has been taken down. And also, I wouldn't "turn the site off", but delete it. Though I guess it's still illegal to access my client's server and delete it -- or at least to access it through the backend rather than through FTP. For this I thank all of you who answered.
I'm not a lawyer. It sounds like you already have one for the purposes of suing your client; while you have him or her on retainer I would recommend getting their advice on this. There are some other questions on this site that deal with "kill switches" and other ways to disable software for which the developer has not received compensation. It is usually considered a bad idea to simply build one in to "turnkey" software (where you will develop it and then transfer full rights to the client), without the contract having stipulated this possibility. First off, if your contract does not specifically state that you can disable the software for non-payment, or that the client does not have any rights to the software until payment is received in full, then you cannot flip any "kill switch" without being in breach of contract. Absent any words to the contrary, "possession is nine-tenths of the law", so it's his software once he is given possession, and to destroy it would be akin to dynamiting a new office building you'd built for him if he didn't pay for it. The second point follows; any contract you offer to any client should have a clause to the effect of: "Intellectual property transfers on satisfaction of contract" . That means that even if you have given him a copy of the software to use, until he's paid you in full, he doesn't own it. This WOULD give you the right to disable his or any copy of the software for any reason until full payment has been received, because it's still yours and you can do as you please. Now, he's breached the contract, and you haven't, so the case is MUCH easier for your lawyer to present, and meanwhile your client doesn't get any benefit from his ill-gotten goods. The analogy to a building contractor holds: once a building under construction is able to be secured against unlawful entry, it is, and the contractor will generally keep all copies of all keys to the premises until the work is complete and signed off on, and payment received in full. Even after the keys are handed over, if payment falls through he can attach a lien on the property and in the extreme have it repossessed. The same holds true here; you may give the client a key to get into the software, but you hold the "master" key, and he doesn't get administrative access until you're paid in full. If he can get in now, and doesn't pay you, you can just "change the locks" and lock him out of the software. However, you have given your client the "master" key to the software, and he's gone and changed all the locks so now YOU can't get in. That's not the way it should work. You can still claim damages, but in the meantime your crooked client can use the software, copy it elsewhere (that's a big thing that can't happen to a contractor; if he takes his building back he doesn't have to worry that you've made an exact free copy on another lot), etc etc. Basically, your only remedy is to enforce payment in full, because you cannot guarantee that you have reclaimed all copies of the software. You probably wouldn't be happy getting your software back even if you could guarantee he had no further copies; it's likely custom work you can't just turn around and sell to someone else. Understand that regardless of your rights to the software, his data belongs to him. You cannot touch it. You can stop his access to the software that you built, but if you destroy his data, that's like burning his possessions after repoing the building you built him that he didn't pay for. 
You have no right whatsoever to that data, and must either leave it in place on his computer intact, or if the data cannot be accessed in a reasonable manner without your software, you must remove it from the entanglement with your software and give it to him in a useable format (such as a human-consumable database, or printed or electronic copies).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109325", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28325/" ] }
109,420
The problem with trying to use Google to find tutorials or answers for the C programming language is that C is not an expressive enough name to narrow down the searches. Even coupled with keywords like "Programming" and/or "Language" yields results mostly for C++, C#, and Objective-C. Is there a way to more effectively search for specific C resources using Google?
You can use the + or - signs to add or remove weight for a search term. However, the best place to search really isn't Google at all; it's Stack Overflow. A few Google examples anyway: +C for articles where the letter C stands alone; +C -C++ for C articles where there are no references to C++; +"C Sharp" for articles with weight added to a grouped term.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109420", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6640/" ] }
109,436
Possible Duplicate: Why isn't the line count in Visual Studio zero-based? Why isn't the top line in a source code file labelled line number 0? i.e. in a source file which was 10 lines long, I would expect line numbers would be 0-9, but they're not! Now grep , awk , sed etc all seem to number from 1, so I'm assuming it's an established convention for a good reason. It occurred to me recently that, for almost every other structure a programmer has to deal with in their day to day work, the convention is to count from 0 - and a source file is as much like a list of lines as a line is like a list of chars (afaik list of chars is invariably 0-indexed). I had to actually open up my text editor and sanity check that the top line of a file was labelled line number 1! I was just wondering if there is a good reason for this or if it's just an unfortunate historical convention.
0-based indexing is for computers not humans line numbers are for humans not computers all humans are not programmers 1-based line numbers ease readability a tiny bit -- otherwise every time you saw an error message with a line number, you'd have to remember to decrement it by one to get to the actual line that caused the error -- and that impedes readability. Also line 1 reads better than line 0 -- because others (people who may be non-technical) may have to read your code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109436", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30395/" ] }
109,442
Are there any major differences when we talk about "socket programming" compared to "network programming"? Are there some topics that cover "network programming" but not "socket programming"?
Socket programming (at least as the term is normally used) is programming to one specific network API. Sockets support IP-based protocols (primarily TCP and UDP) 1 . Network programming can be done using various other APIs. Windows has a number of protocol-independent APIs such as the WNet* and Net* functions. Older versions of Windows also used NetBIOS/NetBEUI (NetBIOS End User Interface), and most supported (and probably still do) IPX/SPX (an old Netware protocol). Most current network programming, however, is done either using sockets directly, or using various other layers on top of sockets (e.g., quite a lot is done over HTTP, which is normally implemented with TCP over sockets). TCP/IP and UDP/IP (as well as a number of other IP-based protocols) are done primarily via the sockets interface. In theory, other programming interfaces could be used, but in practice sockets seem to be sufficient, so there's not a lot of interest in replacing it. I should, however, mention that Windows sockets (WinSock) have quite a few extensions that are more or less unique to Windows. I suppose it's open to some argument whether code that uses these extensions really qualifies as "sockets" code or not -- they are extensions based on the same concepts, but code that uses them isn't normally portable to other systems. I guess whether it qualifies as "sockets" or no depends primarily on whether you think of sockets more as a concept, or a very specific set of functions, parameters, etc. Edit (in reply to comment): It's a bit hard to say whether "knowing sockets" implies knowing "everything" about TCP and UDP. Let's consider just one small piece of things: one typical demo program for sockets is creating a client/server chat program. The client connects to the server, and when the user on one client types something, it gets forwarded to the other clients that are connected to the same server. Each client displays what comes in from the server, and lets the user type in messages to be sent to the other clients. At the same time, consider what a "real" chat program like AIM, Windows Messenger, iChat, etc. involves. To handle not only text, but voice, video, file transfers, groups, lists, etc., a typical program probably involves a dozen different standards, including such things as SIP, STUN, TURN, RTCP, RTP, XAMPP, mDNS, etc. IMO, somebody who "knows sockets" should be able to code up the first (demo-level, text-only) chat program in a few hours without spending much time in help files (and such) doing research. Unless they claimed at least some prior experience working on a "real" chat program, I wouldn't expect them to even know which RFCs/standards applied to such things though. The same applies in general: given the number of RFCs (and various other standards) that get applied to all the different things people do over networks, it's unreasonable to expect anybody to have memorized all of them. Nonetheless, if you have a set of requirements for something that you'd expect people to be able to handle in a "local" program easily, just adding "over the network" as a requirement shouldn't normally add a tremendous amount of difficulty (though dealing with issues like network latency might). 1 Sockets on Unix also support Unix-family sockets, but these are (at least normally) used for intra-machine IPC, not networking. There are also literally dozens of other protocols for such things as router management that sockets don't really support (beyond raw sockets allowing you to build and send arbitrary packets).
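For a sense of what "programming to the sockets API" looks like at the demo level discussed above, here is a bare-bones TCP echo sketch in C#; TcpListener/TcpClient are thin wrappers over sockets, and the port number here is arbitrary.

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    public static class EchoServer
    {
        public static void Main()
        {
            // "Socket programming": we talk to the IP-based transport (TCP here)
            // directly, rather than through a higher-level protocol library.
            var listener = new TcpListener(IPAddress.Loopback, 5000); // arbitrary port
            listener.Start();
            Console.WriteLine("Listening on 127.0.0.1:5000");

            using TcpClient client = listener.AcceptTcpClient(); // blocks until a client connects
            NetworkStream stream = client.GetStream();

            var buffer = new byte[1024];
            int read = stream.Read(buffer, 0, buffer.Length);

            // Echo whatever the client sent straight back -- the "demo-level"
            // half of the chat example mentioned above.
            stream.Write(buffer, 0, read);
            Console.WriteLine("Echoed: " + Encoding.UTF8.GetString(buffer, 0, read));

            listener.Stop();
        }
    }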
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109442", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23355/" ] }
109,523
I know that Subversion (what we're using at work) can be configured to require comments on commits, however I'm not in a position of power to simply turn this on. I know that my reason for commenting my commits is because it is useful, if only as a memory-jogger, to quickly understand the reason behind the commit. However, this doesn't seem to be enough to combat the two responses I always get: It takes too long and I just want to get my changes into the repo. It's easy enough to just look at the diffs. I even show them the value of simply putting in a JIRA issue ID and how it automatically gets tied to the issue, but still no dice with them. Worst of all, the person who can make the call is in the same camp: doesn't want to bother and is fine with looking at diffs. I know it's the right thing to do, but how can I make them see the light? Even if I can't convince my fellow devs, how can I convince management that it's the right thing to do for the business?
Focus on "Why". Its all very well looking at the diffs and seeing that someone changed the logical flow of a section of code or something like that, but why did they change it? The why is usually in the associated ticket (JIRA for you). They may wonder why the "Why" is important but in 2 years time when you have caught some bug that is a knock on effect of that change, knowing why it was done is incredibly important for not only fixing your new bug, but making sure you don't cause the old bug to re-emerge. There is also the auditing reason. Binding commits and ticket id's make it really easy to say ok, we're pushing out Version 2, this fixes defect 23, 25, 26 and 27 but there are no commits against defect 24 so it is still outstanding.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109523", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3785/" ] }
109,818
Browsing through some code I've written, I came across the following construct which got me thinking. At first glance, it seems clean enough. Yes, in the actual code the getLocation() method has a slightly more specific name which better describes exactly which location it gets. service.setLocation(this.configuration.getLocation().toString()); In this case, service is an instance variable of a known type, declared within the method. this.configuration is passed to the class constructor, and is an instance of a class implementing a specific interface (which mandates a public getLocation() method). Hence, the return type of the expression this.configuration.getLocation() is known; specifically in this case, it is a java.net.URL, whereas service.setLocation() wants a String. Since the two types String and URL are not directly compatible, some sort of conversion is required to fit the square peg in the round hole. However, according to the Law of Demeter as cited in Clean Code, a method f in class C should only call methods on C, on objects created by or passed as arguments to f, and on objects held in instance variables of C. Anything beyond that (the final toString() in my particular case above, unless you consider a temporary object created as a result of the method invocation itself, in which case the whole Law seems to be moot) is disallowed. Is there a valid reasoning why a call like the above, given the constraints listed, should be discouraged or even disallowed? Or am I just being overly nitpicky? If I were to implement a method URLToString() which simply calls toString() on a URL object (such as that returned by getLocation()) passed to it as a parameter, and returns the result, I could wrap the getLocation() call in it to achieve exactly the same result; effectively, I would just move the conversion one step outward. Would that somehow make it acceptable? (It seems to me, intuitively, that it should not make any difference either way, since all that does is move things around a little. However, going by the letter of the Law of Demeter as cited, it would be acceptable, since I would then be operating directly on a parameter to a function.) Would it make any difference if this were about something slightly more exotic than calling toString() on a standard type? When answering, do keep in mind that altering the behavior or API of the type of the service variable is not practical. Also, for the sake of argument, let's say that altering the return type of getLocation() is also impractical.
The problem here is the signature of setLocation. It's stringly typed. To elaborate: why would it expect a String? A String represents any kind of textual data; it could potentially hold anything, including something that isn't a valid location at all. In fact, this poses a question: what is a location? How do I know without looking into your code? If it were a URL, then I would know a lot more about what this method expects. Maybe it would make even more sense for it to be a custom class Location. Granted, I wouldn't know at first what that is, but at some point (probably before writing this.configuration.getLocation()) I would take a minute to figure out what it is this method returns. In both cases I need to look somewhere else to understand what is expected. However, in the latter case, once I understand what a Location is, I can use your API; in the former case, even though I understand what a String is (which can be expected), I still don't know what your API expects. In the unlikely scenario that a location really is any kind of textual data, I would reinterpret that as any kind of data that has a textual representation. Given that Object has a toString method, you could go with that, although this demands quite a leap of faith from the clients of your code. Also, you should consider that this is Java you're talking about, which has very few features by design. That's what's forcing you to actually call toString at the end. If you take C#, for example, which is also statically typed, you would actually be able to omit that call by defining behavior for an implicit cast. In dynamically typed languages, such as Objective-C, you don't really need the conversion either, because as long as the value behaves like a string, everybody is happy. One could argue that the last call to toString is less a call than just noise generated by Java's demand for explicitness. You're calling a method that any Java object has; therefore you don't actually encode any knowledge about a "distant unit" and thereby don't violate the Principle of Least Knowledge. There is no way, no matter what getLocation returns, that it doesn't have a toString method. But please, do not use strings unless they are really the most natural choice (or unless you're using a language that doesn't even have enums... been there).
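A minimal Java sketch of the answer's point follows; the types are invented for illustration and are not from the question's actual code base, and since the question stipulates that the service API can't be changed, this only shows why the answer places the blame on the signature rather than on the call chain.

    import java.net.URL;

    // Hypothetical interface standing in for the question's configuration object.
    interface Configuration {
        URL getLocation();
    }

    // Stringly typed: accepts any text at all, valid location or not.
    class StringlyTypedService {
        private String location;

        void setLocation(String location) {
            this.location = location;
        }
    }

    // Richer signature: the parameter type itself documents what a "location" is.
    class UrlTypedService {
        private URL location;

        void setLocation(URL location) {
            this.location = location;
        }
    }

    class Client {
        void configure(Configuration configuration) {
            // With the stringly typed API, the conversion noise is unavoidable:
            new StringlyTypedService().setLocation(configuration.getLocation().toString());

            // With a URL-typed (or Location-typed) API, the call says what it means:
            new UrlTypedService().setLocation(configuration.getLocation());
        }
    }

The same idea applies with a dedicated Location class; the point is only that the parameter type, not the trailing toString(), is what carries (or fails to carry) the meaning.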
{ "source": [ "https://softwareengineering.stackexchange.com/questions/109818", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6384/" ] }