179,488
I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here: an async approach, basically based on signals (or simply "async" as it is called by many papers and languages, like the new C# 5.0, for example), with a "companion thread" that manages the policy of your pipeline; and a concurrent, or multi-threading, approach. I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these 2 paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources.

I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't offer a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-4-5, can be a problem; the application is unresponsive, and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result and doesn't hang, it's responsive, and there is much better scaling going on. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed.

On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than 2 cores, is no bargain for me.

From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, both paradigms do not offer any guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view.

Given the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? Also, why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
You have multiple cores/processors: use them. Async is best for doing heavy IO-bound processing, but what about heavy CPU-bound processing? The problem arises when single-threaded code blocks (i.e. gets stuck) on a long-running process. For instance, remember back when printing a word processor document would make the whole application freeze until the job was sent? Application freezing is a side effect of a single-threaded application blocking during a CPU-intensive task.

In a multi-threaded application, CPU-intensive tasks (e.g. a print job) can be sent to a background worker thread, thereby freeing up the UI thread. Likewise, in a multi-process application the job can be sent via messaging (e.g. IPC, sockets, etc.) to a subprocess designed specifically to process jobs.

In practice, async and multi-threaded/process code each have their benefits and drawbacks. You can see the trend in the major cloud platforms, as they offer instances specialized for CPU-bound processing and instances specialized for IO-bound processing. Examples:

- Storage (e.g. Amazon S3, Google Cloud Drive) is CPU bound
- Web servers are IO bound (Amazon EC2, Google App Engine)
- Databases are both: CPU bound for writes/indexing and IO bound for reads

To put it into perspective, a web server is a perfect example of a platform that is strongly IO bound. A multi-threaded web server that assigns one thread per connection doesn't scale well, because every thread incurs more overhead due to the increased amount of context switching and thread locking on shared resources, whereas an async web server would use a single address space.

Likewise, an application specialized for encoding video would work much better in a multi-threaded environment, because the heavy processing involved would lock the main thread until the work was done. There are ways to mitigate this, but it's much easier to have a single thread managing a queue, a second thread managing cleanup, and a pool of threads managing the heavy processing. Communication between threads happens only when tasks are assigned/completed, so thread-locking overhead is kept to a bare minimum.

The best applications often use a combination of both. A webapp, for instance, may use nginx (i.e. async single-threaded) as a load balancer to manage the torrent of incoming requests, a similar async web server (e.g. Node.js) to handle the HTTP requests, and a set of multi-threaded servers to handle uploading/streaming/encoding content, etc.

There have been a lot of religious wars over the years between the multi-threaded, multi-process, and async models. As with most things, the best answer really should be "it depends." It follows the same line of thinking that justifies using GPU and CPU architectures in parallel. Two specialized systems running in concert can have a much greater improvement than a single monolithic approach. Neither is better, because both have their uses. Use the best tool for the job.

Update: I removed the reference to Apache and made a minor correction. Apache uses a multi-process model which forks a process for every request, increasing the amount of context switching at the kernel level. In addition, since memory can't be shared across processes, each request incurs an additional memory cost. Multi-threading avoids requiring additional memory because it relies on memory shared between threads. Shared memory removes the additional memory overhead, but still incurs the penalty of increased context switching.
In addition -- to ensure that race conditions don't happen -- thread locks (which ensure exclusive access to only one thread at a time) are required for any resources that are shared across threads.

It's funny that you say "programmers seem to love concurrency and multi-threaded programs in general." Multi-threaded programming is universally dreaded by anybody who has done any substantial amount of it in their time. Deadlocks (a bug that happens when a resource is mistakenly locked by two different sources, blocking both from ever finishing) and race conditions (where the program randomly outputs the wrong result due to incorrect sequencing) are some of the most difficult bugs to track down and fix.

Update 2: Contrary to the blanket statement about IPC being faster than network (i.e. socket) communications, that's not always the case. Keep in mind that these are generalizations, and implementation-specific details may have a huge impact on the result.
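To make the IO-bound vs CPU-bound split concrete, here is a minimal Python sketch (my own illustration, not tied to any particular framework; fetch_url and crunch are invented placeholders) that overlaps IO-bound waits on a single async event loop and farms CPU-bound work out to a process pool:

# Sketch: IO-bound work on an async event loop, CPU-bound work in a process pool.
import asyncio
import hashlib
from concurrent.futures import ProcessPoolExecutor

async def fetch_url(url):
    # Placeholder for real non-blocking IO (e.g. an HTTP client);
    # here we just simulate network latency.
    await asyncio.sleep(0.1)
    return url.encode()

def crunch(data):
    # CPU-bound placeholder: hash the payload many times.
    for _ in range(100_000):
        data = hashlib.sha256(data).digest()
    return data.hex()

async def main():
    urls = ["https://example.com/page/%d" % i for i in range(10)]
    # IO-bound: overlap all the waits on a single thread.
    payloads = await asyncio.gather(*(fetch_url(u) for u in urls))
    # CPU-bound: farm out to worker processes so the cores are actually used.
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        digests = await asyncio.gather(
            *(loop.run_in_executor(pool, crunch, p) for p in payloads)
        )
    print(digests[:2])

if __name__ == "__main__":
    asyncio.run(main())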
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179488", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75167/" ] }
179,537
There are a lot of analogies for HTML/CSS development, which can be a bit confusing for a beginner:

HTML = foundations/house
CSS = walls/blueprint/wallpaper

Is there any best practice here? Which one should we write first?
You should build a house first, then paint it. An HTML document can stand on its own, even though it may look dull. A CSS style sheet cannot; it is nothing displayable (except as code) but instructions for display. It’s a different issue that during painting, you may wish to make changes to the house. With real houses that’s usually not feasible, but in HTML+CSS development, it’s commonplace to notice that you need extra markup in your HTML document to make styling easier. (It’s less common than it used to be, thanks to powerful CSS3 selectors.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179537", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41089/" ] }
179,541
Many times there is the need to load external resources into the program, be they graphics, audio samples or text strings. Is there a pattern for handling the loading and the handling of such resources? For example: should I have a class that loads all the data and then call it every time I need the data? As in:

GraphicsHandler.instance().loadAllData()
...
// and then later:
draw(x, y, GraphicsHandler.instance().getData(WATER_IMAGE))
// or maybe
draw(x, y, GraphicsHandler.instance().WATER_IMAGE)

Or should I assign each resource to the class where it belongs? As in (for example, in a game):

Graphics g = GraphicsLoader.load(CHAR01);
Character c = new Character(..., g);
...
c.draw();

Generally speaking, which of these two is the more robust solution?

GraphicsHandler.instance().getData(WATER_IMAGE)
// or
GraphicsHandler.instance().WATER_IMAGE // a constant reference
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179541", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75150/" ] }
179,554
What are technical specifications? Are they the same as design documents? If not, what is the difference, and what are some examples?
A software design document can be at the level of a system or a component, and generally includes:

- relevant goals or requirements (functional and non-functional);
- static structure (e.g., components, interfaces, dependencies);
- dynamic behavior (how components interact);
- data models or external interfaces (external to the system/component described in the document); and
- deployment considerations (e.g., runtime requirements, third-party components).

Note that all of these descriptions are at an abstract level. The purpose is to give the reader a broad general understanding of the system or component. There may be many levels of design documents (e.g., system- or component-level).

A technical specification describes the minute detail of either all or specific parts of a design, such as:

- the signature of an interface, including all data types/structures required (input data types, output data types, exceptions);
- detailed class models including all methods, attributes, dependencies and associations;
- the specific algorithms that a component employs and how they work; and
- physical data models including attributes and types of each entity/data type.
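To make the difference in granularity concrete, here is a small illustration (Python is used purely as example notation, and all names are invented): a design document might simply state that "the import component exposes a record parser to the reporting component", while the technical specification pins down the exact signature, data types and error behaviour:

# Illustration only: the level of detail a technical specification nails down.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Record:
    id: int
    amount_cents: int
    currency: str          # ISO 4217 code, e.g. "USD"

class ParseError(Exception):
    """Raised when an input line is not in the expected format."""

class RecordParser(Protocol):
    def parse(self, line: str) -> Record:
        """Parse one CSV line into a Record; raises ParseError on bad input."""
        ...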
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179554", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60327/" ] }
179,572
I have recently been reading The Pragmatic Programmer, which states that:

Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug.

Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition.

I am currently programming a web app that has some models with properties that can only be from a set of values, e.g. (not an actual example, as the web app data is confidential):

light->type = sphere / cube / cylinder

The light type can only be the above three values, but according to TPP I should always code as if they could change and place their values in a config file. As there are several instances of this throughout the app, my question is: should I store possible values like these in:

- a config file: 'light-types' => array(sphere, cube, cylinder), 'other-type' => value, 'etc' => etc-value
- a single table in a database with one line for each config item
- a database with a table for each config item (e.g. table: light_types; columns: id, name)
- some other way?

Many thanks for any assistance / expertise offered.
The same question arises in most of the projects I work on. Usually, I do this:

- If the set of possible values is unlikely to change any time soon, I use class/interface constants or enums in the code and enumerable fields in the database. Example: the publishing state of blog entries: 'not published', 'under moderation', 'published', etc.
- Values will probably change, but changes will not affect program logic: config files. Example: the list of "how did you find our website?" options for a dropdown list in an online purchasing form.
- Values are likely to change frequently and/or are meant to be edited by non-developers, but these changes will still not affect the logic: a database, or at least a key-value storage with some user-friendly interface for editing.
- Changing the values will affect the logic: probably the system needs a redesign (often true), or some business rules engine is needed.

The most difficult case I've seen so far was a psychological-test constructor my colleague worked on. Each type of test may have its own scoring system, which may vary from simple addition to multiple characteristic scales with positive and negative values, or even human evaluation of answers. After some discussion about this project, we ended up using Lua as a scripting engine, which totally conflicts with the ability of non-developers to create new tests (even though Lua is a relatively simple language, you shouldn't expect a non-programmer to learn it).

About the quote from TPP: I think it's true for pristine code, but in real life you'd better start simple (KISS principle) and add features later if they're really needed (YAGNI).
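As a small illustration of the first two cases, in Python that split could look something like this (the config file name and keys are made-up assumptions):

# Sketch: a fixed set of values as an enum, an editable set in a config file.
import json
from enum import Enum
from pathlib import Path

class LightType(Enum):          # unlikely to change: keep it in code
    SPHERE = "sphere"
    CUBE = "cube"
    CYLINDER = "cylinder"

def load_referral_sources(path="options.json"):
    """Values that change but don't affect logic: read them from config."""
    config = json.loads(Path(path).read_text())
    return config.get("referral_sources", [])

light = LightType.SPHERE
print(light.value)                    # "sphere"
# print(load_referral_sources())      # e.g. ["search engine", "friend", "ad"]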
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179572", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75162/" ] }
179,601
I want to ask: I am slowly learning jQuery. What I see is an exact example of the God Object anti-pattern. Basically, everything goes to the $ function, whatever it is. Am I right, and is jQuery really an example of this anti-pattern?
To answer that question, I'm going to ask you a rhetorical question about another structure that has similar properties to the DOM elements that jQuery manipulates: the good old iterator. The question is: how many operations do you need on a simple iterator?

The question can be answered easily by looking at any iterator API in a given language. You need 3 methods:

- Get the current value
- Move the iterator to the next element
- Check if the iterator has more elements

That's all you need. If you can perform those 3 operations, you can go through any sequence of elements. But that is not all you usually want to do with a sequence of elements, is it? You usually have a much higher-level goal to achieve. You may want to do something with every element, you may want to filter them according to some condition, or one of several other things. See the IEnumerable interface in the LINQ library in .NET for more examples. Do you see how many there are? And that is just a subset of all the methods they could have put on the IEnumerable interface, because you usually combine them to achieve even higher goals.

But here is the twist. Those methods are not on the IEnumerable interface. They are simple utility methods that actually take an IEnumerable as input and do something with it. So while in the C# language it feels like there are a bajillion methods on the IEnumerable interface, IEnumerable is not a god object.

Now back to jQuery. Let's ask that question again, this time with a DOM element: how many operations do you need on a DOM element? Again the answer is pretty straightforward. All the methods you need are methods to read/modify the attributes and the child elements. That's about it. Everything else is only a combination of those basic operations. But how much higher-level stuff would you want to do with a DOM element? Well, same as with an iterator: a bajillion different things.

And that's where jQuery comes in. jQuery, in essence, provides two things:

- A very nice collection of utility methods that you may want to call on a DOM element, and
- Syntactic sugar, so that using it is a much better experience than using the standard DOM API.

If you take out the sugared form, you realise that jQuery could easily have been written as a bunch of functions that select/modify DOM elements. For example:

$("#body").html("<p>hello</p>");

...could have been written as:

html($("#body"), "<p>hello</p>");

Semantically it's the exact same thing. However, the first form has the big advantage that the left-to-right order of the statements follows the order in which the operations will be executed. The second starts in the middle, which makes for very hard-to-read code if you combine lots of operations together.

So what does it all mean? That jQuery (like LINQ) is not the God Object anti-pattern. It's instead a case of a very respected pattern called the Decorator.

But then again, what about the overloading of $ to do all those different things? Well, that is really just syntactic sugar. All the calls to $ and its derivatives like $.getJSON() are completely different things that just happen to share similar names, so that you can immediately feel that they belong to jQuery. $ performs one and only one task: letting you have an easily recognizable starting point to use jQuery. And all those methods that you can call on a jQuery object are not a symptom of a god object. They are simply different utility functions that each perform one and only one thing on a DOM element passed as an argument.
The dot notation is only there because it makes writing code easier.
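As a rough illustration of that last point (a made-up Python sketch, not jQuery itself): a thin wrapper whose methods simply delegate to standalone utility functions gives you the readable left-to-right chaining without putting any real logic on the "object":

# Sketch: fluent chaining as sugar over plain utility functions.
# Node, set_text and add_class are invented stand-ins for DOM elements/helpers.
from dataclasses import dataclass, field

@dataclass
class Node:
    tag: str
    text: str = ""
    classes: list = field(default_factory=list)

# The "real" operations are free functions that take the node as an argument.
def set_text(node, text):
    node.text = text
    return node

def add_class(node, name):
    node.classes.append(name)
    return node

class Wrapped:
    """Thin decorator: every method just forwards to a utility function."""
    def __init__(self, node):
        self.node = node
    def text(self, value):
        set_text(self.node, value)
        return self
    def add_class(self, name):
        add_class(self.node, name)
        return self

# Reads left to right, in execution order -- but it's the same calls underneath.
Wrapped(Node("p")).text("hello").add_class("greeting")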
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179601", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75191/" ] }
179,609
I've recently been delving into functional programming, especially Haskell and F#, the former more so. After some googling around I could not find a benchmark comparison of the more prominent functional languages (Scala, F#, etc.). I know it's not necessarily fair to some of the languages (Scala comes to mind), given that they are hybrids, but I just want to know which outperforms which on what operations, and overall.
According to the Great Benchmarks Game, ATS is faster than the rest, with Haskell, Scala, and one of the variants of Common Lisp in a rough tie for speed close behind. After that, OCaml and F# are in roughly the same speed category, with Racket and Clojure lagging behind...

However, almost none of this means anything at all, really. It's all a question of problem, machine, compiler, coding techniques, and in some cases plain luck. Generally speaking, directly machine-coded languages like Haskell will outperform VM-compiled languages like F#, and vastly outperform purely interpreted languages. Also generally, statically typed languages are faster than dynamically typed ones, due to static analysis allowing all type operations to be calculated at compile time rather than run time. Again, these are general rules; there will always be exceptions. "Paradigms" have little to do with it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179609", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75202/" ] }
179,616
I read an interview with a great programmer (it is not in English) and in it he said that "a great programmer can be 10 times as good as a mediocre one", giving this as the reason why good programmers are very well paid and why programming companies provide many perks for their employees. The idea was that there is a very large demand for good programmers because of the above reason, and that's why companies pay a lot to bring them in. Do you agree with this statement? Do you know any objective facts that could support it?

Edit: The question has nothing to do with experience; if you talk about one great programmer with 1 year of experience, then s/he should be 10 times more productive than a mediocre programmer with 1 year of experience. I agree that from a certain number of years of experience onwards things start to dissipate, but that's not the purpose of the question.
A genuinely terrible programmer can have sub-zero productivity (the bugs they introduce take longer to fix than it would take to just do all of their work for them). And a genuinely great programmer can do things that poor and average programmers would simply never achieve, regardless of how much time you gave them. So for these reasons, it's hard to talk about "10x as productive" or "100x as productive". The thing to remember, though, is that most employers of programmers do not have any need for them to do the difficult tasks that average programmers could not manage. Most code being written is websites, line of business apps, intranet apps, etc., much of it really not that difficult. The productive programmer in that environment is the one who is best at understanding and implementing the users' needs, not the one who can write the cleverest code. Indeed, most employers of programmers would be better off with a good programmer rather than a great one, because the great one will just get bored and leave. Gotta find a good match between programmers and jobs.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179616", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57792/" ] }
179,682
I've come to notice that a lot of the software companies use for managing things like time, expenses, setting up phone systems, etc. is very non-intuitive from a user experience point of view. I know I personally waste a lot of time just trying to figure out how to navigate these systems, especially if I don't have a co-worker close by whom I can bug to help me out. The help files are usually just as bad as the user interface itself. Are companies that complacent, or are there just not any comparable enterprise products out there which do the job for these sorts of tasks? It seems that on the consumer side there is plenty of market opportunity for creating better user experiences, but how about for enterprise software? Obviously a certain level of slickness is not going to matter to a company, but when a better UX design translates to time saved, it's hard to argue against that.

Edit: I'm not referring to in-house applications, but rather off-the-shelf systems from large software companies.
It matters a lot. Good UX increases productivity. If UX is good, the company can focus on "how to do their stuff" instead of "how to use your software to do their stuff", and it takes less time to teach new workers. And, good UX will drastically decrease the number of support tickets, so you can spend more time on resolving serious issues rather than "how to use" issues.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179682", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55327/" ] }
179,746
I am currently working on a rather large project, and I have used JUnit and EasyMock to fairly extensively unit test functionality. I am now interested in what other types of testing I should worry about. As a developer, is it my responsibility to worry about things like functional or regression testing? Is there a good way to integrate these in a usable way into tools such as Maven/Ant/Gradle? Are they better suited to a tester or BA? Are there other useful types of testing that I am missing?
It is your responsibility to strive to deliver defect-free code. You should write, help write, or ensure tests get written or performed in order to give you confidence in the code you are delivering. Note: I'm not saying you are required to deliver defect-free code. Rather, you should attempt to write the best code you can for the requirements you were given. Part of being able to do that means the code should be tested. Whether that means you are personally responsible for functional and regression tests is mostly a function of how your company is organized. All of the highest skilled programmers I know don't ask themselves "is it my responsibility to write tests of type X?". Instead, they ask themselves "what must I do to make sure my code is properly tested?". The answer might be to write unit tests, or to add tests to the regression, or it might mean to talk to a QA professional and help them understand what tests need to be written. In all cases, however, it means that they care enough about the code they are writing to make sure it is properly tested. Bottom line: you should be responsible for delivering high quality code. If that means you need to write some functional or regression tests, do it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179746", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75303/" ] }
179,863
I have recently read some articles (e.g. http://dailyjs.com/2012/09/14/functional-programming/ ) about the functional aspects of JavaScript and the relationship between Scheme and JavaScript (the latter was influenced by the former, which is a functional language, while the O-O aspects are inherited from Self, which is a prototype-based language). However, my question is more specific: I was wondering if there are metrics about the performance of recursion vs. iteration in JavaScript. I know that in some languages (where by design iteration performs better) the difference is minimal because the interpreter/compiler converts recursion into iteration; however, I guess that this is probably not the case for JavaScript, since it is, at least partially, a functional language.
JavaScript does not perform tail recursion optimization, so if your recursion is too deep, you may get a call stack overflow. Iteration doesn't have such issues. If you think you are going to recurse too much, and you really need recursion (for example, to do flood fill), replace the recursion with your own stack. Recursion performance is probably worse than iteration performance, because function calls and returns require state preservation and restoration, while iteration simply jumps to another point in a function.
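For example, here is what replacing recursion with your own stack looks like for a simple flood fill (a minimal Python sketch; the grid-of-color-values representation is just an assumption):

# Sketch: flood fill without recursion, using an explicit stack.
# The grid is assumed to be a list of lists of color values.
def flood_fill(grid, start_row, start_col, new_color):
    old_color = grid[start_row][start_col]
    if old_color == new_color:
        return
    stack = [(start_row, start_col)]     # our own stack instead of call frames
    while stack:
        row, col = stack.pop()
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            continue
        if grid[row][col] != old_color:
            continue
        grid[row][col] = new_color
        # Push the four neighbours; no call-stack depth limit to worry about.
        stack.extend([(row + 1, col), (row - 1, col),
                      (row, col + 1), (row, col - 1)])

image = [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
flood_fill(image, 0, 0, 2)
print(image)   # [[2, 2, 1], [2, 1, 1], [2, 2, 2]]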
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179863", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61033/" ] }
179,924
(This is mainly aimed at those who have specific knowledge of low-latency systems, to avoid people just answering with unsubstantiated opinions.)

Do you feel there is a trade-off between writing "nice" object-oriented code and writing very fast, low-latency code? For instance, avoiding virtual functions in C++, the overhead of polymorphism, etc., and rewriting code so that it looks nasty but is very fast?

It stands to reason: who cares if it looks ugly (so long as it's maintainable)? If you need speed, you need speed. I would be interested to hear from people who have worked in such areas.
Do you feel there is a trade-off between writing "nice" object-oriented code and writing very fast, low-latency code?

Yes. That's why the phrase "premature optimization" exists. It exists to force developers to measure their performance, and only optimize the code that will make a difference in performance, while sensibly designing their application architecture from the start so that it doesn't fall down under heavy load. That way, to the maximum extent possible, you get to keep your pretty, well-architected, object-oriented code, and only optimize with ugly code those small portions that matter.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179924", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41217/" ] }
179,931
What is a better approach when it comes to performance and optimal resource utilization: accessing a database multiple times through AJAX to only get the exact information needed when it is needed, or performing one access to retrieve an object that holds all information that might be needed, with a high probability that not all is actually needed? I know how to benchmark the actual queries, but I don't know how to test what is best when it comes to database performance when thousands of users are accessing the database simultaneously and how connection pooling comes into play.
There is no one correct answer to this; like any optimization, it depends heavily on context / usage. However, consider the following as a rule of thumb:

x +: Data is stable / static
x -: Data is dynamic / volatile
y +: Data is frequently used
y -: Data is infrequently used

++: fetch large chunks in the fewest number of fetches and persist the data as long as possible within tolerances for staleness.

+-: do what is expedient to the logic & usage; if it is convenient to fetch / calc as needed, do so; if it is convenient to pre-fetch and persist, then do so. Seek to optimize only if absolutely necessary.

-+: fetch / calc as needed, but if optimization is required, consider pre-fetching or pre-calculating if possible, or negotiate a tolerance for less than real-time accuracy to reduce volatility.

--: fetch / calc as needed and don't worry about it further unless a specific case is unacceptably expensive; if so, see -+.
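As a concrete sketch of the ++ case, "persist the data within a staleness tolerance" can be as simple as a small time-based cache in front of one bulk fetch (the fetch function, its return value, and the 300-second tolerance below are made-up placeholders):

# Sketch: fetch a large chunk once and reuse it until it goes stale.
import time

_CACHE = {"data": None, "fetched_at": 0.0}
STALENESS_TOLERANCE = 300  # seconds; pick per the data's volatility

def fetch_all_reference_data():
    # Placeholder for one big database/API call.
    return {"countries": ["US", "DE", "JP"]}

def get_reference_data():
    now = time.time()
    if _CACHE["data"] is None or now - _CACHE["fetched_at"] > STALENESS_TOLERANCE:
        _CACHE["data"] = fetch_all_reference_data()
        _CACHE["fetched_at"] = now
    return _CACHE["data"]

print(get_reference_data()["countries"])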
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179931", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75344/" ] }
179,965
I have been interested in some of the concepts of functional programming lately. I have used OOP for some time now. I can see how I would build a fairly complex app in OOP. Each object would know how to do the things that object does, or anything its parent class does as well. So I can simply tell Person().speak() to make the person talk.

But how do I do similar things in functional programming? I see how functions are first-class items. But a function only does one specific thing. Would I simply have a say() method floating around and call it with the equivalent of a Person() argument, so I know what kind of thing is saying something?

So I can see the simple things, but how would I do the equivalent of OOP and objects in functional programming, so I can modularize and organize my code base?

For reference, my primary experience with OOP is Python, PHP, and some C#. The languages that I am looking at that have functional features are Scala and Haskell, though I am leaning towards Scala.

Basic example (Python):

class Animal(object):
    def say(self, what):
        print(what)

class Dog(Animal):
    def say(self, what):
        super().say('dog barks: {0}'.format(what))

class Cat(Animal):
    def say(self, what):
        super().say('cat meows: {0}'.format(what))

dog = Dog()
cat = Cat()
dog.say('ruff')
cat.say('purr')
What you are really asking about here is how to do polymorphism in functional languages, i.e. how to create functions that behave differently based on their arguments.

Note that the first argument to a function is typically equivalent to the "object" in OOP, but in functional languages you usually want to separate functions from data, so the "object" is likely to be a pure (immutable) data value.

Functional languages in general provide various options for achieving polymorphism:

- Something like multimethods, which call a different function based on examining the arguments provided. This can be done on the type of the first argument (which is effectively equal to the behaviour of most OOP languages), but could also be done on other attributes of the arguments.
- Prototype / object-like data structures which contain first-class functions as members. So you could embed a "say" function inside your dog and cat data structures. Effectively you have made the code part of the data.
- Pattern matching, where pattern-matching logic is built into the function definition and ensures different behaviours for different parameters. Common in Haskell.
- Branching / conditions, equivalent to if / else clauses in OOP. Might not be highly extensible, but can still be appropriate in many cases when you have a limited set of possible values (e.g. was the function passed a number, a string, or null?).

As an example, here's a Clojure implementation of your problem using multimethods:

;; define a multimethod, dispatched on the ":type" keyword
(defmulti say :type)

;; define specific methods for each possible value of :type. You can add more later.
(defmethod say :cat [animal what]
  (println (str "Cat purrs: " what)))

(defmethod say :dog [animal what]
  (println (str "Dog barks: " what)))

(defmethod say :default [animal what]
  (println (str "Unknown noise: " what)))

(say {:type :dog} "ruff")
=> Dog barks: ruff

(say {:type :ape} "ook")
=> Unknown noise: ook

Note that this behaviour doesn't require any explicit classes to be defined: regular maps work fine. The dispatch function (:type in this case) could be any arbitrary function of the arguments.
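Since the question's example is in Python, here is a rough Python analogue of that dispatch-on-a-key idea (my own sketch using a plain registry dict and a decorator; the names are invented, and this is not a standard library feature):

# Sketch: dispatch on a "type" key in a plain dict, mirroring the Clojure
# multimethod above.
_SAY_METHODS = {}

def say_method(kind):
    """Register an implementation of say() for a given animal kind."""
    def register(func):
        _SAY_METHODS[kind] = func
        return func
    return register

@say_method("cat")
def _say_cat(animal, what):
    print("Cat purrs: " + what)

@say_method("dog")
def _say_dog(animal, what):
    print("Dog barks: " + what)

def say(animal, what):
    default = lambda a, w: print("Unknown noise: " + w)
    handler = _SAY_METHODS.get(animal.get("type"), default)
    handler(animal, what)

say({"type": "dog"}, "ruff")   # Dog barks: ruff
say({"type": "ape"}, "ook")    # Unknown noise: ook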
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179965", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10004/" ] }
179,982
I often encounter the following statements / arguments:

- Pure functional programming languages do not allow side effects (and are therefore of little use in practice, because any useful program does have side effects, e.g. when it interacts with the external world).
- Pure functional programming languages do not allow you to write a program that maintains state (which makes programming very awkward, because in many applications you do need state).

I am not an expert in functional languages, but here is what I have understood about these topics so far.

Regarding point 1, you can interact with the environment in purely functional languages, but you have to explicitly mark the code (functions) that introduces side effects (e.g. in Haskell by means of monadic types). Also, as far as I know, computing by side effects (destructively updating data) should also be possible (using monadic types?), even though it is not the preferred way of working.

Regarding point 2, as far as I know you can represent state by threading values through several computation steps (in Haskell, again, using monadic types), but I have no practical experience doing this and my understanding is rather vague.

So, are the two statements above correct in any sense, or are they just misconceptions about purely functional languages? If they are misconceptions, how did they come about? Could you write a (possibly small) code snippet illustrating the idiomatic Haskell way to (1) implement side effects and (2) implement a computation with state?
For the purposes of this answer I define "purely functional language" to mean a functional language in which functions are referentially transparent, i.e. calling the same function multiple times with the same arguments will always produce the same results. This is, I believe, the usual definition of a purely functional language.

Pure functional programming languages do not allow side effects (and are therefore of little use in practice because any useful program does have side effects, e.g. when it interacts with the external world).

The easiest way to achieve referential transparency would indeed be to disallow side effects, and there are indeed languages in which that is the case (mostly domain-specific ones). However, it is certainly not the only way, and most general-purpose purely functional languages (Haskell, Clean, ...) do allow side effects.

Also, saying that a programming language without side effects is of little use in practice isn't really fair, I think - certainly not for domain-specific languages, but even for general-purpose languages I'd imagine a language can be quite useful without providing side effects. Maybe not for console applications, but I think GUI applications can be nicely implemented without side effects in, say, the functional reactive paradigm.

Regarding point 1, you can interact with the environment in purely functional languages but you have to explicitly mark the code (functions) that introduces them (e.g. in Haskell by means of monadic types).

That's oversimplifying it a bit. Just having a system where side-effecting functions need to be marked as such (similar to const-correctness in C++, but with general side effects) is not enough to ensure referential transparency. You need to ensure that a program can never call a function multiple times with the same arguments and get different results. You could either do that by making things like readLine be something that's not a function (that's what Haskell does with the IO monad), or you could make it impossible to call side-effecting functions multiple times with the same argument (that's what Clean does). In the latter case the compiler would ensure that every time you call a side-effecting function, you do so with a fresh argument, and it would reject any program where you pass the same argument to a side-effecting function twice.

Pure functional programming languages do not allow you to write a program that maintains state (which makes programming very awkward because in many applications you do need state).

Again, a purely functional language might very well disallow mutable state, but it's certainly possible to be pure and still have mutable state, if you implement it in the same way as I described for side effects above. Really, mutable state is just another form of side effect.

That said, functional programming languages definitely do discourage mutable state - pure ones especially so. And I don't think that that makes programming awkward - quite the opposite. Sometimes (but not all that often) mutable state can't be avoided without losing performance or clarity (which is why languages like Haskell do have facilities for mutable state), but most often it can.

If they are misconceptions, how did they come about?

I think many people simply read "a function must produce the same result when called with the same arguments" and conclude from that that it's not possible to implement something like readLine or code that maintains mutable state. So they're simply not aware of the "cheats" that purely functional languages can use to introduce these things without breaking referential transparency. Also, mutable state is heavily discouraged in functional languages, so it isn't all that much of a leap to assume it's not allowed at all in purely functional ones.

Could you write a (possibly small) code snippet illustrating the Haskell idiomatic way to (1) implement side effects and (2) implement a computation with state?

Here's an application in Pseudo-Haskell that asks the user for a name and greets them. Pseudo-Haskell is a language that I just invented, which has Haskell's IO system but uses more conventional syntax, more descriptive function names, and no do-notation (as that would just distract from how exactly the IO monad works):

greet(name) = print("Hello, " ++ name ++ "!")

main = composeMonad(readLine, greet)

The clue here is that readLine is a value of type IO<String> and composeMonad is a function that takes an argument of type IO<T> (for some type T) and another argument that is a function which takes an argument of type T and returns a value of type IO<U> (for some type U). print is a function that takes a string and returns a value of type IO<void>.

A value of type IO<A> is a value that "encodes" a given action that produces a value of type A. composeMonad(m, f) produces a new IO value that encodes the action of m followed by the action of f(x), where x is the value produced by performing the action of m.

Mutable state would look like this:

counter = mutableVariable(0)

increaseCounter(cnt) =
    setIncreasedValue(oldValue) = setValue(cnt, oldValue + 1)
    composeMonad(getValue(cnt), setIncreasedValue)

printCounter(cnt) = composeMonad(getValue(cnt), print)

main = composeVoidMonad(
    increaseCounter(counter),
    printCounter(counter)
)

Here mutableVariable is a function that takes a value of any type T and produces a MutableVariable<T>. The function getValue takes a MutableVariable<T> and returns an IO<T> that produces its current value. setValue takes a MutableVariable<T> and a T and returns an IO<void> that sets the value. composeVoidMonad is the same as composeMonad, except that the first argument is an IO that does not produce a sensible value and the second argument is another monad, not a function that returns a monad.

In Haskell there is some syntactic sugar that makes this whole ordeal less painful, but it's still obvious that mutable state is something the language doesn't really want you to do.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/179982", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29020/" ] }
180,012
Possible duplicate: Writing Web "server less" applications

So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side JavaScript to write directly to the publicly available CouchDB server?

Since this is basically a CRUD application and the business logic consists of "only the author can edit their post", I don't see much of a need for a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data, and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages.

Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea?

EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications

EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:

- The database can authenticate users directly from its hidden store
- All database communication would happen over SSL
- Data validation can (but maybe shouldn't?) be handled by the database
- The only authorization we care about, other than admin functions, is someone only being allowed to edit their own post
- We're perfectly fine with everyone being able to read all data (EXCEPT user records, which may contain password hashes)
- Administrative functions would be restricted by database authorization
- No one can add themselves to an administrator role
- The database is relatively easy to scale
- There is little to no true business logic; this is a basic CRUD app
Doing as you suggest creates a tight(er) coupling between your client side language and your database. That can be okay - there's less code to write and maintain, and in theory debugging could / should go a little quicker. On the other hand, it makes other aspects more difficult. If / when you need to change either of those technologies, you'll have a harder time because of the tight coupling between them. Protecting yourself against attacks will be (quite) a bit more difficult. You're assuming that the client will always present nicely formatted requests to the database. That presumes no one will ever hack the client side code to insert malicious statements. In other words, they'll "borrow" your authentication mechanisms and replace the normal client code with theirs. I wouldn't recommend it, and many would vehemently tell you not to. But it can be done.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180012", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75474/" ] }
180,050
I am an experienced Java programmer, and I want to create a complex web application requiring dynamic pages, drawings, etc. (take SO as an example). Do I have to learn JavaScript/HTML in order to create such an application? It is not that I don't want to learn another language (I've done this before), but technology in the JavaScript environment seems to change so fast that by the time you finish learning one framework it is already obsolete. I have checked a number of Java frameworks for web development (Spring, Play), but not deeply. So can these frameworks (or other possible Java frameworks that I'm not aware of) be used without learning HTML/JavaScript? I also have some Python experience, so if I can do the app in Python that is also an option.
You don't have to learn JavaScript and HTML to create web applications. But you will.

If you really want to write webapps in mostly Java, have a look at the Google Web Toolkit, which does vast amounts of Java-to-JS translation and can satisfy a good chunk of the code needed for a webapp. Django is a similar framework for Python. And if you really want to avoid writing HTML, there are vast amounts of templates and what-you-see-is-what-you-get editors out there.

But you see, regardless of the abstraction framework and HTML templates you start with, at some point you'll be dissatisfied with the presentation. And so you'll get enough HTML/JS on your hands to change the one tiny little thing you want. And another thing. And another. And then one day you'll wake up in a cold sweat. And that's how you'll learn.

That's how a lot of us learned, back in the era of point-and-click website makers like Geocities. After a while, if you're serious about the web, you'll learn the languages of the web, intentionally or not.

So you don't have to learn HTML and JavaScript to make a site like StackOverflow. But if you really try to make a site like StackOverflow, you won't be able to stop yourself from learning them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180050", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35897/" ] }
180,068
I discovered some time ago that the GOTO control keyword was introduced in PHP 5.3.0: http://php.net/manual/en/control-structures.goto.php

Why did it happen? What were the language design goals behind this? Did the PHP developer community ask for it?
The person responsible gives the reason for adding it in this blog post:

[T]he project I was working on (and may pick back up someday) was a workflow based site builder […] for making dynamic websites without knowing a programming language. You build up all the meta-information about how your site should run, then the site-builder “compiles” that into PHP code that’ll run on the actual server. It can be done without GOTO, but the resulting logic for the particularly complex bits of business logic become much simpler with it.

In PHP's defense, it only lets you jump to labels within a limited scope, which limits the harm that can be done. (You might also argue that, given the way PHP has apparently absorbed every construct from every programming language ever invented, the addition of goto was simply inevitable.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180068", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61852/" ] }
180,108
I am new to web crawling and I am testing my crawlers. I have been doing tests on various sites, and I forgot about the robots.txt file during my tests. I just want to know: what will happen if I don't follow the robots.txt file, and what is the safe way of doing the crawling?
The Robot Exclusion Standard is purely advisory; it's completely up to you whether you follow it or not, and if you aren't doing something nasty, chances are that nothing will happen if you choose to ignore it.

That said, when I catch crawlers not respecting robots.txt on the various websites I support, I go out of my way to block them, regardless of whether they are troublesome or not. Even legit crawlers may bring a site to a halt with too many requests to resources that aren't designed to handle crawling, so I'd strongly advise you to reconsider and adjust your crawler to fully respect robots.txt.
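If your crawler happens to be written in Python, for example, the standard library already ships a robots.txt parser, so respecting it only takes a few lines (the user agent string and URLs below are placeholders):

# Sketch: check robots.txt before fetching a page, using the standard library.
from urllib import robotparser

USER_AGENT = "MyCrawler"

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/some/page.html"
if rp.can_fetch(USER_AGENT, url):
    print("allowed to fetch", url)    # go ahead and download it
else:
    print("disallowed by robots.txt, skipping", url)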
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180108", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75552/" ] }
180,125
Is else while without intervening braces considered "safe" maintenance-wise?

Writing if-else code without braces like below...

if (blah)
    foo();
else
    bar();

...carries a risk, because the lack of braces makes it very easy to change the meaning of the code inadvertently. However, is the below also risky?

if (blah) {
    ...
}
else while (!bloop()) {
    bar();
}

Or is else while without intervening braces considered "safe"?
That reminds me of this code:

if ( ... )
    try {
        ..
    } catch (Exception e) {
        ..
    }
else {
    ...
}

Every time you combine two types of blocks, forgetting braces and not increasing indentation, you are creating code that is very difficult to understand and maintain.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180125", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11833/" ] }
180,165
How do people define the minimum hardware requirements for software? For example: how can a software development company tell the customer that they will need 8 GB of RAM to run the program properly?
First off, not all requirements are hard requirements; they describe the minimum supported hardware. If someone has less than the minimum, it may run, but not optimally, or it may not run at all. In either case, it's not a supported system and the problems you have are your own.

The simplest way to get hardware requirements is to guess. The developer looks at their machine and says, "Yep, it runs on mine; those are the requirements."

In a more rigorous environment, the development company has a suite of test systems. It may not be in house (non-in-house Apple developers occasionally use the Apple Compatibility Lab). As part of the testing process, one tests on all the hardware available and determines the minimum requirements for it to run.

Another factor in hardware requirements is the base requirements of the operating system. In theory, Windows 7 requires a minimum of 1 GB of RAM to run, so testing against a 512 MB system running Windows 7 is nonsensical. Test the system running with 1 GB of RAM. Does it work? Nope... upgrade the RAM. Repeat the tests and upgrades until the application works in a supportable way, and list those as the minimum requirements.

When performance becomes part of the promise of the software, 'supportable' includes, in addition to the application actually running, that its operation meets the minimum performance expectation.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180165", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75596/" ] }
180,216
I saw a conference talk by Herb Sutter where he encourages every C++ programmer to use auto. I had to read C# code some time ago where var was extensively used and the code was very hard to understand—every time var was used I had to check the return type of the right side. Sometimes more than once, because I forgot the type of the variable after a while!

I know the compiler knows the type and I don't have to write it, but it is widely accepted that we should write code for programmers, not for compilers. I also know that it is easier to write:

auto x = GetX();

than:

someWeirdTemplate<someOtherVeryLongNameType, ...>::someOtherLongType x = GetX();

But this is written only once, while the GetX() return type is checked many times to understand what type x has. This made me wonder—does auto make C++ code harder to understand?
Short answer: my current opinion on auto is that you should use auto by default unless you explicitly want a conversion. (Slightly more precisely, "... unless you want to explicitly commit to a type, which nearly always is because you want a conversion.")

Longer answer and rationale:

Write an explicit type (rather than auto) only when you really want to explicitly commit to a type, which nearly always means you want to explicitly get a conversion to that type. Off the top of my head, I recall two main cases:

- (Common) The initializer_list surprise: auto x = { 1 }; deduces initializer_list. If you don't want initializer_list, say the type -- i.e., explicitly ask for a conversion.
- (Rare) The expression templates case, such as auto x = matrix1 * matrix2 + matrix3; capturing a helper or proxy type not meant to be visible to the programmer. In many cases it's fine and benign to capture that type, but sometimes if you really want it to collapse and do the computation then say the type -- i.e., again explicitly ask for a conversion.

Routinely use auto by default otherwise, because using auto avoids pitfalls and makes your code more correct, more maintainable and robust, and more efficient. Roughly in order from most to least important, in the spirit of "write for clarity and correctness first":

- Correctness: Using auto guarantees you'll get the right type. As the saying goes, if you repeat yourself (say the type redundantly), you can and will lie (get it wrong). Here's a usual example: void f( const vector<int>& v ) { for( /*…*/ -- at this point, if you write the iterator's type explicitly, you want to remember to write const_iterator (did you?), whereas auto just gets it right.
- Maintainability and robustness: Using auto makes your code more robust in the face of change, because when the expression's type changes, auto will continue to resolve to the correct type. If you instead commit to an explicit type, changing the expression's type will inject silent conversions when the new type converts to the old type, or needless build breaks when the new type still works like the old type but doesn't convert to it (for example, when you change a map to an unordered_map, which is always fine if you aren't relying on order: using auto for your iterators, you'll seamlessly switch from map<>::iterator to unordered_map<>::iterator, but using map<>::iterator everywhere explicitly means you'll be wasting your valuable time on a mechanical code fix ripple, unless an intern is walking by and you can foist off the boring work on them).
- Performance: Because auto guarantees no implicit conversion will happen, it guarantees better performance by default. If instead you say the type, and it requires a conversion, you will often silently get a conversion whether you expected it or not.
- Usability: Using auto is your only good option for hard-to-spell and unutterable types, such as lambdas and template helpers, short of resorting to repetitive decltype expressions or less-efficient indirections like std::function.
- Convenience: And, yes, auto is less typing. I mention that last for completeness, because it's a common reason to like it, but it's not the biggest reason to use it.

Hence: prefer to say auto by default. It offers so much simplicity and performance and clarity goodness that you're only hurting yourself (and your code's future maintainers) if you don't. Only commit to an explicit type when you really mean it, which nearly always means you want an explicit conversion.
Yes, there is (now) a GotW about this.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180216", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13456/" ] }
180,353
I've noticed that a lot of companies use "reverse domain name" namespaces and I'm curious where that practice originated and why it continues. Does it merely continue because of rote practice, or is there an outstanding architecture concept I might be missing here? Also note questions such as: https://stackoverflow.com/questions/189209/do-you-really-use-your-reverse-domain-for-package-naming-in-java which sort of answers my question, but not 100%. (If it makes you feel any better, I'm really curious whether I should be using it for my JavaScript namespacing efforts, but I'm more curious about the when and why, and that should help guide me on the JavaScript answer; nota bene: "window".) There are also examples of this practice extending to folders and files.
Reverse Domain Notation has its origins in Java, but is widely used in many platforms, such as Android Packages, Mac OS X Packages, JavaScript, ActionScript, and more. The practice is extremely useful because it provides a decentralized system for namespacing software. There is no need to apply to a centralized agency for a namespace; simply use the domain name you own (reversed) and manage that within your own organization. By naming packages like this, one can be almost certain that code won't conflict with other packages. From Oracle's Java Tutorials : Companies use their reversed Internet domain name to begin their package names; for example, com.example.mypackage for a package named mypackage created by a programmer at example.com. Name collisions that occur within a single company need to be handled by convention within that company, perhaps by including the region or the project name after the company name (for example, com.example.region.mypackage). It's more than a rote practice; it's good practice because it's a complete and fully specific namespace. If there were two companies named Acme and both chose the namespace acme. , their code would conflict. But only one of those companies can own the domain acme.com , so they get to use the com.acme. namespace. Reversing the domain name allows for a top-down architecture. com would contain code for companies (or anyone who owns a .com domain name), and underneath that would be company (domain) names. Then, deeper within that would be the structure of the organization and/or the actual namespace. (For example, if it was code from a network called internal.acme.com , that gives this department their own sub-namespace of com.acme .) This top-down structure is used in a number of applications, including in systems administration. (It's similar to reverse IP address lookups.) Personally, I use it for all new JavaScript code I write for my company. It ensures that the code will never conflict with any other code, even if I later write the same code for another company. It can make accessing the code cumbersome (typing com.digitalfruition. can get a bit much) but that can easily be worked around with a closure and a local variable ( var DF = com.digitalfruition ).
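The examples in this answer are Java and JavaScript, but the convention is language-agnostic. As a hedged illustration only (the company and module names are invented, and this is not claiming reverse-domain naming is standard C++ practice), the same idea sketched with C++17 nested namespaces:

#include <iostream>

// Hypothetical company "Acme" owning acme.com: reversing the domain gives a
// namespace root no other organization can legitimately claim.
namespace com::acme::billing {               // C++17 nested-namespace syntax
    void charge(double amount) {
        std::cout << "charging " << amount << '\n';
    }
}

int main() {
    com::acme::billing::charge(9.99);        // fully qualified, collision-free
    namespace billing = com::acme::billing;  // local alias, like var DF = com.digitalfruition
    billing::charge(1.50);
    return 0;
}

The alias at the end plays the same role as the closure/local-variable trick mentioned above: the long, unambiguous name exists once, and day-to-day code uses a short handle.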
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180353", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2356/" ] }
180,414
I need to write a cross-platform GUI application to process (in multiple threads) and visualize fairly large quantities of data. Ideally the application should be relatively fast and look good. The app's interface will consist of a table widget, a tree widget, and a custom figure-drawing widget. The user will be able to modify the data from any one of these widgets and the changes should be immediately reflected in the other widgets. Naturally, I am planning to use MVC. However, I normally do all my GUI programming in C++/Qt, and have very limited exposure to Java. So I would really appreciate any advice on how to organize such an app in Java. In particular, should I use Swing or JavaFX? Which widgets would you pick for the job? Could you recommend any books/online tutorials that cover these aspects of the Java platform? I will greatly appreciate any feedback. Thanks! (this question was originally posted on Stack Overflow , but this site was suggested as a more appropriate place to ask it)
This is all very subjective as multiple different technologies will be able to serve your needs. Like others, I'll probably just recommend the technology I'd personally prefer for such a project, which would be JavaFX . Reasons I would recommend JavaFX are: In-built properties and binding make it easy to wire up parts of your GUI for automatic updates. Quality graphical design tool . Separation of UI component layout from control logic using FXML . High quality, hardware accelerated, platform independent rendering architecture . Wide variety of supported platforms and systems . Support for touch interaction . Separation of style from code using CSS . SceneGraph based retained rendering model rather than a direct draw model. Quality documentation and support forums plus help available via StackOverflow and a web blogging community . Wide variety of quality pre-built controls . Media support . Modern, performant HTML5 engine . Embeddable in Swing applications if needed, so you can still make use of Swing based frameworks . Embeddable in SWT applications if needed, so you can still make us of SWT based frameworks. Ability to support immediate mode rendering if required. Effects framework . Animation framework . Graphing framework . Open source development . Excellent sample program . Public issue tracker and responsive developers , doing active, on-going development so that new features and bugs get addressed in a timely way. Backing of a major company prepared to invest considerable resources to make the project successful. Deployable via native platform installers and executables . Able to utilize all of the features of the massive JDK and java based libraries and projects . Can be used from a variety of programming languages, such as Scala , Groovy and Ruby . Nice interactive development and debugging utilities as well as the ability to use a standard Java debugger to debug your program. Quality layout managers . JavaFX has been designed to take advantage of Java 8 lambdas . JavaFX applications are deployable to commercial app stores . JavaFX can be programmed using the best IDEs in the world . Strong concurrency utilities . OK, I guess the above is a bit of a tangent, but I like to promote JavaFX ;-) Anyway, to address the specific points in your question: JavaFX is a cross-platform GUI framework (currently Mac/Windows/Linux). JavaFX has inbuilt, high quality multiple threading support (the core framework itself is single-threaded as is just about every other GUI platform out there, but you can define your own tasks to be executed concurrently off the GUI thread so that the GUI thread remains responsive). A well written JavaFX program should have no issues performing well enough to visualize fairly large quantities of data. Here is a sample JavaFX project which does that. The ability to style the application via css, make use of visual effects and animation, allows you to work with designers to make your app look great or do it yourself if you have the skills. JavaFX has a TableView and a TreeView you can use. There is also a combined TreeTable control which has been developed in preparation for the Java 8 release. The custom figure drawing widget could use scene graph shapes or a direct draw canvas - it's really just a matter of coding style and preference. JavaFX's properties and binding facilities coupled with it's event listener frameworks makes it trivial to modify data using one control and have the changes immediately reflected in other controls. 
For MVC style development, you can write your model using plain Java objects, define your view using the FXML markup language (created using the SceneBuilder graphical layout tool if desired) and define your control logic in Java (or a scripting language if you prefer), plus you should separate your style from your logic by using CSS. As you have limited exposure to Java, the learning curve will be significant. There are excellent Java tutorials in addition to the JavaFX tutorials I linked to earlier which can help with this. JavaFX core code uses only facilities available in the JDK, so to use JavaFX you don't need to learn a lot of additional libraries and frameworks (as you would if you were learning JavaEE for instance). So those two tutorial sites provide most of the information you need to get up to speed. For app organization, if you have a complex application and need the backing of a complete, proven client application framework and don't mind doing a lot of extra learning, then you can embed JavaFX in the Eclipse RCP or NetBeans RCP , which, according to their pages, could save you years of development time. I'd only advise looking at such platforms if your needs justify it though, because for small to medium size projects, it will probably be simpler to code the stuff up directly in JavaFX without complex client platform frameworks such as Eclipse and NetBeans. As to whether you should use Swing or JavaFX, I'm not really the most objective person to answer that. JavaFX definitely has shortcomings which would become apparent to you as you start to use it (as does Swing). For online tutorials, I've linked to the most relevant ones. There is a nice tutorial set for a complete app . The books Pro JavaFX 2 and JavaFX 2.0 Introduction by Example are highly rated.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180414", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75759/" ] }
180,501
Look at this: <?php echo "Hello World"; ?> <br /> <?php echo "Welcome"; ?> And now look at this: <?php echo "Hello World"; echo "<br />"; echo "Welcome"; ?> Which one of the above examples is considered better (at least from a performance point of view)? I know the examples are trivial, but I want to follow good practice habits from the beginning, so that when I have more and more lines they won't affect performance or anything else in a negative way.
As the comments point out, this should really be done with templates. But if only your two choices are possible, case 1 is better. This is the same concept as Android inflating a layout from XML versus building the UI programmatically. Your <br /> is static content in your HTML, and all your server has to do is send it. But if you produce it with PHP, the server has to execute code, which uses its resources. This is how it should be done: index.php: <?php include 'header.html'; ... include 'footer.html'; ?> header.html and footer.html are templates.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180501", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75799/" ] }
180,555
In functional programming, since almost all data structures are immutable, when the state has to change a new structure is created. Does this mean a lot more memory usage? I know the object oriented programming paradigm well, and now I'm trying to learn about the functional programming paradigm. The concept of everything being immutable confuses me. It would seem like a program using immutable structures would require much more memory than a program with mutable structures. Am I even looking at this in the right way?
The only correct answer to this is "sometimes". There are a lot of tricks that functional languages can use to avoid wasting memory. Immutability makes it easier to share data between functions, and even between data structures, since the compiler can guarantee that the data won't be modified. Functional languages tend to encourage the use of data structures that can be used efficiently as immutable structures (for instance, trees instead of hash tables). If you add laziness into the mix, like many functional languages do, that adds new ways to save memory (it also adds new ways of wasting memory, but I'm not going to go into that).
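A minimal sketch of the sharing idea in C++ (my own illustration, not how any particular functional runtime is implemented): an immutable singly linked list where "prepending" allocates one new node and reuses the entire existing list, so nothing old is copied or mutated.

#include <iostream>
#include <memory>

// Immutable list node; "modification" means building a new head that points
// at the existing, never-mutated tail.
struct Node {
    int value;
    std::shared_ptr<const Node> next;
};
using List = std::shared_ptr<const Node>;

List cons(int value, List tail) {
    return std::make_shared<const Node>(Node{value, std::move(tail)});
}

int main() {
    List xs = cons(3, cons(2, cons(1, nullptr)));  // [3, 2, 1]
    List ys = cons(4, xs);                         // [4, 3, 2, 1], shares all of xs

    // Only one extra node was allocated for ys; xs is untouched and still valid.
    for (List p = ys; p; p = p->next) std::cout << p->value << ' ';
    std::cout << '\n';
    return 0;
}

Persistent trees used by functional languages apply the same trick at scale: an "update" rebuilds only the path from the root to the changed node and shares every other subtree.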
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180555", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52216/" ] }
180,569
I'm a beginner-level C++ programmer, but I understand the concepts of the language fairly well. When I began to learn external C++ libraries like SDL and OpenGL (and maybe others too), to my great surprise I found out that they don't use C++ concepts at all. For example, neither SDL nor OpenGL uses classes or exceptions, preferring functions and error codes. In OpenGL I've seen functions like glVertex2f, which takes 2 float variables as input and would probably be better as a template. Moreover, these libraries sometimes use macros, while it seems to be commonly agreed that using macros is bad. All in all, they seem to be written more in C style than in C++ style. But they are completely different, incompatible languages, aren't they? The question is: why do modern libraries not use the advantages of the language they are written in?
Both OpenGL and SDL are C libraries and expose a C interface to the rest of the world (as pretty much every language out there can interface with C but not necessarily with C++). Thus, they're restricted to the procedural interface that C gives you and the C way of declaring and using data structures. Over and above the "interfacing with other languages" aspect that a C interface offers you, C in general tends to be a bit more portable than C++, which in turn makes it easier to get the non-platform dependent part of the code of libraries like these working on another OS or hardware architecture. Pretty much every platform out there has a decent C compiler, but there are still some that have restricted C++ compilers or ones that are just not very good. While C and C++ are very different languages, they are not "incompatible", in fact, to a large extent C++ is a superset of C. There are some incompatibilities but not many, so using a C interface from C++ is a very easy thing to do.
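To illustrate what "exposing a C interface" looks like in practice, here is a hedged sketch; the library name, types, and functions below are invented for illustration and are not SDL's or OpenGL's actual API. A C++ implementation hides behind a C-callable header, which is what lets C programs and other languages bind to it.

// mylib.h -- the public, C-compatible interface (no classes, no exceptions)
#ifdef __cplusplus
extern "C" {
#endif

typedef struct mylib_context mylib_context;   // opaque handle instead of a class

mylib_context* mylib_create(void);
int            mylib_do_work(mylib_context* ctx, float x);  // error code, not exception
void           mylib_destroy(mylib_context* ctx);

#ifdef __cplusplus
}
#endif

// mylib.cpp -- the implementation is free to use C++ internally
#include <vector>

struct mylib_context { std::vector<float> data; };

extern "C" mylib_context* mylib_create(void) { return new mylib_context(); }

extern "C" int mylib_do_work(mylib_context* ctx, float x) {
    if (!ctx) return -1;            // report failure through a return code
    ctx->data.push_back(x);
    return 0;
}

extern "C" void mylib_destroy(mylib_context* ctx) { delete ctx; }

A real library would also have to make sure no C++ exception ever escapes across the extern "C" boundary, since C callers (and foreign-language bindings) cannot catch it.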
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180569", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74840/" ] }
180,585
Javascript is a prototype-based object oriented language but can become class-based in a variety of ways, either by: Writing the functions to be used as classes yourself Using a nifty class system in a framework (such as mootools Class.Class ) Generating it from Coffeescript In the beginning I tended to write class based code in Javascript and heavily relied on it. Recently however I've been using Javascript frameworks, and NodeJS , that move away from this notion of classes and rely more on the dynamic nature of the code, such as: Async programming, using and writing code that uses callbacks/events Loading modules with RequireJS (so that they don't leak to the global namespace) Functional programming concepts such as list comprehensions (map, filter, etc.) Among other things What I've gathered so far is that most OO principles and patterns I've read about (such as SOLID and the GoF patterns) were written with class-based OO languages in mind, like Smalltalk and C++. But are any of them applicable to a prototype-based language such as Javascript? Are there any principles or patterns that are specific to Javascript? Principles to avoid callback hell , evil eval , or other anti-patterns, etc.?
After many edits, this answer has become a monster in length. I apologize in advance. First of all, eval() isn't always bad, and can bring benefit in performance when used in lazy-evaluation, for example. Lazy-evaluation is similar to lazy-loading, but you essentially store your code within strings, and then use eval or new Function to evaluate the code. If you use some tricks, then it'll become much more useful than evil, but if you don't, it can lead to bad things. You can look at my module system that uses this pattern: https://github.com/TheHydroImpulse/resolve.js . Resolve.js uses eval instead of new Function primarily to model the CommonJS exports and module variables available in each module, and new Function wraps your code within an anonymous function, though, I do end up wrapping each module in a function I do it manually in combination with eval. You read more about it in the following two articles, the later also referring to the first. Lazy evaluation of CommonJS modules AMD is Not the Answer Harmony Generators Now that generators have finally landed in V8 and thus in Node.js, under a flag ( --harmony or --harmony-generators ). These greatly reduce the amount of callback hell you have. It makes writing asynchronous code truly great. The best way to utilize generators is to employ some sort of control-flow library. This will enable to flow to continue going as you yield within generators. Recap/Overview: If you're unfamiliar with generators, they're a practice of pausing the execution of special functions (called generators). This practice is called yielding using the yield keyword. Example: function* someGenerator() { yield []; // Pause the function and pass an empty array. } Thus, whenever you call this function the first time, it'll return a new generator instance. This allows you to call next() on that object to start or resume the generator. var gen = someGenerator(); gen.next(); // { value: Array[0], done: false } You would keep calling next until done returns true . This means the generator has completely finished it's execution, and there are no more yield statements. Control-Flow: As you can see, controlling generators are not automatic. You need to manually continue each one. That's why control-flow libraries like co are used. Example: var co = require('co'); co(function*() { yield query(); yield query2(); yield query3(); render(); }); This allows the possibility to write everything in Node (and the browser with Facebook's Regenerator which takes, as input, source code that utilize harmony generators and splits out fully compatible ES5 code) with a synchronous style. Generators are still pretty new, and thus requires Node.js >=v11.2. As I'm writing this, v0.11.x is still unstable and thus many native modules are broken and will be until v0.12, where the native API will calm down. To add to my original answer: I've recently been preferring a more functional API in JavaScript. The convention does use OOP behind the scenes when needed but it simplifies everything. Take for example a view system (client or server). view('home.welcome'); Is much easier to read or follow than: var views = {}; views['home.welcome'] = new View('home.welcome'); The view function simply checks if the same view already exists in a local map. If the view does not exist, it'll create a new view and add a new entry to the map. 
function view(name) { if (!name) // Throw an error if (view.views[name]) return view.views[name]; return view.views[name] = new View({ name: name }); } // Local Map view.views = {}; Extremely basic, right? I find it dramatically simplifies the public interface and makes it easier to use. I also employ chain-ability... view('home.welcome') .child('menus') .child('auth') Tower, a framework I'm developing (with someone else) or developing the next version (0.5.0) will use this functional approach in most of it's exposing interfaces. Some people take advantage of fibers as a way to avoid "callback hell". It's quite a different approach to JavaScript, and I'm not a huge fan of it, but many frameworks / platforms use it; including Meteor, as they treat Node.js as a thread/per connection platform. I'd rather use an abstracted method to avoid callback hell. It may become cumbersome, but it greatly simplifies the actual application code. When helping on building the TowerJS framework, it solved a lot of our problems, though, you'll obviously still have some level of callbacks, but the nesting isn't deep. // app/config/server/routes.js App.Router = Tower.Router.extend({ root: Tower.Route.extend({ route: '/', enter: function(context, next) { context.postsController.page(1).all(function(error, posts) { context.bootstrapData = {posts: posts}; next(); }); }, action: function(context, next) { context.response.render('index', context); next(); }, postRoutes: App.PostRoutes }) }); An example of our, currently being developed, routing system and "controllers", though fairly different from traditional "rails-like". But the example is extremely powerful and minimizes the amount of callbacks and makes things fairly apparent. The problem with this approach is that everything is abstracted. Nothing runs as-is, and requires a "framework" behind it. But if these kinds of features and coding style is implemented within a framework, then it's a huge win. For patterns in JavaScript, it honestly depends. Inheritance is only really useful when using CoffeeScript, Ember, or any "class" framework/infrastructure. When you're inside a "pure" JavaScript environment, using the traditional prototype interface works like a charm: function Controller() { this.resource = get('resource'); } Controller.prototype.index = function(req, res, next) { next(); }; Ember.js started, for me at least, using a different approach to constructing objects. Instead of constructing each prototype methods independently, you'd use a module-like interface. Ember.Controller.extend({ index: function() { this.hello = 123; }, constructor: function() { console.log(123); } }); All these are different "coding" styles, but do add to your code base. Polymorphism Polymorphism isn't widely used in pure JavaScript, where working with inheritance and copying the "class"-like model requires a lot of boilerplate code. Event/Component Based Design Event-based and Component-based models are the winners IMO, or the easiest to work with, especially when working with Node.js, which has a built-in EventEmitter component, though, implementing such emitters is trivial, it's just a nice addition. event.on("update", function(){ this.component.ship.velocity = 0; event.emit("change.ship.velocity"); }); Just an example, but it's a nice model to work with. Especially in a game/component oriented project. Component design is a separate concept by itself, but I think it works extremely well in combination to event systems. 
Games are traditionally known for component based design, where object oriented programming takes you only so far. Component based design has it's uses. It depends on what type of system your building. I'm sure it would work with web apps, but it'd work extremely well in a gaming environment, because of the number of objects, and separate systems, but other examples surely exist. Pub/Sub Pattern Event-binding and pub/sub is similar. The pub/sub pattern really shines in Node.js applications because of the unifying language, but it can work in any langauge. Works extremely well in real-time applications, games, etc.. model.subscribe("message", function(event){ console.log(event.params.message); }); model.publish("message", {message: "Hello, World"}); Observer This might be a subjective one, as some people choose to think of the Observer pattern as pub/sub, but they have their differences. "The Observer is a design pattern where an an object (known as a subject) maintains a list of objects depending on it (observers), automatically notifying them of any changes to state." - The Observer Pattern The observer pattern is a step beyond typical pub/sub systems. Objects have strict relationships or communication methods with each other. An object "Subject" would keep a list of dependents "Observers". The subject would keep it's observers up-to-date. Reactive Programming Reactive programming is a smaller, more unknown concept, especially in JavaScript. There is one framework/library (that I know of) that exposes an easy to work with API to use this "reactive programming". Resources on reactive programming: What is (functional) reactive programming? Reactive programming Basically, it's having a set of syncing data (be it variables, functions, etc..). var a = 1; var b = 2; var c = a + b; a = 2; console.log(c); // should output 4 I believe reactive programming is considerably hidden, especially in imperative languages. It's an amazingly powerful programming paradigm, especially in Node.js. Meteor has created it's own reactive engine in which the framework is basically based upon. How does Meteor's reactivity work behind the scenes? is a great overview of how it works internally. Meteor.autosubscribe(function() { console.log("Hello " + Session.get("name")); }); This will execute normally, displaying the value of name , but if we change the it Session.set('name', 'Bob'); It will re-output the console.log displaying Hello Bob . A basic example, but you can apply this technique to real-time data models and transactions. You can create extremely powerful systems behind this protocol. Meteor's... Context Implementation ContextSet Implementation Session Implementation Reactive pattern and Observer pattern are quite similar. The main difference is that the observer pattern commonly describes data-flow with whole objects/classes vs reactive programming describes data-flow to specific properties instead. Meteor is a great example of reactive programming. It's runtime is a little bit complicated because of JavaScript's lack of native value change events (Harmony proxies change that). Other client-side frameworks, Ember.js and AngularJS also utilize reactive programming (to some extend). The later two frameworks use the reactive pattern most notably on their templates (auto-updating that is). Angular.js uses a simple dirty checking technique. I wouldn't call this exactly reactive programming, but it's close, as dirty checking isn't real-time. Ember.js uses a different approach. 
Ember use set() and get() methods which allow them to immediately update depending values. With their runloop it's extremely efficient and allows for more depending values, where angular has a theoretical limit. Promises Not a fix to callbacks, but takes some indentation out, and keeps the nested functions to a minimum. It also adds some nice syntax to the problem. fs.open("fs-promise.js", process.O_RDONLY).then(function(fd){ return fs.read(fd, 4096); }).then(function(args){ util.puts(args[0]); // print the contents of the file }); You could also spread the callback functions so that they aren't inline, but that's another design decision. Another approach would be to combine events and promises to where you would have a function to dispatch events appropriately, then the real functional functions (ones that have the real logic within them) would bind to a particular event. You'd then pass the dispatcher method inside each callback position, though, you'd have to work out some kinks that would come to mind, such as parameters, knowing which function to dispatch to, etc... Single Function Function Instead of having a huge mess of callback hell, keep a single function to a single task, and do that task well. Sometimes you can get ahead of yourself and add more functionality within each function, but ask yourself: Can this become an independent function? Name the function, and this cleans up your indentation and, as a result, cleans up the callback hell problem. In the end, I'd suggest developing, or using a small "framework", basically just a backbone for your application, and take time to make abstractions, decide on an event-based system, or a "loads of small modules that are independent" system. I've worked with several Node.js projects where the code was extremely messy with callback hell in particular, but also a lack of thought before they began coding. Take your time to think through the different possibilities in terms of API, and syntax. Ben Nadel has made some really good blog posts about JavaScript and some pretty strict and advanced patterns that may work in your situation. Some good posts that I'll emphasis: Thinking About Encapsulation And Direct Object References Keeping Modules Decoupled Using Signals And Mediators Managed Dependencies vs. Dependency Injection In RequireJS Inversion-of-Control Though not exactly related to callback hell, it can help you overall architecture, especially in the unit tests. The two main sub-versions of inversion-of-control is Dependency Injection and Service Locator. I find Service Locator to be the easiest within JavaScript, as opposed to Dependency Injection. Why? Mainly because JavaScript is a dynamic language and no static typing exists. Java and C#, among others, are "known" for dependency injection because your able to detect types, and they have built in interfaces, classes, etc... This makes things fairly easy. You can, however, re-create this functionality within JavaScript, though, it's not going to be identical and a bit hacky, I prefer using a service locator inside my systems. Any kind of inversion-of-control will dramatically decouple your code into separate modules that can be mocked or faked at anytime. Designed a second version of your rendering engine? Awesome, just substitute the old interface for the new one. Service locators are especially interesting with the new Harmony Proxies, though, only effectively usable within Node.js, it provides a nicer API, rather then using Service.get('render'); and instead Service.render . 
I'm currently working on that kind of system: https://github.com/TheHydroImpulse/Ettore . Though the lack of static typing (static typing being a possible reason for the effective usages in dependency injection in Java, C#, PHP - It's not static typed, but it has type hints.) might be looked at as a negative point, you can definitely turn it into a strong point. Because everything is dynamic, you can engineer a "fake" static system. In combination with a service locator, you could have each component/module/class/instance tied to a type. var Service, componentA; function Manager() { this.instances = {}; } Manager.prototype.get = function(name) { return this.instances[name]; }; Manager.prototype.set = function(name, value) { this.instances[name] = value; }; Service = new Manager(); componentA = { type: "ship", value: new Ship() }; Service.set('componentA', componentA); // DI function World(ship) { if (ship === Service.matchType('ship', ship)) this.ship = new ship(); else throw Error("Wrong type passed."); } // Use Case: var worldInstance = new World(Service.get('componentA')); A simplistic example. For a real world, effective usage, you'll need to take this concept further, but it could help decouple your system if you really want traditional dependency injection. You might need to fiddle with this concept a little bit. I haven't put much thought into the previous example. Model-View-Controller The most obvious pattern, and the most used on the web. A few years ago, JQuery was all the rage, and so, JQuery plugins were born. You didn't need a full-on framework on the client-side, just use jquery and a few plugins. Now, there's a huge client-side JavaScript framework war. Most of which use the MVC pattern, and they all use it differently. MVC isn't always implemented the same. If you're using the traditional prototypal interfaces, you might have a hard time getting a syntactical sugar or a nice API when working with MVC, unless you want to do some manual work. Ember.js solves this by creating a "class"/object" system. A controller might look like: var Controller = Ember.Controller.extend({ index: function() { // Do something.... } }); Most client-side libraries also extend the MVC pattern by introducing view-helpers (becoming views) and templates (becoming views). New JavaScript Features: This will only be effective if you're using Node.js, but nonetheless, it's invaluable. This talk at NodeConf by Brendan Eich brings some cool new features. The proposed function syntax, and especially the Task.js js library. This will probably fix most of the issues with function nesting and will bring slightly better performance because of the lack of function overhead. I'm not too sure if V8 supports this natively, last I checked you needed to enable some flags, but this works in a port of Node.js that uses SpiderMonkey . Extra Resources: Pro JavaScript Design Patterns (Recipes: a Problem-Solution Ap) JavaScript: The Good Parts Learning JavaScript Design Patterns Object-Oriented JavaScript: Create scalable, reusable high-quality JavaScript applications and libraries
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180585", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1683/" ] }
180,695
When browsing open-source projects that are primarily developed for Linux systems and downloading the latest packages, the source code is always stored in a .tar.gz or .tar.bz2 file. Is there any reason for using .tar.gz or .tar.bz2 rather than something like .zip or .rar or some other compression algorithm (or even leaving it uncompressed if the project is small enough)?
To answer the question in the heading: tar.gz/tar.bz2 became the standard for distributing Linux source code a very very very long time ago, as in well over 2 decades, and probably a couple more. Significantly before Linux even came into existence. In fact, tar stands for (t)ape (ar)chive. Think reel hard, and you'll get an idea how old it is. ba-dum-bump. Before people had CD burners, distros of software were put out on 1.44Mb floppy disks. The compressed tar file was chopped into floppy-sized pieces by the split command, and these pieces were called tarballs . You'd join them back together with cat and extract the archive. To answer the other question of why not Zip or Rar, that's an easy one. The tar archiver comes from Unix, while the other two come from MS-DOS/Windows. Tar handles unix file metadata (permissions, times, etc), while zip and rar did not until very recently (they stored MS-DOS file data). In fact, zip took a while before it started storing NTFS metadata (alternate streams, security descriptor, etc) properly. Many of the compression algorithms in PKZip are proprietary to the original maker, and the final one added to the Dos/Windows versions was Deflate (RFC 1951) which performed a little better than Implode, the proprietary algo in there that produced the best general compression. Gzip uses the Deflate algorithm. The RAR compression algorithm is proprietary, but there is a gratis open source implementation of the decompressor. Official releases of RAR and WinRAR from RARlab are not gratis . Gzip uses the deflate algorithm, and so is no worse than PKZip. Bzip2 gets slightly better compression ratios. TL;DR version: tar.gz and tar.bz2 are from Unix, so Unix people use them. Zip and Rar are from the DOS/Windows world, so DOS/Windows people use them. tar has been the standard for bundling archives of stuff in *nix for several decades.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180695", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72880/" ] }
180,824
My programming friends are always telling me that HTML is a markup language and C++ is a programming language. When I ask them what's the difference, they never give me a reasonable answer. What could make one call C++ a programming language, and HTML not?
A programming language is a notation designed to pass instructions to a machine. By that definition both C++ and HTML are programming languages, as was the notation Joseph Marie Jacquard used in 1801 to program his looms . However with the proliferation of languages that are used to structure and/or describe data, the definition of a programming language shifted to include only languages that are capable of expressing algorithms. This is the more common definition today and it excludes languages like HTML or XML. At the heart of the current definition is the concept of Turing completeness . Most programming languages are Turing complete, and Turing completeness is often quoted as the one critical trait that separates a programming language from any other computer language. This is good enough as a general rule of thumb, but not entirely accurate: Some non Turing complete languages are considered programming languages, for example Charity . Some languages that are not generally considered programming languages are Turing complete, for example XSLT . Turing completeness alone doesn't say much about a language's usefulness . Depending on context, you can pick any definition you want. Edit: Let it further be known, an implementation of a language does not confer characteristics onto the language itself, for example: A language's spec may define a turing complete language, someone could implement it haphazardly leaving off turing completeness. This implementation being non-turing complete does not however mean the language itself is not turing complete (rather it likely means that implementation is non-conformant). The details of a language and the details of a particular implementation of a language are to be recognized as separate things, this is why it's inaccurate to call a language interpreted or compiled etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180824", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59374/" ] }
180,833
I was using this feature in earlier Windows releases like XP and NT: I was able to run a GUI from a Windows service. But it is not possible in later versions. What is the reason behind the removal of this feature? Why can't Windows services have a GUI?
Mainly security reasons. As I understand it, when a Windows service creates GUI controls such as a MessageBox, they were normally only seen in the session that the service runs in, i.e. Session 0, which also used to be the session of the first user logged on locally or of someone logging on using mstsc /admin. Hence this user would see these controls and could interact with the service. But for security reasons, Session 0 is now reserved, and the first user to log on is given a new session and hence does not see the GUI controls. Since this breaks quite a lot of services, for compatibility there is a process (see this MSDN blog) that attempts to detect whether any messages are being displayed and pops up a warning, 'A program running on this computer is trying to display a message', allowing you to view or ignore the message. Microsoft have a whitepaper on this subject which you can download from here. I would also suspect that another minor reason is that the feature was misused/misunderstood and led to bad designs. For example, I used to have an old server with a third-party service that displayed some notifications/errors using a message box rather than writing to the event log. But I never logged on locally and rarely logged on in admin mode, and hence I would not see the messages.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180833", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75958/" ] }
180,851
During a job interview, I was asked to explain why the repository pattern isn't a good pattern to work with ORMs like Entity Framework. Why is this the case?
The single best reason to not use the repository pattern with Entity Framework? Entity Framework already implements a repository pattern. DbContext is your UoW (Unit of Work) and each DbSet is the repository. Implementing another layer on top of this is not only redundant, but makes maintenance harder. People follow patterns without realizing the purpose of the pattern. In the case of the repository pattern, the purpose is to abstract away the low-level database querying logic. In the old days of actually writing SQL statements in your code, the repository pattern was a way to move that SQL out of individual methods scattered throughout your code base and localize it in one place. Having an ORM like Entity Framework, NHibernate, etc. is a replacement for this code abstraction, and as such, negates the need for the pattern. However, it's not a bad idea to create an abstraction on top of your ORM, just not anything as complex as UoW/repostitory. I'd go with a service pattern, where you construct an API that your application can use without knowing or caring whether the data is coming from Entity Framework, NHibernate, or a Web API. This is much simpler, as you merely add methods to your service class to return the data your application needs. If you were writing a To-do app, for example, you might have a service call to return items that are due this week and have not been completed yet. All your app knows is that if it wants this information, it calls that method. Inside that method and in your service in general, you interact with Entity Framework or whatever else you're using. Then, if you later decide to switch ORMs or pull the info from a Web API, you only have to change the service and the rest of your code goes along happily, none the wiser. It may sound like that's a potential argument for using the repository pattern, but the key difference here is that a service is a thinner layer and is geared towards returning fully-baked data, rather than something that you continue to query into, like with a repository.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180851", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76465/" ] }
180,904
I find header files to be useful when browsing C++ source files, because they give a "summary" of all the functions and data members in a class. Why do so many other languages (like Ruby, Python, Java, etc.) not have a feature like this? Is this an area where C++'s verbosity comes in handy?
The original purpose of header files was to allow single-pass compilation and modularity in C. By declaring functions before they are used, they allowed the compiler to get by with a single pass over the code. That age is long gone thanks to our powerful computers being able to do multi-pass compilation without any problems, sometimes even faster than C++ compilers. C++, being backwards compatible with C, needed to keep the header files, but added a lot on top of them, which resulted in a quite problematic design. More in the FQA . For modularity, header files were needed as metadata about the code in modules: e.g., what functions (and, in C++, classes) are available in what library. It was obvious to have the developer write this, because compile time was expensive. Nowadays, it is no problem to have the compiler generate this metadata from the code itself. Java and the .NET languages do this normally. So no. Header files are not good. They were when we still had to have the compiler and linker on separate floppies and compilation took 30 minutes. Nowadays, they only get in the way and are a sign of bad design.
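For readers newer to C and C++, a minimal sketch of the declaration/definition split being described (the file and function names are invented for illustration): the header is the hand-written "metadata" that lets other translation units call code they cannot see.

// widget.h -- declarations only: the "summary" other files compile against
#ifndef WIDGET_H
#define WIDGET_H

int widget_count(void);          // declared here...

#endif

// widget.cpp -- the definition lives in exactly one translation unit
#include "widget.h"

int widget_count(void) {         // ...defined here
    return 42;
}

// main.cpp -- can call widget_count() knowing only the header
#include <iostream>
#include "widget.h"

int main() {
    std::cout << widget_count() << '\n';
    return 0;
}

Languages like Java and C# get the equivalent of widget.h automatically, because the compiler extracts the public signatures from the compiled code itself.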
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180904", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29670/" ] }
180,948
One feature I miss from functional languages is the idea that operators are just functions, so adding a custom operator is often as simple as adding a function. Many procedural languages allow operator overloads, so in some sense operators are still functions (this is very true in D where the operator is passed as a string in a template parameter). It seems that where operator overloading is allowed, it is often trivial to add additional, custom operators. I found this blog post , which argues that custom operators don't work nicely with infix notation because of precedence rules, but the author gives several solutions to this problem. I looked around and couldn't find any procedural languages that support custom operators in the language. There are hacks (such as macros in C++), but that's hardly the same as language support. Since this feature is pretty trivial to implement, why isn't it more common? I understand that it can lead to some ugly code, but that hasn't stopped language designers in the past from adding useful features that can be easily abused (macros, ternary operator, unsafe pointers). Actual use cases: Implement missing operators (e.g. Lua doesn't have bitwise operators) Mimic D's ~ (array concatenation) DSLs Use | as Unix pipe-style syntax sugar (using coroutines/generators) I'm also interested in languages that do allow custom operators, but I'm more interested in why it has been excluded. I thought about forking a scripting language to add user-defined operators, but stopped myself when I realized that I haven't seen it anywhere, so there's probably a good reason why language designers smarter than me haven't allowed it.
There are two diametrically opposed schools of thought in programming language design. One is that programmers write better code with fewer restrictions, and the other is that they write better code with more restrictions. In my opinion, the reality is that good experienced programmers flourish with fewer restrictions, but that restrictions can benefit the code quality of beginners. User-defined operators can make for very elegant code in experienced hands, and utterly awful code by a beginner. So whether your language includes them or not depends on your language designer's school of thought.
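For contrast with truly user-defined operators, here is a hedged C++ sketch (the Vec type and function names are invented for illustration): you can overload the fixed set of built-in operator tokens for your own types, but you cannot introduce a brand-new token.

#include <iostream>

struct Vec {
    double x, y;
};

// Overloading an existing token for a user-defined type: allowed.
Vec operator+(Vec a, Vec b) {
    return Vec{a.x + b.x, a.y + b.y};
}

// A hypothetical new "dot product" token such as  a <.> b  cannot be defined
// in standard C++; the usual workaround is a named function or an existing token.
double dot(Vec a, Vec b) {
    return a.x * b.x + a.y * b.y;
}

int main() {
    Vec a{1, 2}, b{3, 4};
    Vec c = a + b;                       // uses the overload above
    std::cout << c.x << ',' << c.y << ' ' << dot(a, b) << '\n';
    return 0;
}

This restriction is a deliberate design choice of the kind described above: the grammar and precedence table stay fixed and readable for everyone, at the cost of flexibility for the experienced few.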
{ "source": [ "https://softwareengineering.stackexchange.com/questions/180948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26988/" ] }
181,140
I was using a popular music package (Ableton Live) when I opened the legal section of the helpfile and saw that the program contained code under licenses that appeared to be both free as in freedom and free as in beer. I cannot find an online copy, alas, but if necessary I can list the licensed packages. As far as I can see there are 3 possibilities here: A fairly large company is violating code licensing - highly unlikely; if that were so, why would they include the license text? It's actually legit for some reason to charge money for, and close the source of, a package that contains open-source code - this is news to me, definitely. I am misunderstanding something - highly probable.
It depends on which license. There are some free software licenses that are specifically designed to prevent people from doing stuff like that, such as the GNU GPL. They're known as "viral" licenses, because their licensing terms spread to any code you use them with, which keeps you from using a GPL library in a non-GPL (or compatible) program. Other licenses are more concerned with freely sharing code than with pushing a particular ideology. Somewhere in the middle of the spectrum, you have the MPL (Mozilla Public License,) which is non-viral and can be used in proprietary projects, but the license terms require that the MPL code itself remain covered by the MPL, and that any modifications (such as improvements, bugfixes, ports, etc) that you make to the MPL code must be published freely. The idea here is "you get this code freely, so if you improve it, you should contribute your improvements back to the community as payment." And at the far end of the spectrum are the completely open licenses, such as the BSD, MIT and Zlib licenses. They essentially say "this code is free for anyone to use however they want." (With a few restrictions, of course, but there's really not much to them.) People using these licenses are making free use of their code the highest priority. So not all free software licenses are created equal. Take a look at the licenses that are being used here, and what their terms are, and you'll get a better idea of whether or not the developer is complying with them by using them in a proprietary project. Also, there's a fourth possibility: The "fairly large company" could have licensed the product under different terms. A software license is designed to limit users of the software, not the creator of the software, and it's not unheard of for someone to release an open-source library under GPL-style terms, and then also sell commercial licenses for it to people who want to use it in a proprietary project, without their codebase being "infected" by a viral license.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181140", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37265/" ] }
181,176
After watching this talk on REST, Reuse and Serendipity by Steve Vinoski, I wonder if there are business cases in greenfield projects for (XML-)RPC-ish setups that REST could not solve in a better way. A few RPC problems he mentions: Focus on language (fit the distributed system to the language, not the other way around) "Make it look local" (and cope with failure and latency as exceptions rather than the rule) Intended to be language-independent, but still has "function calls" across languages as the main ingredient IDL boilerplate Illusion of type safety and a few more ... Just to dramatize it a bit, compare some Google Instant results for RPC vs REST.
In general, RPC offers far more of a language integration than REST. As you mentioned, this comes with a number of problems in terms of scale, error handling, type safety, etc., especially when a single distributed system involves multiple hosts running code written in multiple languages. However, after having written business systems that use RPC, REST, and even both simultaneously, I've found that there are some good reasons to choose RPC over REST in certain cases. Here's the cases where I've found RPC to be a better fit: Tight coupling. The (distributed) components of the system are designed to work together, and changing one will likely impact all of the others. It is unlikely that the components will have to be adapted to communicate with other systems in the future. Reliable communication. The components will communicate with each other either entirely on the same host or on a network that is unlikely to experience latency issues, packet loss, etc.. (This still means you need to design your system to handle these cases, however.) Uniform language. All (or mostly all) components will be written in a single language. It is unlikely that additional components written in a different language will be added in the future. Regarding the point about IDL, in a REST system you also have to write code that converts the data in the REST requests and responses to whatever internal data representation you are using. IDL sources (with good comments) can also serve as documentation of the interface, which has to be written and maintained separately for a REST API. The above three items often occur when you are looking to build one component of a larger system. In my experience, these components are often ones where their subsystems need to be able to fail independently and not cause the total failure of other subsystems or the entire component. Many systems are written in Erlang to accomplish these goals as well, and in some cases Erlang may be a better choice than writing a system in another language and using RPC just to gain these benefits. Like most engineering problems, there isn't a single solution to the problem of inter-process communication. You need to look at the system you are designing and make the best choice for your use case.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181176", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/436/" ] }
181,342
I have been programming for about a year and I am really interested in data analysis and machine learning. I am taking part in a couple of online courses and am reading a couple of books. Everything I am doing uses either R or Python and I am looking for suggestions on whether or not I should concentrate on one language (and if so which) or carry on with both; do they complement each other? -- I should mention that I use C# in school but am familiar with Python through self-study.
I use both Python (for data analysis of course, including numpy and scipy) and R next to each other. However, I use R exclusively to perform data analysis, and Python for more generic programming tasks (e.g. workflow control of a computer model). In terms of basic operations, say operations on arrays and the like, R and Python + numpy are very comparable. It is in the very large library of statistical functions that R has an advantage. In addition, matplotlib does not seem to be as good as ggplot2, but I have not used matplotlib that much. In addition, I would focus first on one language and become good at the specifics of that. You seem to be primarily interested in data analysis, not software engineering. I would pick R and stick to that. That said, I think choosing Python + numpy + scipy + scikit is definitely an excellent choice; it is just that I feel that R is just a bit more excellent. I would also take a look around at what your colleagues and other people in your field are using. If they all use, say, Python, it would make sense to stick to that in order to more easily learn from them and exchange code. Disclaimer: Note that I am a heavy R user, so my opinion might be biased, although I have tried to keep my answer as objective as possible. In addition, I have not used Python + numpy extensively, although I know colleagues who do all their data analysis in it. ps: This link might be interesting: http://seanjtaylor.com/post/39573264781/the-statistics-software-signal pps: or this quote from this post : I use R and Python for all my research (with Rcpp or Cython as needed), but I would rather avoid writing in C or C++ if I can avoid it. R is a wonderful language, in large part because of the incredible community of users. It was created by statisticians, which means that data analysis lies at the very heart of the language; I consider this to be a major feature of the language and a big reason why it won't get replaced any time soon. Python is generally a better overall language, especially when you consider its blend of functional programming with object orientation. Combined with Scipy/Numpy, Pandas, and statsmodels, this provides a powerful combination. But Python is still lacking a serious community of statisticians/mathematicians.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181342", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75239/" ] }
181,459
During a recent code review I was asked to put default cases in all the files wherever a switch block is used, even if there is nothing to do in the default . That means I have to put in the default case and write nothing in it. Is this the right thing to do? What purpose would it serve?
It seems there are three cases when a default statement is not necessary: No other cases are left, because a limited set of values enters the switch . But this might change with time (intentionally or accidentally), and it would be good to have a default case if anything changes - you could log or warn the user about a wrong value. You know how and where the switch will be used and what values will enter it. Again, this might change, and extra processing might be needed. The other cases do not need any special processing. If this is the case, I think you are being asked to add a default case because it is an accepted coding style, and it makes your code more readable. The first two cases are based on assumptions. So (assuming you work in a not-so-small team, since you have regular code reviews), you cannot afford to make those assumptions. You don't know who will be working with your code or making calls to functions/invoking methods in your code. Similarly, you might need to work with someone else's code. Having the same coding style will make it easier to deal with someone else's (including your own) code.
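As a small illustration (a hedged C++ sketch; the enum and messages are invented), even a default that carries no business logic can earn its keep by flagging values that appear later:

#include <iostream>

enum class Color { Red, Green, Blue };  // imagine Yellow gets added next year

const char* to_string(Color c) {
    switch (c) {
        case Color::Red:   return "red";
        case Color::Green: return "green";
        case Color::Blue:  return "blue";
        default:
            // "Empty" in terms of business logic, but it catches the day a new
            // enumerator (or a bad cast) slips through without being handled.
            std::cerr << "to_string: unexpected Color value\n";
            return "unknown";
    }
}

int main() {
    std::cout << to_string(Color::Green) << '\n';
    return 0;
}

Whether the default logs, throws, or stays truly empty is a team convention; the point is that the decision is visible in the code rather than implied.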
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181459", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76637/" ] }
181,527
gdb implemented support for reverse debugging in 2009 (with gdb 7.0). I never heard about it until 2012. Now I find it extremely useful for certain types of debugging problems. I wished that I heard of it before. Correct me if I'm wrong but my impression is that the technique is still rarely used and most people don't know that it exists. Why? Do you know of any programming communities where the use of reverse debugging is common? Background information: Stackoverflow: How does reverse debugging work? gdb uses the term "reverse debugging" but other vendors use other terms for identical or similar techniques: Microsoft calls it IntelliTrace or "Historical Debugging" There's a Java reverse debugger called Omniscient Debugger , though it probably no longer works in Java 6 There are other Java reverse debuggers OCaml's debugger (ocamldebug) calls it time travel
For one, running in debug mode with recording on is very expensive compared to even normal debug mode; it also consumes a lot more memory. A cheaper alternative is to decrease the granularity from line level to function-call level. For example, the standard debugger in Eclipse allows you to "drop to frame," which is essentially a jump back to the start of the function with a reset of all the parameters (nothing done on the heap is reverted, and finally blocks are not executed, so it is not a true reverse debugger; be careful about that). Note that this has been available for several years now and works hand in hand with hot-code replacement.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181527", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76071/" ] }
181,536
I'm trying to do research on these CSS preprocessors. Are there any advantages of using Sass over LESS as a CSS preprocessor? Factors that I'm looking for include community size, software project maturity, etc. I know there was another question related to this, but it was not constructively written and was closed. Update: It looks like Bootstrap has now gone over to using Sass.
Chris Coyier has an awesome rundown of SASS vs LESS over at css-tricks.com . It's definitely worth the read. As for some of your specific questions:

Community

I work entirely with SASS/Compass, so I'm not intimately familiar with LESS's community, but nor have I really needed SASS's community. Their documentation is fantastic and has solved any problems I've run into thus far. For what it's worth, though, here are the SASS vs LESS statistics that Chris has in his post, which I've updated for current numbers:

- Number of open issues on LESS: 121
- Number of open issues on Sass: 87
- Pending pull requests on LESS: 13
- Pending pull requests on Sass: 8
- Number of commits in the last month in LESS: 49
- Number of commits in the last month in Sass: 7

To note, these numbers were roughly flip-flopped as of Chris' writing in May, 2012. This says to me that they're both pretty equal with regards to development activity.

Maturity

Technically speaking, Sass is older. It came out in 2007, while LESS came out in 2009. That said, the comparisons I've seen put both of them at pretty much the same level of "maturity" in regards to features and whatnot. Both also have frameworks that bring more tools to them. LESS has Less Framework, and Centage (additionally, the Twitter Bootstrap is built with LESS). Sass has Compass, Gravity, and Susy. Both probably have more, if you dig for them, but those are some of the first that come up when you search.

So are there any real differences?

When it comes to writing it, not really. If you use the CSS-like SCSS syntax in Sass (instead of the more Python-like SASS syntax), you have only the typical minor syntax differences ( @ vs $ ), but for the most part, they're basically the same. The two biggest differences in coding I found were a) how they handle units when doing math, and b) how they handle inheritance. When given something like 20px + 2em , LESS will drop the second unit and assume you mean the first (yielding 22px ), while Sass will throw an error (basically, a type mismatch). With inheritance, LESS treats it like a mixin (I can't really explain it well, so see the inheritance section of this Tuts+ article for details). Whether one is superior over the other kind of depends on how you'd prefer it to handle things.

The other biggest difference I know of is how and where each one compiles by default. Sass uses Ruby and compiles on the server, allowing you to store and send the compiled CSS file to the client. LESS, on the other hand, defaults to using the less.js script to compile the CSS on the fly. However, with the use of Node.js, LESS can compile on the server side in the same way that Sass does.

Which One?

So, if they're basically the same, which one should you use? Well, unless you really love the Python-like SASS syntax, or really think that client-side compilation is the way to go, or you greatly prefer one's inheritance handling over the other, it's going to matter more whether you prefer to have (or already have) Ruby or Node.js installed.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181536", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28477/" ] }
181,563
I often hear people say that C++ programmers should expose their library's/product's public API as a C API. What does that mean and what are the advantages of that?
It does mean that the part of your library that is exposed as an interface only uses the C "part" of the language, so you're not exporting classes or similar, only functions, PODs and structs containing PODs. Plus, you have to disable the C++ name mangling, usually achieved by marking functions as extern "C" . A typical example would be: extern "C" void foo(int bar); The big advantage of exposing your libraries in this fashion is that pretty much every programming language out there has a mechanism to directly interface with a C library, but only very few can also interface directly with a C++ library. So in that sense, you go for the lowest common denominator to make it easy for other people to use your library. However keep in mind that this is really only a useful strategy if you are producing a library for other people to consume. If you are building a piece of C++-only software and the libraries only need to interface with each other, you are (IMHO) better off exposing proper C++ APIs so you can make use of the full power of the language.
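A common way to package such an API is a single header that compiles both as C and as C++, wrapping the declarations in the extern "C" guard (this is a generic sketch; the library name, types and functions below are invented for illustration):

    /* mylib.h - usable from both C and C++ */
    #ifndef MYLIB_H
    #define MYLIB_H

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* Opaque handle: the C++ class stays hidden behind a pointer. */
    typedef struct mylib_context mylib_context;

    mylib_context *mylib_create(void);
    int            mylib_process(mylib_context *ctx, const char *input);
    void           mylib_destroy(mylib_context *ctx);

    #ifdef __cplusplus
    } /* extern "C" */
    #endif

    #endif /* MYLIB_H */

Inside the library you are free to implement mylib_process() with whatever C++ you like; only the boundary has to stay C.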
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181563", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/48441/" ] }
181,567
Where I work I see lots of classes that do things like this:

    public class ClassThatCallsItsOwnGettersAndSetters {
        private String field;

        public String getField() {
            return field;
        }

        public void setField(String field) {
            this.field = field;
        }

        public void methodWithLogic() {
            setField("value");
            //do stuff
            String localField = getField();
            //do stuff with "localField"
        }
    }

If I wrote this from scratch, I would have written the methodWithLogic() like this instead:

    public class ClassThatUsesItsOwnFields {
        private String field;

        public String getField() {
            return field;
        }

        public void setField(String field) {
            this.field = field;
        }

        public void methodWithLogic() {
            field = "value";
            //do stuff
            //do stuff with "field"
        }
    }

I feel that when the class calls its own getters and setters, it makes the code harder to read. To me it almost implies that complex logic is happening in that method call even though in our case it almost never is. When I'm debugging some unfamiliar code, who's to say that the bug isn't some side effect in that method? In other words, it makes me take lots of side trips on the journey of understanding the code. Are there benefits to the first method? Is the first method actually better?
I won't say which is better or worse, because that partly depends on your situation (in my opinion). But consider that your getters and setters may change implementation later, and bypassing them would skip that logic. For example, what happens if you add a "dirty" flag to some setter fields later? By calling your setters in your internal code you'll set the dirty flag without changing any other code. In many situations this would be a good thing.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181567", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56945/" ] }
181,577
Theoretically, if I were to build a program that allocated all the unused memory on a system, and continued to request more and more memory as other applications released memory that they no longer need, would it be possible to read recently released memory from another application? Or is this somehow protected by modern operating systems? I have no practical application for this, I'm just curious. I realize there are some issues with allocating "all available memory" in real life. Edit: To clarify, I'm asking specifically about "released" memory, not accessing memory that is currently allocated by another application.
No, because a good kernel wipes the contents of memory before it is issued to a process, to protect against exactly the kind of attack you propose. On Unixy systems, memory is allocated to processes by extending what's called the program break , which is the limit of virtually-addressable space a process can use. A process tells the kernel it wants to extend its addressable space, and the kernel will allow it if memory is available, or the call will fail if not. (The name of the brk() system call comes from this concept.) In practice, large blocks of freed memory don't often butt up against the program break, which is what would be required for a process to return memory to the kernel by shrinking the program break. This is, of course, all dependent on your system's implementation of malloc() and free() . If you have sources available, they'll tell you whether or not memory is ever returned. There are no security implications for malloc() not initializing memory, because anything it got via brk() will have been scrubbed and anything previously free()'d will have been written by the same process.
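If you want to see this kernel-level scrubbing for yourself, here is a minimal Linux-specific sketch of my own (using an anonymous mmap rather than brk() directly, purely for convenience); every page the kernel hands out reads back as zeros:

    #define _DEFAULT_SOURCE
    #include <assert.h>
    #include <stddef.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4096;

        /* Ask the kernel for a fresh anonymous page. */
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(p != MAP_FAILED);

        /* The kernel never hands out another process's old data:
           anonymous mappings are documented to be zero-filled. */
        for (size_t i = 0; i < len; i++)
            assert(p[i] == 0);

        munmap(p, len);
        return 0;
    }

Within your own process, of course, memory you free() and then malloc() again may still contain your own stale data; the guarantee only applies across processes.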
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181577", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66745/" ] }
181,651
I've been learning SQL recently and practicing with MySQL/Postgres and soon Oracle DB. I've also searched the web for a 'road map' study of databases but couldn't find one, unfortunately. I want to understand where and why particular database concepts fall on the scale from beginner to intermediate and advanced. I'm thinking about relational databases for the most part. Please explain how the skills listed below lay out in a progression of Beginner -> Intermediate -> Advanced, i.e. what level of developer should know them:

- Where clauses
- Update syntax
- Joins
- Alter and Create statements
- Temp tables
- Cursors
- Indexes
- Foreign keys
- Constraints
- Transactions
- Subqueries
- Pivots
- Aggregate functions
- Profiling
- OLAP and OLTP
- Triggers
- Execution plans
- Execution hints
- Performance counters
- Normalization
I would say there are two types of things to know in regards to SQL (this is true of many technologies really): there are specific technical things like joins, subqueries, unions, etc., which you either understand or don't, and then there are things like database design and data modelling which have a gradient of skill to them, like art. You get better over time with those softer things, but never "know" them, because there's no "they work this way" with them. That said, here's a general layout I would go with, and I am absolutely open to comments/thoughts from others where I may be wrong.

Beginner

- Where clauses (in, between, etc)
- Update syntax
- Inner vs left vs right join understanding and usage
- Syntax for altering and creating structures
- Temp tables and their usage
- Cursors
- Basic idea of what indexes are for, though not how they work
- Understanding of what foreign keys are for and how to work around them (cascading deletes etc)
- Understands basics of transactions
- Understands constraints

Intermediate

- How indexes work; the difference between clustered, non-clustered, etc; what a page is and how they are laid out
- Understanding of subqueries, and can think through using them in joins and wheres
- Pivots
- Can think through joining a table on itself when relevant
- Can generate complex data reports via group bys with aggregate functions
- Can do basic profiling just in a monitoring/debugging capacity, like reading a log
- Understands the difference between OLAP and OLTP and when/where to use OLAP structures
- Knows how to use triggers and how not to use them
- Understands transactions and can layer them, handling failures up the stack

Advanced

- Can read an execution plan, and understand how the different parts of the query affect it
- Can tune queries with execution hints without screwing up performance (parallelism hints, index hints, loop hints, et al)
- Can profile and use traces for identifying and understanding statistics of executions under real-world load
- Knows what the data structures are on the disk
- Can use performance counters and understand what the database load and behaviour is from monitoring them
- Knows how to design an OLAP cube and do advanced data mining with one
- Knows how to use triggers and how to use them safely, with minimal risk
- Knows how to use distributed transactions even with layers

That's all I could come up with off the top of my head. Please leave comments mentioning others I missed or if I put something in the wrong place. I'm not advanced enough to know a huge list of advanced techniques to put down heh
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181651", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2849/" ] }
181,730
After you have done a good normalization, do you still need to index the table? How is this going to affect the performance? Will it even affect the performance in some way after a good normalization? Which columns are usually indexed, if you already have the primary key and the foreign key? It seems like normalizing a database is already effective on its own, but I might have missed how indexing affects the database. Is it only effective when queries are used? How does this work/perform and make a database better?
I think you misunderstood what indexing does for database performance. An index helps the database find rows. Indexes are specialized data structures that, in exchange for extra disk space and a small performance cost when inserting and updating, help the database engine home in on matching rows. Because they take extra space and cost (a modicum of) performance to keep them up to date, you as a database designer must create indexes explicitly to suit your application and query patterns. Indexes are orthogonal to database normalization.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181730", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73810/" ] }
181,780
I just refactored my project's entire codebase. So much so that even though it uses most of the same code base, things work in a radically different way. If the old version was 1.0, the new one would be 2.0. The project itself is just under 1 MB in size (it's a tiny little lib). I started the project a long time ago and it's undergone many changes... so many that my git folder is now over 3 MB in size. In this case 3 MB is a very small amount of data, but from a 'big picture' perspective, when should you cut your previous VCS history out of your current project and start over? Or should you never ever do this?
Storage is cheap, just keep it. Sometimes you have to check if some bug was present in the pre-refactor version or reference some old piece of code. If you really are concerned about the git checkout size then sure, you can go and delete the history, but I'd suggest starting a new repo in this case and leaving the refactor commit in the old one, so that when you really need it you can easily connect them. A git specific note: if you don't want the history for the current checkout (because you just want to compile the project or create patches for the current problem) you can use --depth=1 ; then git won't clone any history. Second thing: you can create an --orphan branch that won't share any history with other branches in the repo; you can then clone only this branch and its history, omitting all other objects in the remote git repo.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181780", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/70005/" ] }
181,809
I'm rather new to C, and I'm wondering if code duplication is a necessary evil when it comes to writing common data structures and C in general? I could try to write a generic implementation for a hash map for example, but I'm always finding the end result to be messy. I could also write a specialized implementation just for this specific use case, keep the code clear and easy to read and debug. The latter would of course lead to some code duplication. Are generic implementations a norm, or do you write different implementations for each use case?
C makes it difficult to write generic code. Unlike C++, which gives you templates and virtual functions, C only has 3 mechanisms for writing generic code:

- void* pointers
- Preprocessor macros
- Function pointers

void* pointers are far from ideal, since you lose all type safety provided by the compiler, which can result in hard-to-debug undefined behavior resulting from invalid type casts.

Preprocessor macros have well-noted drawbacks - preprocessor expansion is basically just a find/replace mechanism that happens before the compilation phase, which again can result in hard-to-debug errors. The archetypal example being something like #define add(x) (x+x) , where x can be incremented twice if you call add(i++) . You can write template-style generic code entirely using C macros, but the result is really hideous and difficult to maintain.

Function pointers provide a good way to write generic code, but unfortunately they don't provide you with type generality - they merely provide the possibility of run-time polymorphism (which is why, for example, the standard library qsort still requires a function that takes void* pointers.)

You can also implement class hierarchies in C using structs, as is done in the GLib library which provides a generic GObject base class. But this suffers from similar problems as using void* pointers, since you still need to rely on potentially unsafe manual casting to up-cast and down-cast.

So yeah, C makes it hard to write code that is both generic AND safe/easy-to-maintain, which unfortunately can result in code duplication. Large C projects often use scripting languages to generate repetitive code during the build process.
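To make the first two mechanisms concrete, here is a small sketch of my own (not from the answer above) showing both the void*-based "generics" of qsort and the macro double-evaluation pitfall just mentioned:

    #include <stdio.h>
    #include <stdlib.h>

    /* qsort() is only "generic" through void*: the comparator has to cast
       back to the real element type, and the compiler cannot check that
       the cast matches the array that was actually passed in. */
    static int compare_ints(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* The macro pitfall: the argument is pasted in twice, so add(i++)
       increments i twice (and is undefined behaviour on top of that). */
    #define add(x) ((x) + (x))

    int main(void)
    {
        int values[] = { 42, 7, 19, 3 };
        size_t n = sizeof values / sizeof values[0];

        qsort(values, n, sizeof values[0], compare_ints);

        for (size_t i = 0; i < n; i++)
            printf("%d ", values[i]);
        printf("\n");
        return 0;
    }

Nothing stops you from passing compare_ints to qsort together with an array of doubles; it will compile cleanly and misbehave at runtime, which is exactly the kind of type-safety gap the answer is describing.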
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181809", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35604/" ] }
181,817
One of the differences between svn and git is the ability to control access to the repository. It's hard to compare the two because there is a difference of perspective about who should be allowed to commit changes at all! This question is about using git as a centralized repository for a team at a company somewhere. Assume that the members of the team are of varying skill levels, much the way they are at most companies. Git seems to assume that only your best (most productive, most experienced) programmers are trusted to check in code. If that's the case, you are taking their time away from actually writing code so they can review other people's code in order to check it in. Does this pay off? I really want to focus this question on what is the best use of your best programmers' time, not on best version-control practices in general. A corollary might be: do good programmers quit if a significant portion of their job is to review other people's code? I think both questions boil down to: is the review worth the productivity hit?
Since it's not clear from your question, I just want to point out that a gatekeeper workflow is by no means required with git. It's popular with open source projects because of the large number of untrusted contributors, but doesn't make as much sense within an organization. You have the option to give everyone push access if you want. What people are neglecting in this analysis is that good programmers spend a lot of time dealing with other programmers' broken code anyway. If everyone has push access, then the build will get broken, and the best programmers tend to be the ones frequently integrating and tracking down the culprits when things break. The thing about everyone having push access is that when something breaks, everyone who pulls gets a broken build until the offending commit is reverted or fixed. With a gatekeeper workflow, only the gatekeeper is affected. In other words, you are affecting only one of your best programmers instead of all of them. It might turn out that your code quality is fairly high and the cost-benefit ratio of a gatekeeper is still not worth it, but don't neglect the familiar costs. Just because you are accustomed to that productivity loss doesn't mean it isn't incurred. Also, don't forget to explore hybrid options. It's very easy with git to set up a repository that anyone can push to, then have a gatekeeper like a senior developer, tester, or even an automated continuous integration server decide if and when a change makes it into a second, more stable repository. That way you can get the best of both worlds.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181817", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62323/" ] }
181,921
Is every number in the code that we pass to a method as an argument considered a magic number? To me, it shouldn't be. I think if some number is, let's say, the minimum length of a user name, and we start using "6" in the code... then yeah, we have a maintenance problem and here "6" is a magic number. But if we are calling a method where one of its arguments accepts an integer, for example as the index of the ith member of a collection, and we pass "0" to that method call, in this case I don't see that "0" as a magic number. What do you think?
If the meaning of the number is very clear in the context, I don't think it's a "magic number" problem.

Example: Let's say you're trying to get the substring of a string, from the beginning to some token, and the code looks like this (imaginary language and library):

    s := substring(big_string, 0, findFirstOccurence(SOME_TOKEN, big_string));

In this context, the meaning of the number 0 is clear enough. I suppose you could define START_OF_SUBSTRING and set it to 0, but in this case I think it would be overkill (although it would be the correct approach if you knew that the start of your substring might not be 0, but that depends on the specifics of your situation).

Another example might be if you are trying to determine if a number is even or odd. Writing:

    isEven := x % 2;

is not as strange as:

    TWO := 2;
    isEven := x % TWO;

Testing negative numbers as

    MINUS_ONE := -1;
    isNegativeInt := i <= MINUS_ONE;

also feels weird to me, I'd much rather see

    isNegativeInt := i <= -1;
{ "source": [ "https://softwareengineering.stackexchange.com/questions/181921", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35793/" ] }
182,037
When I first started working, a mainframe assembler programmer showed me how they swap two values without using the traditional algorithm of:

    a = 0xBABE
    b = 0xFADE

    temp = a
    a = b
    b = temp

What they used to swap two values - from a bit to a large buffer - was:

    a = 0xBABE
    b = 0xFADE

    a = a XOR b
    b = b XOR a
    a = a XOR b

    now b == 0xBABE
        a == 0xFADE

which swapped the contents of two objects without the need for a third temp holding space. My question is: is this XOR swap algorithm still in use, and where is it still applicable?
When using xorswap there's a danger of supplying the same variable as both arguments to the function, which zeroes out said variable due to it being xor'd with itself, turning all the bits to zero. Of course this would result in unwanted behavior regardless of the algorithm used, but the behavior might be surprising and not obvious at first glance. Traditionally xorswap has been used for low-level implementations for swapping data between registers. In practice there are better alternatives for swapping variables in registers. For example, Intel's x86 has an XCHG instruction which swaps the contents of two registers. Many times a compiler will figure out the semantics of such a function (it swaps the contents of the values passed to it) and can make its own optimizations if needed, so trying to optimize something as trivial as a swap function does not really buy you anything in practice. It's best to use the obvious method unless there's a proven reason why it would be inferior to, say, xorswap within the problem domain.
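A small C sketch of my own (not the answerer's) showing both the trick and the aliasing trap described above:

    #include <assert.h>

    /* XOR swap works when a and b refer to distinct objects... */
    static void xorswap(unsigned *a, unsigned *b)
    {
        *a ^= *b;
        *b ^= *a;
        *a ^= *b;
    }

    int main(void)
    {
        unsigned a = 0xBABE, b = 0xFADE;
        xorswap(&a, &b);
        assert(a == 0xFADE && b == 0xBABE);

        /* ...but if both pointers alias the same object, the first XOR
           zeroes it and the original value is lost for good. */
        unsigned c = 0x1234;
        xorswap(&c, &c);
        assert(c == 0);

        return 0;
    }

A temp-based swap (or std::swap in C++) has no such edge case, which is one more reason to prefer the obvious version.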
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182037", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/77075/" ] }
182,057
In agile methodologies (e.g. Scrum), the complexity/effort needed for user stories is measured in story points. Story points are used to calculate how many user stories a team can take into an iteration. What is the advantage of introducing an abstract concept (story points), when we could just use a concrete measurement like estimated man-days? We can also calculate velocity, estimate the coverage of an iteration, etc. using estimated man-days. In contrast, story points are harder to use (because the concept is abstract), and also harder to explain to stakeholders. What advantage do they offer?
I think one of the main advantages is that humans in general, and developers specifically, are actually pretty bad at estimating time. Think of the nature of development too -- it's not some linear progression from start to finish. It's often "write 90% of the code in 10 minutes and then tear your hair out debugging for 17 hours." That's pretty hard to estimate in the clock timing sense. But using an abstraction takes the focus off of the actual time in hours or days and instead puts the focus on describing the relative expense and complexity of a task as compared to other tasks. Humans/developers are better at that. And then, once you get humming with those point estimates and some actual progress, you can start to look at time more empirically. I suspect that there is also an observer effect that happens with time estimates that wouldn't happen with point estimates. For instance, the incentive to sandbag an estimate and deliver way "ahead of schedule" is going to be muted with indirection in a point based system.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182057", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8486/" ] }
182,093
I'm a python beginner, and I just learned a technique involving dictionaries and functions. The syntax is easy and it seems like a trivial thing, but my python senses are tingling. Something tells me this is a deep and very pythonic concept and I'm not quite grasping its importance. Can someone put a name to this technique and explain how/why it's useful? The technique is when you have a python dictionary and a function that you intend to use on it. You insert an extra element into the dict, whose value is the function itself. When you're ready to call the function you issue the call indirectly by referring to the dict element, not the function by name. The example I'm working from is from Learn Python the Hard Way, 2nd Ed. (This is the version available when you sign up through Udemy.com ; sadly the live free HTML version is currently Ed 3, and no longer includes this example). To paraphrase:

    # make a dictionary of US states and major cities
    cities = {'San Diego':'CA', 'New York':'NY', 'Detroit':'MI'}

    # define a function to use on such a dictionary
    def find_city (map, city):
        # does something, returns some value
        if city in map:
            return map[city]
        else:
            return "Not found"

    # then add a final dict element that refers to the function
    cities['_found'] = find_city

Then the following expressions are equivalent. You can call the function directly, or by referencing the dict element whose value is the function.

    >>> find_city (cities, 'New York')
    NY
    >>> cities['_found'](cities, 'New York')
    NY

Can someone explain what language feature this is, and maybe where it comes into play in "real" programming? This toy exercise was enough to teach me the syntax, but didn't take me all the way there.
Using a dict lets you translate the key into a callable. The key doesn't need to be hardcoded though, as in your example. Usually, this is a form of caller dispatch, where you use the value of a variable to connect to a function. Say a network process sends you command codes; a dispatch mapping lets you translate the command codes easily into executable code:

    import os

    def do_ping(arg):
        return 'Pong, {0}!'.format(arg)

    def do_ls(arg):
        return '\n'.join(os.listdir(arg))

    dispatch = {
        'ping': do_ping,
        'ls': do_ls,
    }

    def process_network_command(command, arg):
        send(dispatch[command](arg))   # send() stands in for your network layer

Note that what function we call now depends entirely on what the value of command is. The key doesn't have to match either; it doesn't even have to be a string, you could use anything that can be used as a key, and that fits your specific application. Using a dispatch method is safer than other techniques, such as eval() , as it limits the commands allowable to what you defined beforehand. No attacker is going to sneak a ls)"; DROP TABLE Students; -- injection past a dispatch table, for example.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182093", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72437/" ] }
182,094
I am working on developing a relational database that tracks transactions that occur on a device I'm working on for my company. There are different types of transactions that could occur on the device, so we have a "trans_type" field in one of our main record tables. My group has decided to make the type of this field an integer and to treat it as an enumerated type. My intuition tells me that it would be a better idea to make this field a string so that our database data would be more readable and usable. My co-workers seem to be worried that this would cause more trouble than it is worth: that string comparisons are too costly and the possibility of typos is too great a barrier. So, in your opinion, when dealing with a field in a relational database that is essentially an enumerated value, is it a better design decision to make this field an integer or a string? Or is there some other alternative I've overlooked? Note: explicit enumerated types are not supported by the database we are using. And the software we are developing that will interface with this database is written in C++.
Enumerated types should be a separate table in your database that has an id number and a string name (and any other columns you might find useful). Then each type exists as a row in this table. Then in the table where you are recording the transactions, the "trans_type" field should be a foreign key to the key of that reference table. This is a standard practice in database normalization. This way you have stored the one official name string, you get to use number comparisons for performance, and you have referential integrity ensuring that every transaction has a valid type.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182094", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65474/" ] }
182,103
Getting a few projects started with EF, but I have some questions about join tables and keys, etc. Let's say I have a table of applications and a table of permissions. Applications have many permissions and each permission can belong to many applications (many-to-many). Now, the Application and Permission tables are easy:

    Applications
    --------------
    PK ApplicationID
       Name

    Permissions
    --------------
    PK PermissionID
       Name

But what's the BEST way to do the join table? I have these two options:

    ApplicationPermissions
    -----------------------
    PK ApplicationPermissionID
    CU ApplicationID
    CU PermissionID

OR

    ApplicationPermissions
    -----------------------
    CPK ApplicationID
    CPK PermissionID

    PK = Primary Key
    CPK = Composite Primary Key
    CU = Composite Unique Index

Have you ever been burned doing it one way over the other? Is it strictly preference? It has occurred to me that a lot of the "differences" will be abstracted away by my repository pattern (for example, I would almost never create an entire permission object and add it to an application, but do it by ID or unique name or something), but I guess I'm looking for horror stories, one way or the other.
I believe you mean "junction" table, not "join" table. There is no need for a junction table to have its own ID field. You would never need to join or filter on such an ID. You would only join or filter on the IDs of the tables you're mapping. An ID on a junction table is a waste of disk space. So the "best" option is to avoid the ID. Typically a junction table will have 2 covering indexes, each covering index using one of the mapped IDs as the primary sort field. But it is not "best" by a long shot. It's a very minor issue to have a redundant ID field. You will not have any horror stories over a small amount of wasted disk. The ID won't "steal" the clustered index because you don't want to cluster on the mapped combo anyway. If your framework wants all tables to have an ID, then go for it. If your team's database standards dictate that all tables must have an ID, then go for it. If not, then avoid it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182103", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59764/" ] }
182,113
I always have trouble figuring out if I should name a certain method starting with getSomething versus findSomething . The problem resides in creating helpers for poorly designed APIs. This usually occurs when getting data from an object, which requires the object as a parameter. Here is a simple example:

    public String getRevision(Item item) {
        service.load(item, "revision");
        // there is usually more work to do before getting the data..
        try {
            return item.get_revision();
        } catch(NotLoadedException exception) {
            log.error("Property named 'property_name' was not loaded", exception);
        }
        return null;
    }

How and why should I decide between naming this method getRevision() or findRevision() ?
I use Get when I know the retrieval time will be very short (as in a lookup from a hash table or btree). Find implies a search process or computational algorithm that requires a "longer" period of time to execute (for some arbitrary value of longer).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182113", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53251/" ] }
182,158
As someone who's still new to agile, I'm not sure I completely understand the relationship or difference between user story, feature, and epic. According to this question , a feature is a collection of stories. One of the answers suggests that a feature is actually an epic. So are features and epics considered the same thing, that is basically a collection of related user stories? Our project manager insists that there's a hierarchical structure: Epic -> Features -> User stories And that basically all user stories must fall within this structure. Therefore all user stories must fall under an umbrella feature and all features must fall under an epic. To me, that sounds awkward. Can someone please clarify how user stories, features, and epics are related? Or is there an article that clearly outlines the differences?
They are actually very generic terms. There are many ways to interpret them, varying in the literature and in how people see them. Take everything I say with a huge grain of salt.

Usually, an Epic comprises a very global and not very well defined functionality in your software. It is very broad. It will usually be broken down into smaller user stories or features when you try to make sense of it and make them fit in an agile iteration. Example:

Epic: Allow the customer to manage its own account via the Web

Features and User Stories are more specific pieces of functionality, which you can easily test with acceptance tests. It is often recommended that they be granular enough to fit in a single iteration. Features usually tend to describe what your software does:

Feature: Editing the customer information via the web portal

User stories tend to express what the user wants to do:

User story: As a bank clerk, I want to be able to modify the customer information so that I can keep it up to date.

I don't think there is really a hierarchy between the two, but you can have one if you want or if it fits how you work. A user story can be a specific justification for a feature, or a specific way to do it. Or it can be the other way around. A feature can be a way to realize a user story. Or they can denote the same thing. You can use both: user stories to define what brings business value and features to describe constraints of the software.

User story: As a customer, I want to pay with the most popular credit cards

Feature: Support the GOV-TAX-02 XML API of the government.

There is also the question of scenarios, which are usually a way a feature/user story will be executed. They usually map cleanly to a specific acceptance test. For example:

Scenario: Withdrawing money

    Given I have 2000$ in my bank account
    When I withdraw 100$
    Then I receive 100$ in cash
    And my balance is 1900$

That is how we define those terms where I work. Those definitions are far from a mathematical definition or a standardized term. It's like the difference between a right-wing politician and a left-wing politician. It depends where you live. In Canada, what is considered right wing may be considered left wing in the United States. It's very variable.

Seriously, I wouldn't worry too much about it. The important thing is that everyone on the team agrees on a definition so you can understand each other. Some methods like Scrum tend to define them more formally, but pick what works for you and leave the rest. After all, isn't agile about Individuals and interactions over processes and tools and Working software over comprehensive documentation ?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182158", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45/" ] }
182,282
Today I got yelled at for developing an application on a production server. Quote: " developing on a production server is not acceptable - ever! " Here is the situation. I set up a development instance: http://example.com:3000 The production instance is: http://example.com I complete all my development work on http://example.com:3000 and when the client is pleased with the changes, I move them over to http://example.com . The application I am working with is an old Ruby on Rails application, and I have to say that initially, I did try to set up a development environment locally, but I could never get it running. After trying for a while, I gave up and decided to develop on the production server. Again, am I an idiot? Probably so, but I've been doing web development for a couple of years now, and I have never encountered a situation like this. Who is right and why?
I used to develop on the production server. It can work fine, but it is inadvisable for at least two reasons:

1. Development code can cause infinite loops, memory leaks, or other problems that lock up the CPU, eat up all the memory, or otherwise affect the server in a way that will impact your production code.
2. If you need to make changes to components of the server environment as part of your development work, like the version of Ruby or MySQL or whatever, you'll be in a bind.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182282", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/78222/" ] }
182,306
Why is it often said that test cases need to be written before we start coding? What are the pros of doing so, and what are the cons if we don't listen to this advice? Moreover, does that advice refer to black-box testing, white-box testing, or both?
Writing tests before implementations is one of the core ideas behind Test Driven Development (TDD). The procedure is:

1. write a failing test
2. change the code to make it pass, without breaking any other test
3. refactor the code, verify that all tests still pass

This procedure is the same whether you implement a feature or fix a bug; it is often referred to as "Red-Green-Refactor" (red = test fails, green = test passes).

The advantages of this method are numerous:

- the test clearly defines what constitutes "done"
- the test documents how you intend the code to be used
- if you do it right, you build a complete test suite with 100% coverage as a by-product of your development process, which means you always have reliable regression tests at hand no matter what you do
- coding to meet a test case (and nothing else) helps keep you focused on one task and prevents feature creep
- the red-green-refactor cycle makes for a nice, clean and traceable commit history and fits a feature branch repository approach well (each feature branch starts with a commit that adds a failing test, then one or more commits until "green" is reached, and then one or more refactoring commits)
- you produce working versions at regular intervals - ideally, each red-green-refactor cycle ends with a potentially shippable product, but at the very least, it should build without failures and pass all the tests

Now for the downsides:

- Not all code lends itself to automated testing. This can be because it's a legacy codebase written without automated tests in mind; it can be because the problem domain is such that certain things cannot be mocked; or maybe you rely on external data sources that are not under your control and would be too complicated to mock; etc.
- Sometimes, correctness, maintainability and stability are not high enough on the priority list. This sounds bad, but it doesn't have to be: if you're writing a one-use script to batch convert a bunch of documents, writing tests at all is silly - you just run it on a few test documents, eyeball the result, and that's that. Automated testing wouldn't add any value here.
- Writing the test before writing the code implies that you already know exactly what you want done. Quite often, though, you don't, and a significant amount of explorative programming is required to get a feeling for the particular problem domain; and more often than not, the results of such explorative coding are then bashed into shape to become the real product. Automated tests are just as valuable for such products, but it doesn't make much sense adding them before you know what you're going to make. In this case, it's better to take the prototype at the end of the explorative phase, add tests to cover everything you have so far, and switch to red-green-refactor from there.
- Test-before-implementation isn't everyone's cup of tea; writing the implementation first feels more natural, and some people have an easier time getting into a flow that way.
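To make one red-green cycle concrete, here is a tiny, framework-free sketch in C of my own (not from the answer; the function and its behaviour are invented for the example):

    #include <assert.h>

    /* Step 1 (red): write the test first. It won't even link until
       slugify_char() exists, and won't pass until it behaves. */
    static char slugify_char(char c);

    static void test_slugify_char(void)
    {
        assert(slugify_char('A') == 'a');   /* letters are lower-cased   */
        assert(slugify_char(' ') == '-');   /* spaces become hyphens     */
        assert(slugify_char('7') == '7');   /* digits pass through as-is */
    }

    /* Step 2 (green): the simplest implementation that makes it pass. */
    static char slugify_char(char c)
    {
        if (c >= 'A' && c <= 'Z')
            return (char)(c - 'A' + 'a');
        if (c == ' ')
            return '-';
        return c;
    }

    /* Step 3 (refactor): restructure freely, rerunning the test after
       every change as a safety net. */

    int main(void)
    {
        test_slugify_char();
        return 0;
    }

In a real project you would use a test framework and keep the test in its own file, but the rhythm is exactly this: failing test, minimal code to pass, then clean-up.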
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182306", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23355/" ] }
182,314
I was reading about some development interview practices, specifically about the technical questions and tests asked at interviews, and I've stumbled quite a few times over sayings of the sort "OK, you solved the problem with a while loop, now can you do it with recursion?", or "everyone can solve this with a 100-line while loop, but can they do it in a 5-line recursive function?", etc. My question is: is recursion generally better than if/while/for constructs? I've honestly always thought that recursion is not to be preferred, because it's limited to the stack memory, which is a lot smaller than the heap, and because making a great number of function/method calls is suboptimal from a performance standpoint, but I may be wrong...
Recursion is not intrinsically better or worse than loops - each has advantages and disadvantages, and those even depend on the programming language (and implementation).

Technically, iterative loops fit typical computer systems better at the hardware level: at the machine code level, a loop is just a test and a conditional jump, whereas recursion (implemented naively) involves pushing a stack frame, jumping, returning, and popping back from the stack. OTOH, many cases of recursion (especially those that are trivially equivalent to iterative loops) can be written so that the stack push / pop can be avoided; this is possible when the recursive function call is the last thing that happens in the function body before returning, and it's commonly known as a tail call optimization (or tail recursion optimization ). A properly tail-call-optimized recursive function is mostly equivalent to an iterative loop at the machine code level.

Another consideration is that iterative loops require destructive state updates, which makes them incompatible with pure (side-effect free) language semantics. This is the reason why pure languages like Haskell do not have loop constructs at all, and many other functional-programming languages either lack them completely or avoid them as much as possible.

The reason why these questions appear so much in interviews, though, is because in order to answer them, you need a thorough understanding of many vital programming concepts - variables, function calls, scope, and of course loops and recursion - and you have to bring the mental flexibility to the table that allows you to approach a problem from two radically different angles, and move between different manifestations of the same concept.

Experience and research suggest that there is a line between people who have the ability to understand variables, pointers, and recursion, and those who don't. Almost everything else in programming, including frameworks, APIs, programming languages and their edge cases, can be acquired through studying and experience, but if you are unable to develop an intuition for these three core concepts, you are unfit to be a programmer. Translating a simple iterative loop into a recursive version is about the quickest possible way of filtering out the non-programmers - even a rather inexperienced programmer can usually do it in 15 minutes, and it's a very language-agnostic problem, so the candidate can pick a language of their choice instead of stumbling over idiosyncrasies.

If you get a question like this in an interview, that's a good sign: it means the prospective employer is looking for people who can program, not people who have memorized a programming tool's manual.
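A small illustration (my own, in C) of the loop/tail-recursion equivalence described above; whether the compiler actually performs the tail-call optimization depends on the compiler and optimization flags:

    #include <stdint.h>

    /* Iterative version: at machine level, a test and a conditional jump. */
    static uint64_t factorial_iter(unsigned n)
    {
        uint64_t acc = 1;
        while (n > 1) {
            acc *= n;
            n--;
        }
        return acc;
    }

    /* Tail-recursive version: the recursive call is the very last thing the
       function does, so an optimizing compiler can compile it down to the
       same loop instead of growing the stack. */
    static uint64_t factorial_tail(unsigned n, uint64_t acc)
    {
        if (n <= 1)
            return acc;
        return factorial_tail(n - 1, acc * n);
    }

    int main(void)
    {
        return factorial_iter(10) == factorial_tail(10, 1) ? 0 : 1;
    }

Note that C compilers are allowed but not required to do this; in languages like Scheme the tail-call guarantee is part of the language specification.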
{ "source": [ "https://softwareengineering.stackexchange.com/questions/182314", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38218/" ] }
183,576
I stumbled upon this post that talks about making async web requests. Now, simplicity aside, if in the real world all you do is make an async request and wait for it in the very next line, isn't that the same as making a sync call in the first place?
No, async + await != sync , because of continuation. From MSDN 'Asynchronous Programming with Async and Await (C# and Visual Basic)':

Async methods are intended to be non-blocking operations. An await expression in an async method doesn't block the current thread while the awaited task is running. Instead, the expression signs up the rest of the method as a continuation and returns control to the caller of the async method.

For example, async execution will not block the UI thread, and SomeTextBox.Text will be updated after the download has finished:

    private async void OnButtonClick()
    {
        SomeTextBox.Text = await new WebClient().DownloadStringTaskAsync("http://stackoverflow.com/");
    }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183576", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28013/" ] }
183,645
I know pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer, reviews each line of code as it is typed in. But I wonder whether the strategy still works in cases like these: for example, if the two have very different programming skill levels, or if one has no experience in the problem domain while the other has. Is it still OK if both have a low programming skill level? Could you suggest how pair programming should be applied in the cases above?
Assuming that the more experienced person in the pair has the temperament to mentor the other person, pairing someone with little experience in the language or the problem domain with an experienced person would facilitate knowledge transfer. The less experienced person would have a mentor to instruct them on the language, the domain, the application, and the best practices or conventions of the team. There's an interesting summary on the C2 wiki about knowledge transfer using pair programming . The more senior person, who was brought on to serve as the team mentor, learned a lot from the junior programmers, and his knowledge even increased as a result of being paired with more junior, less experienced software developers. There are some other stories about expert programmers being paired with domain experts, as well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183645", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52934/" ] }
183,723
Most low latency/high frequency programming jobs (based on the job specs) appear to be implemented on Unix platforms. In a lot of the specs they make a particular request for people with "low latency Linux" experience. Assuming this does not mean a real-time Linux OS, could people help me with what this could be referring to? I know you can set CPU affinity for threads, but I am assuming they are asking for much more than that. Kernel tuning? (Although I heard manufacturers like Solarflare produce kernel-bypass network cards anyway.) What about DMA, or possibly shared memory between processes? If people could give me brief ideas, I can go and do the research on Google. (This question will probably require somebody familiar with high frequency trading.)
I've done a fair amount of work supporting HFT groups in IB and Hedge Fund settings. I'm going to answer from the sysadmin view, but some of this is applicable to programming in such environments as well.

There are a couple of things an employer is usually looking for when they refer to "Low Latency" support. Some of these are "raw speed" questions (do you know what type of 10g card to buy, and what slot to put it in?), but more of them are about the ways in which a High Frequency Trading environment differs from a traditional Unix environment. Some examples:

- Unix is traditionally tuned to support running a large number of processes without starving any of them for resources, but in an HFT environment, you are likely to want to run one application with an absolute minimum of overhead for context switching, and so on. As a classic small example, turning on hyperthreading on an Intel CPU allows more processes to run at once -- but has a significant performance impact on the speed at which each individual process is executed. As a programmer, you're likewise going to have to look at the cost of abstractions like threading and RPC, and figure out where a more monolithic solution -- while less clean -- will avoid overhead.

- TCP/IP is typically tuned to prevent connection drops and make efficient use of the bandwidth available. If your goal is to get the lowest latency possible out of a very fast link -- instead of to get the highest bandwidth possible out of a more constrained link -- you're going to want to adjust the tuning of the network stack. From a programming side, you'll likewise want to look at the available socket options, and figure out which ones have defaults more tuned for bandwidth and reliability than for reducing latency.

- As with networking, so with storage -- you're going to want to know how to tell a storage performance problem from an application problem, and learn what patterns of I/O usage are least likely to interfere with your program's performance (as an example, learn where the complexity of using asynchronous IO can pay off for you, and what the downsides are).

- Finally, and more painfully: we Unix admins want as much information on the state of the environments we monitor as possible, so we like to run tools like SNMP agents, active monitoring tools like Nagios, and data gathering tools like sar(1). In an environment where context switches need to be absolutely minimized and use of disk and network IO tightly controlled, though, we have to find the right tradeoff between the expense of monitoring and the bare-metal performance of the boxes monitored. Similarly, what techniques are you using that make coding easier but are costing you performance?

Finally, there are other things that just come with time; tricks and details that you learn with experience. But these are more specialized (when do I use epoll? why do two models of HP server with theoretically identical PCIe controllers perform so differently?), more tied to whatever your specific shop is using, and more likely to change from one year to another.
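As one tiny, concrete example of the "socket options tuned for latency rather than bandwidth" point (my own sketch, not from the answer; the buffer size shown is arbitrary and purely illustrative):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* Trade throughput for latency: disable Nagle's algorithm so small
           writes hit the wire immediately instead of being coalesced. */
        int one = 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one) < 0)
            perror("setsockopt TCP_NODELAY");

        /* Shops that distrust autotuning sometimes pin the kernel buffer
           sizes explicitly as well. */
        int bufsize = 1 << 20;
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof bufsize) < 0)
            perror("setsockopt SO_RCVBUF");

        close(fd);
        return 0;
    }

Real deployments go much further (busy-polling, kernel-bypass NICs, CPU isolation), but the option-by-option mindset is the same: know what each default is optimized for, and whether that matches your latency goal.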
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183723", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41217/" ] }
183,725
I have been stuck for some time on which is the fastest string search algorithm. I have heard many opinions, but in the end I'm not sure. I have heard some people say that the fastest algorithm is Boyer-Moore and some say that Knuth-Morris-Pratt is actually faster. I have looked up the complexity of both of them, and they mostly look the same, O(n+m). I have found that in the worst case Boyer-Moore has O(nm) complexity, compared to Knuth-Morris-Pratt which has O(m+2*n), where n = length of text and m = length of pattern. As far as I know, Boyer-Moore has linear worst-case time if I use the Galil rule. My question: overall, which is actually the fastest string search algorithm? (This question includes all possible string algorithms, not just Boyer-Moore and Knuth-Morris-Pratt.) Edit: Due to this answer , what I'm exactly looking for is: given a text T and a pattern P, I have to find all the appearances of P in T. Also, the lengths of P and T are from [1, 2 000 000] and the program has to run under 0.15 sec. I know that KMP and Rabin-Karp are enough to get a 100% score on the problem, but I for one wanted to try and implement Boyer-Moore. Which would be best for this type of pattern search?
It depends on the kind of search you want to perform. Each of the algorithms performs particularly well for certain types of search, but you have not stated the context of your searches. Here are some typical thoughts on search types:

- Boyer-Moore: works by pre-analyzing the pattern and comparing from right-to-left. If a mismatch occurs, the initial analysis is used to determine how far the pattern can be shifted w.r.t. the text being searched. This works particularly well for long search patterns. In particular, it can be sub-linear, as you do not need to read every single character of your text.

- Knuth-Morris-Pratt: also pre-analyzes the pattern, but tries to re-use whatever was already matched in the initial part of the pattern to avoid having to rematch that. This can work quite well if your alphabet is small (f.ex. DNA bases), as you get a higher chance that your search patterns contain reuseable subpatterns.

- Aho-Corasick: needs a lot of preprocessing, but does so for a number of patterns. If you know you will be looking for the same search patterns over and over again, then this is much better than the others, because you need to analyse patterns only once, not once per search.

Hence, as usual in CS, there is no definite answer to the overall best . It is rather a matter of choosing the right tool for the job at hand.

Another note on your worst-case reasoning: do consider the kinds of searches required to create that worst case and think thoroughly about whether these are really relevant in your case. For example, the O(mn) worst-case complexity of the Boyer-Moore algorithm stems from a search pattern and a text that each use only one character (like finding aaa in aaaaaaaaaaaaaaaaaaaaa ) - do you really need to be fast for searches like that?
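Since the question mentions wanting to try Boyer-Moore: here is a compact sketch of my own (not part of the answer) of Boyer-Moore-Horspool, the simplified variant that keeps only the bad-character rule; it is usually the easiest member of the family to get right, and often fast enough in practice:

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    /* Print every position where pat occurs in text. */
    static void horspool_search(const char *text, const char *pat)
    {
        size_t n = strlen(text), m = strlen(pat);
        if (m == 0 || m > n)
            return;

        /* Bad-character shift table: how far we may slide the window
           based on the character currently under the window's last slot. */
        size_t shift[UCHAR_MAX + 1];
        for (size_t c = 0; c <= UCHAR_MAX; c++)
            shift[c] = m;
        for (size_t i = 0; i + 1 < m; i++)
            shift[(unsigned char)pat[i]] = m - 1 - i;

        size_t pos = 0;
        while (pos + m <= n) {
            size_t i = m;
            while (i > 0 && text[pos + i - 1] == pat[i - 1])
                i--;                      /* compare right-to-left */
            if (i == 0)
                printf("match at %zu\n", pos);
            pos += shift[(unsigned char)text[pos + m - 1]];
        }
    }

    int main(void)
    {
        horspool_search("here is a simple example", "example");
        return 0;
    }

For the 0.15 second constraint in the question, a clean KMP or Rabin-Karp in a compiled language is normally sufficient; full Boyer-Moore with the good-suffix rule (and the Galil rule for the linear worst case) only pays off once patterns get longer.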
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183725", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/78338/" ] }
183,760
Disclaimer : Not as exaggerated as the title suggests, but it still makes me uncomfortable. I'm just going to express honestly, so take it with a grain of salt. Just pretend that I'm talking about that coding standard that you don't like working with. Edit : The fact that I don't like it, doesn't mean I don't use it or enforce it. I decided to ask this question in the spirit of how to get over a standard you don't like, not to get help on how to better argue how it can be changed (although any comments regarding this last part is appreciated). Besides, I work in a big company and such a change of something that has lived for so long and that matters so little is unlikely. The standard is the opening-curly-brace-on-dedicated-line standard: somefunction() { //... } Instead of the *clearly superior* (note the joking/frustrated tone): somefunction() { //... } My personal arguments against the standard: It bloats code : extra unnecessary lines Harder to type : although probably this is just me struggling with the standard, I know one extra keystroke isn't that bad. Not easier to read : I start reading a function declaration, if statement, or any other scope-stacking statement and I already don't have to look for an opening brace. Nested blocks with this standard just make me angry for some reason. Used by people who come from a Microsoft IDE background : I think there should be an argumented reason (or more) behind a standard, not just take it in by paradigm. Their arguments (and my way of internally retorting to them): Easier to read because you can see where blocks start and end right away : I cannot understand this, what good is the block if you don't know what it is owned by, so then you have to read backwards. I used it in a Microsoft IDE and I liked it : Uhh... ok? It is in the standard : *cringes* Am I the only one that struggles with an opinionated stance against a specific standard?, how have you gotten over these?, what is your opinion on what this particular standard should be (just for fun)?
If you want to get over this — there was a quote by Torvalds: Bad programmers worry about the code. Good programmers worry about data structures and their relationships. Now consider, where does it put programmers who worry about such a minor thing like bracing style enforced by their code standard? Is your codebase otherwise so pristine that bracing is the only issue worth the time to argue about?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183760", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10869/" ] }
183,807
I've written a recursive search algorithm to find the boundaries of a voxel data structure in order to render it more efficiently. I've looked around, and either it's such a simple and obvious technique that nobody's bothered to patent it, or it's novel and nobody's done it this way before. It's openly "published" on GitHub and protected under the GPL. I'd like to show it to others, to see if it can be improved, however... I fear that although I've written and published it, someone may attempt to patent the same idea. Am I safe, protected by the banners of open source software, or must I attempt to protect myself like the big guns and patent trolls do? It's my belief that software patents are evil, and that in order for the best software to be written, many eyes need to see it. I'm worried this may be a rather naïve viewpoint on how software is written, though, and I'm curious as to what others think.
Disclaimer: I am not a lawyer. If you are concerned enough, seek professional legal advice. Assuming we are dealing with US law, it would be very difficult for someone to patent it now because the code on GitHub would be prior art. However, someone may have already filed a patent before you first published the work to GitHub. Make sure you keep any notes, source code or similar material if it significantly predates the GitHub work. I would not recommend looking for similar patents because they can be very difficult to read and, if you do find one and continue, your liability triples under US law. However, I would recommend searching for similar implementations outside patents as there may be existing prior art elsewhere. As someone whose professional work used to include reviewing patent applications and looking for prior art, if you do not find anything similar, I would guess you are not searching in the right places or using the correct terms. Also note that, even if someone else does patent it, they may not assert their right to prevent you using the invention. They would only do so if your use of the invention materially impacts their sales or otherwise made them more money than taking legal action against you. As mentioned above, seek professional advice if it concerns you. [Edit: Added the following.] Also remember that the GitHub code is only prior art for that exact implementation. There may be variations, alternatives or improvements, for example, so keeping notes or a diary for potentially patentable work is critical.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183807", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/78649/" ] }
183,842
I was reading a thread titled "strlen vs sizeof" on CodeGuru , and one of the replies states that "it's anyways [sic] bad practice to initialie [sic] a char array with a string literal." Is this true, or is that just his (albeit an "elite member") opinion? Here is the original question: #include <stdio.h> #include<string.h> main() { char string[] = "october"; strcpy(string, "september"); printf("the size of %s is %d and the length is %d\n\n", string, sizeof(string), strlen(string)); return 0; } right. the size should be the length plus 1 yes? this is the output the size of september is 8 and the length is 9 size should be 10 surely. its like its calculating the sizeof string before it is changed by strcpy but the length after. Is there something wrong with my syntax or what? Here is the reply : It's anyways bad practice to initialie a char array with a string literal. So always do one of the following: const char string1[] = "october"; char string2[20]; strcpy(string2, "september");
It's anyways bad practice to initialie a char array with a string literal. The author of that comment never really justifies it, and I find the statement puzzling. In C (and you've tagged this as C), that's pretty much the only way to initialize an array of char with a string value (initialization is different from assignment). You can write either char string[] = "october"; or char string[8] = "october"; or char string[MAX_MONTH_LENGTH] = "october"; In the first case, the size of the array is taken from the size of the initializer. String literals are stored as arrays of char with a terminating 0 byte, so the size of the array is 8 ('o', 'c', 't', 'o', 'b', 'e', 'r', 0). In the second two cases, the size of the array is specified as part of the declaration (8 and MAX_MONTH_LENGTH , whatever that happens to be). What you cannot do is write something like char string[]; string = "october"; or char string[8]; string = "october"; etc. In the first case, the declaration of string is incomplete because no array size has been specified and there's no initializer to take the size from. In both cases, the = won't work because a) an array expression such as string may not be the target of an assignment and b) the = operator isn't defined to copy the contents of one array to another anyway. By that same token, you can't write char string[] = foo; where foo is another array of char . This form of initialization will only work with string literals. EDIT I should amend this to say that you can also initialize arrays to hold a string with an array-style initializer, like char string[] = {'o', 'c', 't', 'o', 'b', 'e', 'r', 0}; or char string[] = {111, 99, 116, 111, 98, 101, 114, 0}; // assumes ASCII but it's easier on the eyes to use string literals. EDIT 2 In order to assign the contents of an array outside of a declaration, you would need to use either strcpy/strncpy (for 0-terminated strings) or memcpy (for any other type of array): if (sizeof string > strlen("october")) strcpy(string, "october"); or strncpy(string, "october", sizeof string); // only copies as many characters as will // fit in the target buffer; 0 terminator // may not be copied, but the buffer is // uselessly completely zeroed if the // string is shorter!
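For completeness, here is a small self-contained C program (not part of the original answer) showing the snippet from the question with a buffer that is actually large enough; the size 20 is an arbitrary choice. With a correctly sized array, sizeof reports the compile-time array size (20) while strlen reports the run-time string length (9), which is the distinction the question stumbled over: the original char string[] = "october" only reserved 8 bytes, so copying "september" into it overflowed the buffer and the program's output was undefined. Note also that %zu is the correct conversion specifier for size_t values such as the results of sizeof and strlen.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char string[20] = "october";   /* 20 bytes: room for the longest month name */

        strcpy(string, "september");   /* 9 characters + terminating 0 fit comfortably */
        printf("the size of %s is %zu and the length is %zu\n",
               string, sizeof string, strlen(string));   /* prints 20 and 9 */
        return 0;
    }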
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183842", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54480/" ] }
183,865
Here is a C++ class that gets constructed with three values. class Foo{ //Constructor Foo(std::string, int, char); private: std::string foo; char bar; int baz; }; All of the parameter types are different. I could overload the constructor so that order doesn't matter. class Foo{ //Constructors Foo(std::string, char, int); Foo(std::string, int, char); Foo(char, int, std::string); Foo(char, std::string, int); Foo(int, std::string, char); Foo(int, char, std::string); private: std::string foo; char bar; int baz; }; But is that a good idea? I started doing it because I knew what things a class/function needed; I didn't always remember what order it took them in. I've been assuming that the compiler optimizes this as if I called the same constructor. //compiler will implement this with the same code? //maybe not.. I could call a function to get a parameter, //and that function could change the state of the program, before calling //a function to get another parameter and the compiler would have to //implement both Foo foo1("hello",1,'a'); Foo foo2('z',0,"world"); What are your opinions on overloading a function so that the order doesn't matter? Also, If I'm writing some utility functions, Is it a good idea to provide different function names that do the same thing? eg. void Do_Foo(); void DoFoo(); void do_foo(); //etc.. I don't often see these two but similar conventions. Should I break or embrace the habit?
I could overload the constructor so that order [of the parameters] doesn't matter... But is that a good idea? No. Having different constructor overloads will have the opposite effect of what you are intending. The programmer coming after you expects different overloads to have different behavior, and will ask: "What sort of different behavior is being expressed by each of these overloads?" Most programmers expect the discipline of having method parameters in a predefined order, and tools like IntelliSense will tell them the expected order of the parameters as they enter them. Having multiple function names that do the same thing is the same problem; programmers expect the variants to have different behavior. One function or method per behavior, please, and just adopt a consistent naming pattern.
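If the real goal is call-site readability rather than order independence, one option (my suggestion, not part of the original answer) is to make the type an aggregate and use C++20 designated initializers, which name every argument at the call site. The initializers must still follow the member declaration order, but getting the order wrong becomes a compile error instead of a silent bug:

    #include <string>

    struct Foo {              // an aggregate: public members, no user-declared constructor
        std::string foo;
        char bar;
        int baz;
    };

    int main() {
        // Every value is labelled, so a reader never has to remember the order.
        Foo f{ .foo = "hello", .bar = 'a', .baz = 1 };   // requires C++20
        (void)f;
        return 0;
    }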
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183865", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63566/" ] }
183,881
Every time I look for an IDE (currently I'm tinkering with Go), I find a thread full of people recommending Vi, Emacs, Notepad++ etc. I've never done any development outside of an IDE; I guess I've been spoiled. How do you debug without an IDE? Are you limited to just logging?
By using a debugger. For the most part, this is also what an IDE does behind the scenes -- it just wraps the experience in a GUI. On Unix, one of the most commonly used debuggers is GNU gdb , which has largely supplanted the earlier Unix debuggers such as dbx . To get an idea of what debugging looks like / feels like from the command line, you can look at the gdb manual . As in other areas, using a debugger from the command line requires learning a syntax and a set of commands, but brings with it a lot of flexibility and scriptability. On the other hand, if you are already comfortable working in an editor such as vim or emacs, you may find that your favorite editor has a plug in for your favorite debugger.
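As a hedged illustration of what such a command-line session can look like (the file name and variable are placeholders), a typical gdb workflow on a C program is:

    $ gcc -g -O0 buggy.c -o buggy     # build with debugging symbols, optimizations off
    $ gdb ./buggy
    (gdb) break main                  # set a breakpoint at the start of main
    (gdb) run                         # start the program under the debugger
    (gdb) next                        # execute the next line, stepping over calls
    (gdb) step                        # step into a function call
    (gdb) print total                 # inspect the value of a variable
    (gdb) backtrace                   # show the current call stack
    (gdb) continue                    # run until the next breakpoint
    (gdb) quit

The same break/run/inspect loop applies to other language debuggers (pdb for Python, jdb for Java, delve for Go), and, as the answer notes, editors such as vim and emacs can drive these tools from within the editor.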
{ "source": [ "https://softwareengineering.stackexchange.com/questions/183881", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66745/" ] }
184,037
I frequently hear these two valuable programming practices: (1) lines of code should be 80 characters or less and (2) use descriptive names for variables, methods, classes, etc. I understand the reasoning for both of these words of advice, however, they seem to often be tradeoffs of each other. If I keep my code below 80 chars/line I end up using less descriptive names (esp. in Python in which each indentation counts as 4 characters) but if I use more descriptive names, I end up with lines over 80 characters. So, my question is which of these two pieces of advice is more important to abide by if the choice must be made? I am wondering this as an independent (hobbyist) programmer, but more importantly from the perspective of a software engineer working for a larger company.
Keep your indents few, your names descriptive, and don't be afraid to break a line. Keep your indents few. Often when I find myself in a struggle between indents vs descriptive naming, I take a step back and look at my indention level. If you're indenting further than 3 or 4 levels (2 levels is automatic and unavoidable in many instances. Read: class method definition), you may want to restructure your code, abstracting functionality out to a function or method. Your names descriptive You should always keep your names descriptive. Descriptive names create self-documenting code. You can try to shorten the names in some instances, but readability comes first. Don't be afraid to break a line Shit happens. If you go over 80 characters, and you don't see anyway to reclaim any space in the line - break it. Most languages don't care about line breaks, so break the line into multiple. Don't just pick a random location. Keep things grouped logically, and indent another level when you do break the line.
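As a small illustration of the "don't be afraid to break a line" point (the names here are invented for the example), a long Python call can be split with one extra indent level while keeping the arguments grouped logically:

    def generate_sales_report(region, start_date, end_date, include_returns):
        ...  # body omitted; only the call-site formatting matters here

    monthly_report = generate_sales_report(
        region="EMEA",
        start_date="2013-01-01",
        end_date="2013-03-31",
        include_returns=True,
    )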
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184037", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66324/" ] }
184,047
There's a project I'm working on that is looking to produce a web application that will manage a task list between multiple users. This is a master task list whose task items get distributed by an authorized user. Each user has his/her own account to login and view the tasks they are assigned; it is possible for multiple users to have a single task in-common. I am trying to leave details of the project out of this as I'm grappling more with the overall concept of how to handle the following situations, but if it helps, I'm using Java, EclipseLink, and GWT with RequestFactory implemented. The database is PostgreSQL. So the conceptual problems I am attempting to reconcile are the following: If a single task that is common to multiple users changes in any way e.g. task completed, deleted, etc., the task list of all the users who have this task will be updated. What design patterns are there that assist in implementing this function? Some patterns I have looked at are Observer and Mediator - are there others that should be considered over these? Say there are two users changing the same task at the same time. First, should I allow that situation to happen or should I put a lock on it until one or the other person is done making changes? Second, if I don't put a lock on it, how do I reconcile whose changes to accept? This involves the situation in 1 because user 1 could submit the data and before user 2 receives the updated data, he/she may have gone ahead and submitted his/her's changes. I am really looking for any guiding points, advice, or tips you can provide on how to properly sync data between multiple instances of this web app. I would very much appreciate it!
I think Whiteboard will be your pattern of choice for #1, you should post changes to tasks (or other shared data) in a common place, so all interested parties can see them and DTRT. For #2, you need to look at optimistic locking . Basically, you need to timestamp all your editable records with the last update time. When you try to save the record, you first verify that the record in the database has the same last-updated timestamp as your record. If not, then someone has updated the record and you now have to either get the updated record and inform the user that they need to enter their changes again, or you can try to merge the user's changes into the updated record (which usually turns out to be either simple or impossible).
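Here is a hedged sketch of the optimistic-locking check described above, using plain JDBC since the question mentions Java and PostgreSQL; the table and column names (tasks, title, completed, version) are assumptions for the example. Since the poster is on EclipseLink, annotating the entity's version field with JPA's @Version would give the same behaviour without hand-written SQL.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TaskDao {
        /** Returns false when another user saved the same task first (stale version). */
        public boolean save(Connection connection, long taskId, String title,
                            boolean completed, long versionReadEarlier) throws SQLException {
            String sql = "UPDATE tasks SET title = ?, completed = ?, version = version + 1 "
                       + "WHERE id = ? AND version = ?";
            try (PreparedStatement ps = connection.prepareStatement(sql)) {
                ps.setString(1, title);
                ps.setBoolean(2, completed);
                ps.setLong(3, taskId);
                ps.setLong(4, versionReadEarlier);
                return ps.executeUpdate() == 1;   // 0 rows updated means the data was stale
            }
        }
    }

When save returns false, the application reloads the row, tells the user their copy was out of date, and either retries or merges, as described above.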
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184047", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41249/" ] }
184,089
It might be a strange question, but why is there no implication as a logical operator in many languages (Java, C, C++, Python, Haskell - although since the last one has user-defined operators, it's trivial to add)? I find logical implication much clearer to write (particularly in asserts or assert-like expressions) than negation with or:

    encrypt(buf, key, mode, iv = null) {
        assert (mode != ECB --> iv != null);         // hypothetical implication operator
        assert (mode == ECB || iv != null);          // the equivalent with negation and or
        assert (implies(mode != ECB, iv != null));   // user-defined function
    }
It could be useful to have sometimes, no doubt. Several points argue against such an operator: The characters - and > are valuable, both in isolation and combined. Many languages already use them to mean other things. (And many can't use unicode character sets for their syntax.) Implication is very counter-intuitive, even to logic-minded people such as programmers. The fact that three out of the four truth table entries are true surprises many people. All other logical operators are symmetrical, which would make -> an exception to orthogonality. At the same time, there is an easy workaround: use operators ! and || . Implication is used vastly less often than logical and / or . This is probably why the operator is rare today. Certainly it could become more common, as new dynamic languages use Unicode for everything and operator overloading becomes more fashionable again.
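Since the workaround the answer mentions is just ! and ||, an implies helper is a one-liner. Here it is in Python (one of the languages the question lists), with the assert from the question rewritten against placeholder values of my own:

    def implies(p, q):
        """Material implication: p -> q is false only when p is true and q is false."""
        return (not p) or q

    # The example from the question, with mode/ECB/iv stood in by plain values:
    mode, ECB, iv = "CBC", "ECB", b"0123456789abcdef"
    assert implies(mode != ECB, iv is not None)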
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184089", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/103/" ] }
184,107
My department specializes in converting customer data into our database schema so that they can use our software. Right now, we have C# applications that take an IDataReader (99% of the time it is a SqlDataReader ), perform some cleaning and mapping, insert it into a DataRow object, and then use a SqlBulkCopy to insert it in to our database. Sometimes (especially when the source database contains images as varbinary objects), this process can really bog down with a SQL transfer from the server to the app to just then turn right around and go back to the server. I feel that if we re-wrote some of the conversions as SSIS packages it could speed things up a lot. However the biggest stonewall I keep running into is when my boss, in Not Invented Here fashion, pushes back and says "What if Microsoft drops support for SSIS? We will have all this obsolete code and we will be screwed." This is not the first time that I have hit a "What if they remove that feature...?" answer from my boss. I don't have the time to write the conversion the old way, self-teach myself SSIS, and also write it the new way to demonstrate/test the benefits (None of us have used SSIS so there would be a period where we would have to learn how to use it). What should I do in this situation? Stop pushing the new technology? Wait till he leaves the department (I am the 2nd most Sr. person in the department after him, but it could be years before he quits/retires)? Find a new way of getting him to stop being afraid of 3rd party tools?
I'll take a crack at this from a managerial standpoint, but keep in mind that I'm aware I don't have all of the details. I'll summarize what I see: Mid-level developer, we'll call him "Scott", recommends a rewrite of legacy code into SSIS to improve the performance of important process ProcessA. ProcessA is currently behaving in a functioning state with no known major issues. ProcessA is written with proprietary tools using common or potentially tribal knowledge to maintain. Recommendation to rewrite will require new tools to support. Current staff has no documented experience/knowledge with the new tools. New tools are relatively recent replacements to older tools, and support for these tools may change reasonably within 4 business quarters. From this perspective, all I see is a large outlay of money on the company's part to improve a process that isn't broken . From a technical standpoint I can see the appeal, but when you get right down to it, it just doesn't make business sense to make this change. If you had staff on hand with documented experience with SSIS and benchmarks to show massive improvement to this process (keep in mind, massive improvement MUST equate to $$$), the outcome might be a little different. As it stands now, though, I'd have to agree with management (somewhere a tree just died). If you want to foster the adoption of SSIS and potentially lead into this refactor, you need to gain this experience and training with smaller, less important projects. Provide benchmarks and support for SSIS, and make sure that all of the infrastructure and support is in place before management will even consider making the change. Once you have the tool in use elsewhere, people on the team experienced with its usage, and a business "comfort" factor that support won't change and uproot everything, you will be more likely to sway someone to your viewpoint. Without those, you're barking up the wrong tree with that argument. As stupid as it sounds, sometimes the "best" way isn't the best way. Edit: In response to some updates to the question, I'll post a slight modification to my answer. If the process is experiencing distress of some kind, rewriting it will still be a costly venture. You may want to consider what the cost of fine-tuning the existing code would be against rewriting the package. Consider the impacts not just to software but to any human interfacing processes. When attempting to get management buy-in to a rewrite, it still will always come down to the money. Unless you can show that the current distress is costing money now or will become large in the aggregate, management will still not see the benefit. This cost may not necessarily be financial in nature. If the slow down compromises a system causing downtime, introduces intrusion vectors, or some other "hard to quantify" symptom, you may need to find a way to translate that problem into a monetary risk equivalent. An intrusion vector, for instance, may lead to an intrusion which could result in lost, stolen, or corrupt data. The company could lose reputation or may fail a necessary security audit. The key is to get the manager in question to recognize the quantifiable benefits of the change.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184107", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4274/" ] }
184,173
I have always wondered why Java does not do type inference given that the language is what it is, and its VM is very mature. Google's Go is an example of a language with excellent type inference and it reduces the amount of typing one has to do. Is there any special reason behind this feature not being a part of Java?
Technically speaking, Java does have type inferencing when using generics. With a generic method like public <T> T foo(T t) { return t; } The compiler will analyze and understand that when you write // String foo("bar"); // Integer foo(new Integer(42)); A String is going to be returned for the first call and an Integer for the second call based on what was input as an argument. You will get the proper compile-time checking as a result. Additionally, in Java 7, one can get some additional type inferencing when instantiating generics like so Map<String, String> foo = new HashMap<>(); Java is kind enough to fill in the blank angle brackets for us. Now why doesn't Java support type inferencing as a part of variable assignment? At one point, there was an RFE for type inferencing in variable declarations, but this was closed as "Will not fix" because Humans benefit from the redundancy of the type declaration in two ways. First, the redundant type serves as valuable documentation - readers do not have to search for the declaration of getMap() to find out what type it returns. Second, the redundancy allows the programmer to declare the intended type, and thereby benefit from a cross check performed by the compiler. The contributor who closed this also noted that it just feels "un-java-like", which I am one to agree with. Java's verbosity can be both a blessing and a curse, but it does make the language what it is. Of course that particular RFE was not the end of that conversation. During Java 7, this feature was again considered , with some test implementations being created, including one by James Gosling himself. Again, this feature was ultimately shot down. With the release of Java 8, we now get type inference as a part of lambdas as such: List<String> names = Arrays.asList("Tom", "Dick", "Harry"); Collections.sort(names, (first, second) -> first.compareTo(second)); The Java compiler is able to look at the method Collections#sort(List<T>, Comparator<? super T>) and then the interface of Comparator#compare(T o1, T o2) and determine that first and second should be a String thus allowing the programmer to forgo having to restate the type in the lambda expression.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184173", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55045/" ] }
184,337
I was just writing an if statement with fairly long property names and came across this problem. Let's say we have an if statement like this:

    if(_someViewModelNameThatIsLong.AnotherPropertyINeedToCheck == someValue
       && !_someViewModelNameThatIsLong.ThisIsABooleanPropertyThatIsImportant)
    {
        //Do something
    }

The second property is of a boolean type, and it makes no sense to have a statement like if(booleanValue == true). Is there a better way to emphasize the negation than to put the ! in front? To me it seems like this can easily be overlooked when reading the code and may potentially cause problems with debugging.
if(_someViewModelNameThatIsLong.NeedsMeToDoSomething(someValue)) { //Do something } And then, in the view model object public bool NeedsMeToDoSomething(string someValue) { return AnotherPropertyINeedToCheck == someValue && !ThisIsABooleanPropertyThatIsImportant; } (assuming someValue is a string and isn't known by the model object) This not only emphasises the ! operator, but it makes it more readable generally. Now, in the calling method, I can see one condition, which should be well-named to describe the condition in the context of the calling object. And in the model object, I can see what that means in the context of the model object.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184337", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36984/" ] }
184,365
I just read an interesting article called Getting too cute with c# yield return It made me wonder what the best way is to detect whether an IEnumerable is an actual enumerable collection, or if it's a state machine generated with the yield keyword. For example, you could modify DoubleXValue (from the article) to something like: private void DoubleXValue(IEnumerable<Point> points) { if(points is List<Point>) foreach (var point in points) point.X *= 2; else throw YouCantDoThatException(); } Question 1) Is there a better way to do this? Question 2) Is this something I should worry about when creating an API?
Your question, as I understand it, seems to be based on an incorrect premise. Let me see if I can reconstruct the reasoning: The linked-to article describes how automatically-generated sequences exhibit a "lazy" behaviour, and shows how this can lead to a counter-intuitive result. Therefore I can detect whether a given instance of IEnumerable is going to exhibit this lazy behaviour by checking to see if it is automatically generated. How do I do that? The problem is that the second premise is false. Even if you could detect whether or not a given IEnumerable was the result of an iterator block transformation (and yes, there are ways to do that) it wouldn't help because the assumption is wrong. Let's illustrate why. class M { public int P { get; set; } } class C { public static IEnumerable<M> S1() { for (int i = 0; i < 3; ++i) yield return new M { P = i }; } private static M[] ems = new M[] { new M { P = 0 }, new M { P = 1 }, new M { P = 2 } }; public static IEnumerable<M> S2() { for (int i = 0; i < 3; ++i) yield return ems[i]; } public static IEnumerable<M> S3() { return new M[] { new M { P = 0 }, new M { P = 1 }, new M { P = 2 } }; } private class X : IEnumerable<M> { public IEnumerator<X> GetEnumerator() { return new XEnum(); } // Omitted: non generic version private class XEnum : IEnumerator<X> { int i = 0; M current; public bool MoveNext() { current = new M() { P = i; } i += 1; return true; } public M Current { get { return current; } } // Omitted: other stuff. } } public static IEnumerable<M> S4() { return new X(); } public static void Add100(IEnumerable<M> items) { foreach(M item in items) item.P += 100; } } All right, we have four methods. S1 and S2 are automatically generated sequences; S3 and S4 are manually generated sequences. Now suppose we have: var items = C.Sn(); // S1, S2, S3, S4 S.Add100(items); Console.WriteLine(items.First().P); The result for S1 and S4 will be 0; every time you enumerate the sequence, you get a fresh reference to an M created. The result for S2 and S3 will be 100; every time you enumerate the sequence, you get the same reference to M you got the last time. Whether the sequence code is automatically generated or not is orthogonal to the question of whether the objects enumerated have referential identity or not. Those two properties -- automatic generation and referential identity -- actually have nothing to do with each other. The article you linked to conflates them somewhat. Unless a sequence provider is documented as always proffering up objects that have referential identity , it is unwise to assume that it does so.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184365", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66745/" ] }
184,464
I'm trying to figure out what the Anti-Corruption layer really means. I know that it's a way to transition/work around legacy code or bad APIs. What I don't understand is how it works and what makes it a clean separation from the undesirable layer. I've done some searching, but I can't find any simple examples or explanations, so I'm looking for someone who understands it and can explain it with simple examples. An answer that would satisfy my question should be simple (not necessarily short), and provide understandable examples of implementation and use. See this question , for my use case.
Imagine you have to use someone else's code that is designed as shown below: class Messy { String concat(String param, String str) { /* ... */ } boolean contains(String param, String s) { /* ... */ } boolean isEmpty(String param) { /* ... */ } boolean matches(String param, String regex) { /* ... */ } boolean startsWith(String param, String prefix) { /* ... */ } } Now imagine you find out that your code that depends on it looks like the following: String process(String param) { Messy messy = new Messy(); if (messy.contains(param, "whatever")) { return messy.concat(param, "-contains"); } if (messy.isEmpty(param)) { return messy.concat(param, "-empty"); } if (messy.matches(param, "[whatever]")) { return messy.concat(param, "-matches"); } if (messy.startsWith(param, "whatever")) { return messy.concat(param, "-startsWith"); } return messy.concat(param, "-whatever"); // WTF do I really need to repeat bloody "param" 9 times above? } ...and that you want to make it easier to use, in particular, to get rid of repetitive usage of parameters that just aren't needed for your application. Okay, so you start building an anti-corruption layer. First thing is to make sure that your "main code" doesn't refer to Messy directly. For example, you arrange dependency management in such a way that trying to access Messy fails to compile. Second, you create a dedicated "layer" module that is the only one accessing Messy and expose it to your "main code" in a way that makes better sense to you. Layer code would look like the following: class Reasonable { // anti-corruption layer String param; Messy messy = new Messy(); Reasonable(String param) { this.param = param; } String concat(String str) { return messy.concat(param, str); } boolean contains(String s) { return messy.contains(param, s); } boolean isEmpty() { return messy.isEmpty(param); } boolean matches(String regex) { return messy.matches(param, regex); } boolean startsWith(String prefix) { return messy.startsWith(param, prefix); } } As a result, your "main code" does not mess with Messy , using Reasonable instead, about as follows: String process(String param) { Reasonable reasonable = new Reasonable(param); // single use of "param" above and voila, you're free if (reasonable.contains("whatever")) { return reasonable.concat("-contains"); } if (reasonable.isEmpty()) { return reasonable.concat("-empty"); } if (reasonable.matches("[whatever]")) { return reasonable.concat("-matches"); } if (reasonable.startsWith("whatever")) { return reasonable.concat("-startsWith"); } return reasonable.concat("-whatever"); } Note there is still a bit of a mess messing with Messy but this is now hidden reasonably deep inside Reasonable , making your "main code" reasonably clean and free of corruption that would be brought there by direct usage of Messy stuff. Above example is based on how Anticorruption Layer is explained at c2 wiki: If your application needs to deal with a database or another application whose model is undesirable or inapplicable to the model you want within your own application, use an AnticorruptionLayer to translate to/from that model and yours. Note example is intentionally made simple and condensed to keep explanation brief. If you have a larger mess-of-API to cover behind the anti-corruption layer, same approach applies: first, make sure your "main code" doesn't access corrupted stuff directly and second, expose it in a way that is more convenient in your usage context. 
When "scaling" your layer beyond a simplified example above, take into account that making your API convenient is not necessarily a trivial task. Invest an effort to design your layer the right way , verify its intended use with unit tests etc. In other words, make sure that your API is indeed an improvement over one it hides, make sure that you don't just introduce another layer of corruption. For the sake of completeness, notice subtle but important difference between this and related patterns Adapter and Facade . As indicated by its name, anticorruption layer assumes that underlying API has quality issues (is "corrupted") and intends to offer a protection of mentioned issues. You can think of it this way: if you can justify that library designer would be better off exposing its functionality with Reasonable instead of Messy , this would mean you're working on anticorruption layer, doing their job, fixing their design mistakes. As opposed to that, Adapter and Facade do not make assumptions on the quality of underlying design. These could be applied to API that is well designed to start with, just adapting it for your specific needs. Actually, it could even be more productive to assume that patterns like Adapter and Facade expect underlying code to be well designed. You can think of it this way: well designed code shouldn't be too difficult to tweak for particular use case. If it turns out that design of your adapter takes more effort than expected, this could indicate that underlying code is, well, somehow "corrupted". In that case, you can consider splitting the job to separate phases: first, establish an anticorruption layer to present underlying API in a properly structured way and next, design your adapter / facade over that protection layer.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184464", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53251/" ] }
184,480
I have a Rails platform that I maintain. It has a lot of different web applications built on top of it. However, now a client is asking for an API so that they can keep users on their site, but take advantage of some of the automated tasks we have. The platform is used to build insurance applications and allows for their purchase online, as well as providing ways to download documentation related to your policy. So my question when building the API is this: when I have to do a lot of things, like validate, create a user, a user profile, and a policy, pretty much at the same time, should I make 4 separate API calls and make the client build 4 calls on their side? Or should I have one call that accepts a lot of parameters, validates the client, and creates all 3 of those things at the same time, simplifying things for the client? The client, in this case, gets all of the required information at the same time, so it's not like there is a natural flow in their application where it pauses and they can make an API call to my platform. Having been on the client side using many APIs before, my gut is to make it as simple for the client as possible and have them make just one call. However, this is leading to rather large functions in the API, which I'm not a fan of either. How do you suggest I tackle this? As a note, I am not very confident in the client's ability to implement a complicated API on their side.
How about doing both? Have a " low level " (so to speak) API that exposes functions of the system and have another "layer" that exposes services that a client might want to do. This layer would use the necessary low level API's required but those are still exposed if the client wants them. UPDATE: To also include some of the great points and comments made by others. Consider if the client is ever going to need to call one of the smaller API methods e.g. Is it feasible to call createUserProfile without also calling createUser? If not then don't expose that method. Thanks NoobsArePeople2 A simple but excellent point. Key point with exposing something in an API - you can't ever unexpose it. Expose the minimum necessary to function and then expand rather than exposing everything and... well, then its naked and making changes can be embarrassing and awkward . Thanks MichaelT
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184480", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45992/" ] }
184,574
It seems like, when reading something like this Wikipedia article about "pure functions" , they list Today() as an example of an impure function but it seems pretty pure to me. Is it because there is no formal input argument? Why is the actual time of day not treated as the "input to the function" in which case if you gave it the same input, i.e. executed today() twice at the same time, or traveled back in time to execute it again (maybe a hypothetical :) ), the output would be the same time. Today() never gives you a random number. it always gives you the time of day. The Wikipedia article says "different times it will yield different results" but that's like saying for different x sin(x) will give you different ratios. And sin(x) is their example of a pure function.
Is it because there is no formal input argument? It is because the output depends on something that is not an input, namely the current time. Why is the actual time of day not treated as the "input to the function" Because you didn't pass it as a parameter. If you did pass it in as a parameter, the function would become an identity function on dates, which is pretty useless. The entire point of a Today() function is to output something that depends on an external and constantly changing value (time). The advantage of pure functions is that their behavior is absolutely reproducible and deterministic, making it easy to have formal proofs and hard guarantees. They always do the same thing. Today() is pretty much the opposite: it always (allowing for time granularity) does something different.
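One common way to work with this distinction (my own small Python illustration, not something the answer spells out; the function names are invented) is to keep the impure clock read at the edge of the program and write the interesting logic as a pure function of an explicit date:

    import datetime

    def is_weekend(day):                      # pure: same date in, same answer out
        return day.weekday() >= 5

    def report_for_today():                   # impure: depends on the hidden clock
        return "weekend" if is_weekend(datetime.date.today()) else "weekday"

    # The pure part is trivially testable and reproducible:
    assert is_weekend(datetime.date(2013, 1, 5))      # a Saturday
    assert not is_weekend(datetime.date(2013, 1, 7))  # a Monday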
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184574", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33180/" ] }
184,654
My specific case here is that the user can pass a string into the application; the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person but they may say their age is "apple". Correct behavior in that case is to roll back the transaction and tell the user an error occurred and they'll have to try again. There may be a requirement to report every error we can find in the input, not just the first. In this case, I argued we should throw an exception. He disagreed, saying, "Exceptions should be exceptional: It's expected that the user may input invalid data, so this isn't an exceptional case." I didn't really know how to argue that point, because by definition of the word, he seems to be right. But it's my understanding that this is why exceptions were invented in the first place. It used to be that you had to inspect the result to see if an error occurred. If you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check at one of these levels, the code could accidentally proceed and save invalid data (for example). That seems more error prone to me. Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?
Exceptions were invented to help make error handling easier with less code clutter. You should use them in cases when they make error handling easier with less code clutter. This "exceptions only for exceptional circumstances" business stems from a time when exception handling was deemed an unacceptable performance hit. That's no longer the case in the vast majority of code, but people still spout the rule without remembering the reason behind it. Especially in Java, which is maybe the most exception-loving language ever conceived, you shouldn't feel bad about using exceptions when it simplifies your code. In fact, Java's own Integer class doesn't have a means to check if a string is a valid integer without potentially throwing a NumberFormatException . Also, although you can't rely just on UI validation, keep in mind if your UI is designed properly, such as using a spinner for entering short numerical values, then a non-numerical value making it into the back end truly would be an exceptional condition.
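To connect this to the concrete case in the question (reporting every problem, not just the first), here is a small hedged Java sketch; the class name, method name and messages are invented for the example. The parsing exception is caught at the validation boundary and turned into one entry in a list of errors, so the caller can roll back and show all of them at once:

    import java.util.ArrayList;
    import java.util.List;

    public class PersonInput {
        /** Returns every problem found, so the user can fix them all at once. */
        public static List<String> validate(String name, String ageText) {
            List<String> errors = new ArrayList<>();
            if (name == null || name.trim().isEmpty()) {
                errors.add("Name must not be empty.");
            }
            try {
                int age = Integer.parseInt(ageText);
                if (age < 0 || age > 150) {
                    errors.add("Age must be between 0 and 150.");
                }
            } catch (NumberFormatException e) {   // "apple" ends up here
                errors.add("Age must be a whole number, got: " + ageText);
            }
            return errors;                        // an empty list means the input is valid
        }
    }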
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184654", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56945/" ] }
184,834
We have reached the point in our project where we have almost a thousand tests, and people have stopped bothering to run them before doing a check-in because it takes so long. At best they run the tests that are relevant to the piece of code they changed, and at worst they simply check it in without testing. I believe this problem is due to the fact that the solution has grown to 120 projects (we usually do much smaller projects, and this is only the second time we do TDD properly), and the build + test time has grown to about two to three minutes on the lesser machines. How do we lower the run time of the tests? Are there techniques? Faking more? Faking less? Maybe the bigger integration tests shouldn't run automatically when running all the tests? Edit: as a response to several of the answers, we already use CI and a build server; this is how I know the tests fail. The problem (actually a symptom) is that we keep getting messages about failed builds. Running partial tests is something that most people do, but not all. And regarding the tests, they are actually pretty well made; they use fakes for everything and there is no IO at all.
A possible solution would be to move the testing portion from the development machines to a continuous integration setup ( Jenkins for example) using version control software of some flavor ( git , svn , etc...). When new code has to be written the given developer will create a branch for whatever they are doing in the repository. All work will be done in this branch and they can commit their changes to the branch at any time without messing up the main line of code. When the given feature, bug fix, or whatever else they are working on has been completed that branch can be merged back into the trunk (or however you prefer to do it) where all unit tests are run. If a test fails the merge is rejected and the developer is notified so they can fix the errors. You can also have your CI server run the unit tests on each feature branch as commits are made. This way the developer can make some changes, commit the code, and let the server run the tests in the background while they continue to work on additional changes or other projects. A great guide to one way of doing such a setup can be found here (git specific but should work for other version control systems): http://nvie.com/posts/a-successful-git-branching-model/
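In day-to-day terms the flow looks something like the following (the branch names are only examples, following the git-flow model linked above); the point is that the full suite runs on the CI server for every push, so nobody has to wait for it locally:

    $ git checkout -b feature/faster-import develop   # branch off for the new work
    $ git commit -am "Handle empty input files"       # commit locally as often as you like
    $ git push origin feature/faster-import           # CI server builds and runs all tests
    # ...fix anything the CI build reports, then, once it is green:
    $ git checkout develop
    $ git merge --no-ff feature/faster-import         # merge back, as in the linked model
    $ git push origin develop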
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184834", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21598/" ] }
184,862
I have run into a corner-case problem with the general guidance of: nouns for variables verbs for functions Specifically, I have a case where the word is ambiguous - it can be either a verb or a noun. And in some cases when we're discussing the application, it will be used both ways in the same sentence. My intent is to make sure the program will remain readable to future developers as well as myself when I return to sections of code months later. One of the examples is with a battery . A battery has a charge and you can also charge() a battery. I think that having both Battery.Charge and Battery.Charge(value) will be confusing to future developers. My current solution is to simply pick a different word for one or both of those cases (the variable and the function). My problem with that approach is the Battery object's variable and function for charge won't align with design discussions involving the Battery . My question is if there is another / better way to handle this conflict in naming convention? Some additional reading on the subject. None really addressed the particular of my question. Meaningful concise method naming guidelines Naming conventions for variables The selected answer from https://softwareengineering.stackexchange.com/questions/14169/what-naming-guidelines-do-you-follow
In similar situations I try to find synonyms. In this case I would use "recharge" for the verb. The "re-" is slightly redundant, but the meaning is clear. Using the simple "charge" for the remaining charge in the battery is ambiguous because it doesn't specify any physical units. I would prefer "availableAmpHours", "hoursUntilRecharge" or something similar. The units will depend on whatever is convenient for the application. My personal preference is to use verbs only for functions that change state. I use nouns for non-mutating functions. I suppose it depends on your point of view. At the machine level, non-mutating functions do something, but at the model level, they don't.
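A tiny sketch of how that naming might look (Python used just for illustration; the class is not from the question, and the choice of amp-hours as the unit is an assumption):

    class Battery:
        def __init__(self, capacity_amp_hours):
            self.capacity_amp_hours = capacity_amp_hours
            self.available_amp_hours = 0.0          # noun, with explicit units: the state

        def recharge(self, amp_hours):              # verb: the action that mutates state
            self.available_amp_hours = min(self.capacity_amp_hours,
                                           self.available_amp_hours + amp_hours)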
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184862", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
184,874
Ken Thompson Hack (1984) Ken Thompson outlined a method for corrupting a compiler binary (and other compiled software, like a login script on a *nix system) in 1984. I was curious to know if modern compilation has addressed this security flaw or not. Short description: Re-write compiler code to contain 2 flaws: When compiling its own binary, the compiler must compile these flaws When compiling some other preselected code (login function) it must compile some arbitrary backdoor Thus, the compiler works normally - when it compiles a login script or similar, it can create a security backdoor, and when it compiles newer versions of itself in the future, it retains the previous flaws - and the flaws will only exist in the compiler binary so are extremely difficult to detect. Questions: I could not find any answers to these on the web: How does this relate to just-in-time compilation? Are functions like the program handling logins on a *nix system compiled when they are run? Is this still a valid threat, or have there been developments in the security of compilation since 1984 that prevent this from being a significant issue? Does this affect all languages? Why do I want to know? I came across this while doing some homework, and it seemed interesting but I lack the background to understand in a concrete way whether this is a current issue, or a solved issue. Reference Material Overview Some code
This hack has to be understood in context. It was published at a time and in a culture where Unix running on all kinds of different hardware was the dominant system. What made the attack so scary was that the C compiler was the central piece of software for these systems. Almost everything in the system went through the compiler when it was first installed (binary distributions were rare due to the heterogenous hardware). Everyone compiled stuff all the time. People regularly inspected source code (they often had to make adjustments to get it to compile at all), so having the compiler inject backdoors seemed to be a kind of "perfect crime" scenario where you could not be caught. Nowadays, hardware is much more compatible and compilers therefore have a much smaller role in the day-to-day operation of a system. A compromised compiler is not the most scary scenario anymore - rootkits and a compromised BIOS are even harder to detect and get rid of.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184874", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
184,909
People advise me to "find a job doing something that you are good at", but the truth is that I don't believe I am good with anything other than just writing code. I don't think that I can design or structure a program though. Don't get me wrong, I'm not the kind of programmer who can't write a fizz buzz program; I'm more like the kind of programmer who can't do much besides getting the job done. It's not that I have a "do the bare minimum" attitude; it's that I'm painfully aware of my limitations as a programmer. For example, implementing a quick sort algorithm is something that I could do only by relying on rote memorization (that is, if I ever find the motivation to spend a good chunk of my day trying to commit the entire algorithm to memory and not just go like "umm, I see, that makes sense, I can see the logic..."). When it comes to structuring and designing an application I feel I'm just as helpless. Since I am unable to see the right answer at the start I just make a judgement call which nine times out of ten I later regret. I still manage to obtain job promotions and praise from my coworkers, so others don't necessarily share my opinions. Objectively speaking though, I simply have neither the learning ability nor the sheer brain power to realistically aspire to be anything more than a 'slightly above average' programmer. I wonder whether I actually have an ethical duty to make room for more talented people and find myself another kind of job, even if I'm fairly confident that I would not find another role I am better qualified for. My question then is "Do average programmers have a place on a team?"
I wonder whether I actually have an ethical duty to make room for more talented people and find myself another kind of job No, you don't. If anything, you have an ethical duty to take care of yourself and your dependents. There's no shortage of jobs for talented developers, and there's no reason that you should put the interests of people that you probably don't even know ahead of your own. Indeed, this very site is full of questions from employers about how to find qualified developers. Your employer may legitimately feel lucky to have you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184909", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/67089/" ] }
184,932
I have a question regarding workflow in web development. I'm building my project in Symfony, and it's in Git. Right now, I have three environments, dev (local), staging and prod. The project itself is hosted on GitHub in a private repo. I'm wondering what a good way is to update the staging & production environments. Should I somehow setup a push based system so I can push the prod/staging branch directly to my server, and use Git hooks to regenerate cache files and run database migrations? Right now on similar projects I use a manual pull system, where I SSH to the server, pull down changes from the prod Git branch and manually run cache/migrations. This obviously is not ideal.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184932", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3382/" ] }
184,945
There must be a good reason why the Java designers didn't allow any state to be defined in interfaces. Can you please shed some light on this aspect of the design decision?
An interface is a contract specifying what its implementer promises to be able to do. It does not need to specify state because state is an implementation detail and only serves to constrain implementers in how this contract is fulfilled. If you want to specify state you might want to rethink you use of interfaces and look at abstract base classes instead.
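A small hedged Java sketch of that split (the type names are invented): the interface states only what an implementer can do, while an abstract base class is where shared state and a default implementation can live. Even Java 8 default methods only add behaviour to interfaces, not fields.

    // The contract: what an implementer promises to be able to do, nothing about how.
    interface Rechargeable {
        void recharge(double ampHours);
        double availableAmpHours();
    }

    // One way to fulfil the contract; the state is an implementation detail kept here.
    abstract class AbstractBattery implements Rechargeable {
        protected double ampHours;                  // state lives in the class, not the interface

        @Override
        public void recharge(double amount) { ampHours += amount; }

        @Override
        public double availableAmpHours() { return ampHours; }
    }

    class CarBattery extends AbstractBattery { }    // inherits both the behaviour and the state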
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184945", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60189/" ] }
184,994
Empty glasses are arranged in the following order: when you pour liquid into the 1st glass and it's full, the extra liquid flows into glasses 2 and 3 in equal quantities. When glass 2 is full, the extra liquid flows into glasses 4 and 5, and so on. Given N liters of liquid, and a maximum capacity of 1 liter per glass, give the amount of liquid present in any glass after the N liters are poured into the top glass, by filling out the function getWaterInBucket(int N, int X), where X is the glass number. So, for example, if I have 4 liters at the beginning and I want to find the water in glass 3, the call is getWaterInBucket(4, 3). How do I solve this programmatically? I tried to find a math solution using Pascal's triangle. This did not work. I considered it to be a tree, so I could add a parameter like getWaterInBucket(BTree root, int N, int X) and then try some recursive solution for each level, but extra parameters are not allowed in this problem. Is there something obvious, some trick?
You just need to simulate the pouring. Something like the following, where left_glass and right_glass map a glass to the two glasses directly below it in the triangle (glasses are 0-indexed here, while the question numbers them from 1):

    #define NUM_GLASSES 15   /* 5 rows; make this large enough to absorb N liters */

    /* The left of the two glasses directly below `glass` in the next row. */
    static int left_glass(int glass) {
        int row = 0;
        while ((row + 1) * (row + 2) / 2 <= glass)
            row++;
        int pos = glass - row * (row + 1) / 2;
        return (row + 1) * (row + 2) / 2 + pos;
    }

    static int right_glass(int glass) {
        return left_glass(glass) + 1;
    }

    static void pour(double glasses[], int glass, double quantity) {
        glasses[glass] += quantity;
        if (glasses[glass] > 1.0) {
            double extra = glasses[glass] - 1.0;
            glasses[glass] = 1.0;
            pour(glasses, left_glass(glass),  extra / 2);
            pour(glasses, right_glass(glass), extra / 2);
        }
    }

    /* N liters poured into glass 1; X is the 1-based glass number asked about. */
    double getWaterInBucket(int N, int X) {
        double glasses[NUM_GLASSES] = { 0 };
        pour(glasses, 0, N);
        return glasses[X - 1];
    }

As it stands, this is not a tree. Different glasses overflow into the same glass below them (glass 2 and glass 3 both feed glass 5, for example), and that is exactly what prevents it from being a tree.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/184994", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57174/" ] }
185,046
I've recently been going through the Learn You a Haskell for Great Good guide and as practice I wanted to solve Project Euler Problem 5 with it, which specifies: What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20? I decided to first write a function determining whether a given number is divisible by these numbers:

    divisable x = all (\y -> x `mod` y == 0) [1..20]

Then I calculated the smallest one using head:

    sm = head [x | x <- [1..], divisable x]

And finally wrote the line to display the result:

    main = putStrLn $ show $ sm

Unfortunately this took about 30 seconds to finish. Doing the same thing with the numbers 1 to 10 yields a result almost immediately, but then again the result is much smaller than the solution for 1 to 20. I solved it earlier in C and there the result for 1 to 20 was also calculated almost instantly. This leads me to believe I'm misunderstanding how to interpret this problem for Haskell. I looked through other people's solutions and found this:

    main = putStrLn $ show $ foldl1 lcm [1..20]

Fair enough, this uses a built-in function, but why is the end result so much slower when doing it yourself? The tutorials out there tell you how to use Haskell, but I don't see much help with transforming algorithms into fast code.
First you need to make sure you have an optimized binary, before thinking the language is the problem. Read the Profiling and optimization chapter in Real World Haskell. It is worth noting that in most cases the high-level nature of the language costs you at least some of the performance.

However, note that the other solution is not faster because it uses a built-in function, but simply because it uses a much faster algorithm: to find the least common multiple of a set of numbers you only need to compute a few GCDs. Compare this with your solution, which cycles through all of the numbers from 1 to the answer (foldl1 lcm [1..20]). If you try with 30, the difference between the runtimes will be even greater.

Take a look at the complexities: your algorithm has O(ans*N) runtime, where ans is the answer and N is the number up to which you are checking for divisibility (20 in your case). The other algorithm executes lcm N times, and lcm(a,b) = a*b/gcd(a,b), where GCD has complexity O(log(max(a,b))). Therefore the second algorithm has complexity O(N*log(ans)). You can judge for yourself which is faster.

So, to summarize: your problem is your algorithm, not the language.

Note that there are specialized languages that are both functional and focused on math-heavy programs, like Mathematica, which for math-focused problems is probably faster than almost anything else. It has a very optimized library of functions, and it supports the functional paradigm (admittedly it also supports imperative programming).
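For readers more comfortable outside Haskell, here is a rough sketch of that GCD-based algorithm (what the foldl1 lcm [1..20] one-liner does), written in Java purely for illustration; the structure is the same in any language:

    public class SmallestMultiple {

        static long gcd(long a, long b) {
            while (b != 0) {
                long t = a % b;
                a = b;
                b = t;
            }
            return a;
        }

        static long lcm(long a, long b) {
            return a / gcd(a, b) * b;   // divide first to keep intermediate values small
        }

        public static void main(String[] args) {
            long answer = 1;
            for (int i = 2; i <= 20; i++) {
                answer = lcm(answer, i);   // N steps, each costing one O(log ans) gcd
            }
            System.out.println(answer);    // prints 232792560
        }
    }

Each loop iteration is one fold step, which is why the runtime grows with N*log(ans) rather than with the size of the answer itself.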
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185046", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57949/" ] }
185,104
I recently started learning to write code, and in my book I came across this question: "Why is a Boolean value stored as a byte inside of a computer when it only requires one bit?" Can someone shed more light on this question?
It has to do with what the CPU can easily address. For example, on an x86 processor there is an eax register (32 bits), an ax register (16 bits) and an ah register (8 bits), but no single-bit register. So in order to use a single bit, the CPU has to do a read/modify/write to change the value. If it is stored as a byte, a single read or write can be used to inspect/change the value. Additionally, one might wonder if it would be better to use a single bit vs a full byte; after all, a byte wastes 7 bits. Unless space is a constraint, one should go for the byte because, at least on x86 and I think on other architectures too, there is usually an instruction to quickly set/clear a bool, which is much quicker than the read/modify/write of a single bit. From personal measurements I have seen the read/modify/write method be 5x slower than the single-instruction method.
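The same trade-off shows up at the language level. As a hedged illustration in Java (the class name is invented): a boolean[] typically spends a whole byte per flag in exchange for simple loads and stores, while BitSet packs 64 flags into each long at the cost of read-modify-write updates.

    import java.util.BitSet;

    public class FlagStorage {
        public static void main(String[] args) {
            int n = 1_000_000;

            // One byte per element in a typical JVM: each flag is read or written
            // with a single plain load/store, but 7 of its 8 bits go unused.
            boolean[] flags = new boolean[n];
            flags[42] = true;

            // Packed bits: roughly 1/8 of the memory, but every update is a
            // read-modify-write on the 64-bit word containing the bit.
            BitSet packed = new BitSet(n);
            packed.set(42);

            System.out.println(flags[42] + " " + packed.get(42));
        }
    }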
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185104", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
185,238
I'm not a C++ guy, but I'm forced to think about this. Why is multiple inheritance possible in C++, but not in C#? (I know of the diamond problem, but that's not what I'm asking here.) How does C++ resolve the ambiguity of identical method signatures inherited from multiple base classes? And why is the same design not incorporated into C#?
Why is multiple inheritance possible in C++, but not in C#?

I think (without having a hard reference) that in Java they wanted to limit the expressiveness of the language to make it easier to learn, and because code using multiple inheritance is more often too complex for its own good than not. And because full multiple inheritance is a lot more complicated to implement, leaving it out simplified the virtual machine a lot too (multiple inheritance interacts especially badly with the garbage collector, because it requires keeping pointers into the middle of an object, at the beginning of each base). And when designing C#, I think they looked at Java, saw that full multiple inheritance indeed wasn't missed much, and elected to keep things simple as well.

How does C++ resolve the ambiguity of identical method signatures inherited from multiple base classes?

It does not. There is a syntax to call a base class method from a specific base explicitly, but there is no way to override only one of the virtual methods, and if you don't override the method in the subclass, it's not possible to call it without specifying the base class.

And why is the same design not incorporated into C#?

There is nothing to incorporate.

Since Giorgio mentioned interface extension methods in comments, I'll explain what mixins are and how they are implemented in various languages. Interfaces in Java and C# are limited to declaring methods only, and the methods have to be implemented in each class that inherits the interface. There is however a large class of interfaces where it would be useful to provide default implementations of some methods in terms of others. A common example is comparable (in pseudo-language):

    mixin IComparable {
        public bool operator<(IComparable r) = 0;
        public bool operator>(IComparable r)  { return r < this; }
        public bool operator<=(IComparable r) { return !(r < this); }
        public bool operator>=(IComparable r) { return !(r > this); }
        public bool operator==(IComparable r) { return !(r < this) && !(r > this); }
        public bool operator!=(IComparable r) { return r < this || r > this; }
    };

The difference from a full class is that a mixin can't contain any data members. There are several options for implementing this. Obviously multiple inheritance is one. But multiple inheritance is rather complicated to implement, and it's not really needed here. Instead, many languages implement this by splitting the mixin into an interface, which is implemented by the class, and a repository of method implementations that are either injected into the class itself or placed in a generated intermediate base class. This is implemented in Ruby and D, will be implemented in Java 8, and can be implemented manually in C++ using the curiously recurring template pattern. The above, in CRTP form, looks like:

    template <typename Derived>
    class IComparable {
        const Derived &_d() const { return static_cast<const Derived &>(*this); }
    public:
        bool operator>(const IComparable &r) const  { return r._d() < _d(); }
        bool operator<=(const IComparable &r) const { return !(r._d() < _d()); }
        ...
    };

and is used like:

    class Concrete : public IComparable<Concrete> {
        ...
    };

This does not require anything to be declared virtual as a regular base class would, so if the interface is used in templates it leaves useful optimization options open.

Note that in C++ this would probably still be inherited as a second parent, but in languages that don't allow multiple inheritance it's inserted into the single inheritance chain, so it's more like

    template <typename Derived, typename Base>
    class IComparable : public Base {
        ...
    };

    class Concrete : public IComparable<Concrete, Base> {
        ...
    };

The compiler implementation may or may not avoid the virtual dispatch.

A different implementation was selected in C#. In C# the implementations are static methods of a completely separate class, and the method call syntax is appropriately interpreted by the compiler if a method of the given name does not exist but an "extension method" is defined. This has the advantage that extension methods can be added to an already compiled class, and the disadvantage that such methods can't be overridden, e.g. to provide an optimized version.
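Since the answer mentions that Java 8 will support this, here is a hedged sketch of how the same comparable mixin can be expressed with Java 8 default interface methods; the names Ordered and Money are invented, and this is one possible shape rather than a canonical API:

    // One abstract method; everything else is derived from it,
    // and the interface still holds no state.
    interface Ordered<T extends Ordered<T>> {

        boolean lessThan(T other);   // the single method implementers provide

        default boolean greaterThan(T other)        { return other.lessThan(self()); }
        default boolean lessThanOrEqual(T other)    { return !greaterThan(other); }
        default boolean greaterThanOrEqual(T other) { return !lessThan(other); }
        default boolean equalTo(T other)            { return !lessThan(other) && !greaterThan(other); }

        @SuppressWarnings("unchecked")
        default T self() { return (T) this; }
    }

    class Money implements Ordered<Money> {
        private final long cents;

        Money(long cents) { this.cents = cents; }

        @Override
        public boolean lessThan(Money other) { return cents < other.cents; }
    }

As with the CRTP version, the derived methods are resolved without any per-object state, which is exactly what distinguishes a mixin from a full base class.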
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185238", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/79921/" ] }
185,345
I have not personally come across a situation where I've needed to use the WeakReference type in .Net, but the popular belief seems to be that it should be used in caches. Dr Jon Harrop gave a very good case against the use of WeakReferences in caches in his answer to this question. I've also often heard AS3 developers talk about using weak references to save on memory footprint, but based on the conversations I've had it seems to add complexity without necessarily accomplishing the intended goal, and the runtime behaviour is rather unpredictable. So much so that many simply give up on it and instead manage the memory usage more carefully/optimize their code to be less memory intensive (or make the trade-off of more CPU cycles and a smaller memory footprint). Dr Jon Harrop also pointed out in his answer that the .Net weak references are not soft, and there's an aggressive collection of weak references at gen0. According to MSDN, long weak references give you the potential to recreate an object, but the state of the object remains unpredictable. Given these characteristics, I can't think of a situation where weak references would be useful; perhaps someone could enlighten me?
I've found legitimate practical applications of weak references in the following three real-world scenarios that actually happened to me personally:

Application 1: Event handlers

You're an entrepreneur. Your company sells a spark lines control for WPF. Sales are great but support costs are killing you. Too many customers are complaining of CPU hogging and memory leaks when they scroll through screens full of spark lines. The problem is their app is creating new spark lines as they come into view, but data binding is preventing the old ones from being garbage collected. What do you do? Introduce a weak reference between the data binding and your control so that data binding alone will no longer prevent your control from being garbage collected. Then add a finalizer to your control that tears down the data binding when it gets collected.

Application 2: Mutable graphs

You're the next John Carmack. You've invented an ingenious new graph-based representation of hierarchical subdivision surfaces that makes Tim Sweeney's games look like a Nintendo Wii. Obviously I'm not going to tell you exactly how it works, but it all centers on this mutable graph where the neighbors of a vertex can be found in a Dictionary<Vertex, SortedSet<Vertex>>. The graph's topology keeps changing as the player runs around. There's only one problem: your data structure is shedding unreachable subgraphs as it runs and you need to remove them or you'll leak memory. Luckily you're a genius, so you know there is a class of algorithms specifically designed to locate and collect unreachable subgraphs: garbage collectors! You read Richard Jones' excellent monograph on the subject but it leaves you perplexed and concerned about your imminent deadline. What do you do? Simply by replacing your Dictionary with a weak hash table you can piggyback the existing GC and have it automatically collect your unreachable subgraphs for you! Back to leafing through Ferrari adverts.

Application 3: Decorating trees

You're hanging from the ceiling of a cylindrical room at a keyboard. You've got 60 seconds to sift through some BIG DATA before someone finds you. You came prepared with a beautiful stream-based parser that relies upon the GC to collect fragments of AST after they've been analyzed. But you realise you need extra metadata on each AST node and you need it fast. What do you do? You could use a Dictionary<Node, Metadata> to associate metadata with each node but, unless you clear it out, the strong references from the dictionary to old AST nodes will keep them alive and leak memory. The solution is a weak hash table which keeps only weak references to keys and garbage collects key-value bindings when the key becomes unreachable. Then, as AST nodes become unreachable they are garbage collected and their key-value binding is removed from the dictionary, leaving the corresponding metadata unreachable so it too gets collected. Then all you have to do after your main loop has terminated is slide back up through the air vent, remembering to replace it just as the security guard comes in.

Note that in all three of these real-world applications that actually happened to me I wanted the GC to collect as aggressively as possible. That's why these are legitimate applications. Everybody else is wrong.
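The "weak hash table" in the third application maps onto real collection types: .NET has ConditionalWeakTable for attaching data to objects without extending their lifetime, and Java's WeakHashMap plays the same role. As a hedged sketch (in Java, with invented class names):

    import java.util.Map;
    import java.util.WeakHashMap;

    class AstNode {
        // ... whatever the parser produces ...
    }

    class Metadata {
        final String note;
        Metadata(String note) { this.note = note; }
    }

    class AstDecorator {
        // Keys are held only weakly: once an AstNode is unreachable elsewhere,
        // its entry (and the Metadata hanging off it) becomes collectable, so
        // decorating nodes never keeps them alive.
        private final Map<AstNode, Metadata> metadata = new WeakHashMap<>();

        void annotate(AstNode node, String note) {
            metadata.put(node, new Metadata(note));
        }

        Metadata lookup(AstNode node) {
            return metadata.get(node);
        }
    }

The one trap to note: if the value object held a strong reference back to its key, that reference would keep the key alive and defeat the point.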
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185345", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/79999/" ] }
185,380
As we all know, IPv4 was eventually followed by IPv6. How did this transition happen? In particular, was there ever an IPv5, or is there some other logic behind naming this version of IP as IPv6?
According to Wikipedia, Internet Protocol Version 5 was used by the Internet Stream Protocol , an experimental streaming protocol. The second version (of Internet Stream Protocol), known variously as ST-II or ST2, distinguishes its own packets with an Internet Protocol version number 5, although it was never known as IPv5. The Internet Stream Protocol family was never introduced for public use, but many of the concepts available in ST are similar to later Asynchronous Transfer Mode protocols and can be found in Multiprotocol Label Switching (MPLS). They also presaged Voice over IP.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185380", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80025/" ] }
185,406
Reading 21st Century C I arrived at chapter 6, at the section "Marking Exceptional Numeric Values with NaNs", where it explains the use of the bits in the mantissa to store some arbitrary bit patterns, to use them as markers or pointers (the book mentions that WebKit uses this technique). I'm not really sure I understood the utility of this technique, which I see as a hack (it relies on the hardware not caring about the value of the mantissa in a NaN), but coming from a Java background I'm not used to the roughness of C. Here is the snippet of code that sets and reads a marker in a NaN:

    #include <stdio.h>
    #include <math.h> //isnan

    double ref;

    double set_na(){
        if (!ref) {
            ref=0/0.;
            char *cr = (char *)(&ref);
            cr[2]='a';
        }
        return ref;
    }

    int is_na(double in){
        if (!ref) return 0; //set_na was never called==>no NAs yet.
        char *cc = (char *)(&in);
        char *cr = (char *)(&ref);
        for (int i=0; i< sizeof(double); i++)
            if (cc[i] != cr[i]) return 0;
        return 1;
    }

    int main(){
        double x = set_na();
        double y = x;
        printf("Is x=set_na() NA? %i\n", is_na(x));
        printf("Is x=set_na() NAN? %i\n", isnan(x));
        printf("Is y=x NA? %i\n", is_na(y));
        printf("Is 0/0 NA? %i\n", is_na(0/0.));
        printf("Is 8 NA? %i\n", is_na(8));
    }

It prints:

    Is x=set_na() NA? 1
    Is x=set_na() NAN? 1
    Is y=x NA? 1
    Is 0/0 NA? 0
    Is 8 NA? 0

In JSValue.h WebKit explains the encoding, but not why it's used. What is the purpose of this technique? Are the benefits in space/performance high enough to balance its hackish nature?
When you are implementing a dynamically typed language, you've got to have a single type which can hold any of your objects. There are three different approaches I'm aware of for this.

Firstly, you can pass around pointers. This is what the CPython implementation does. Every object is a PyObject pointer. These pointers get passed around, and operations are performed by looking at details in the PyObject struct to figure out the type. The disadvantage is that small values like numbers get stored as boxed values, so your little 5 gets stored as a block of memory somewhere.

This leads us to the union approach, which is used by Lua. Instead of a PyObject*, each value is a struct with one field to specify the type and then a union of all the different supported types. That way we avoid allocating any memory for small values, instead storing them directly in the union.

The NaN approach stores everything as doubles, and reuses the unused portion of a NaN for the extra storage. The advantage over the union method is that we save the type field. If it's a valid double, it's a double; otherwise the mantissa is a pointer to the actual object.

Remember, this is every JavaScript object. Every variable, every value in an object, every expression. If we can reduce all of those from 96 bits to 64 bits, that is pretty impressive.

Is it worth the hack? Recall that there is a lot of demand for efficient JavaScript. JavaScript is the bottleneck in many web applications, and so making it faster is a higher priority. It's reasonable to introduce a certain degree of hackiness for performance reasons. For most cases, it'd be a bad idea, because it's introducing a degree of complexity for little gain. But in this specific case, it is worthwhile for the memory and speed improvements.
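To make the bit-level idea concrete, here is a small, hedged illustration in Java of stuffing a payload into the mantissa of a quiet NaN and getting it back out. WebKit does this in C++ with pointers as the payload; the constants below are just the standard IEEE 754 quiet-NaN layout, and the class name is invented.

    public class NanBoxing {
        // Exponent all ones plus the top mantissa bit set = a quiet NaN.
        private static final long QUIET_NAN    = 0x7FF8000000000000L;
        // The remaining 51 mantissa bits are free to carry a payload.
        private static final long PAYLOAD_MASK = 0x0007FFFFFFFFFFFFL;

        static double box(long payload) {
            return Double.longBitsToDouble(QUIET_NAN | (payload & PAYLOAD_MASK));
        }

        static boolean isBoxed(double d) {
            long bits = Double.doubleToRawLongBits(d);
            // A plain 0.0/0.0 NaN has an empty payload, so it is not mistaken for a boxed value.
            return (bits & QUIET_NAN) == QUIET_NAN && (bits & PAYLOAD_MASK) != 0;
        }

        static long unbox(double d) {
            return Double.doubleToRawLongBits(d) & PAYLOAD_MASK;
        }

        public static void main(String[] args) {
            double tagged = box(12345);
            System.out.println(Double.isNaN(tagged));  // true: still just "a double"
            System.out.println(unbox(tagged));         // 12345 (payload survives on typical platforms)
        }
    }

Every ordinary double passes through untouched; only values whose bit pattern is a NaN with a non-empty payload are treated as boxed, which is the whole trick.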
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185406", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25673/" ] }
185,411
I want to read Purely Functional Data Structures. I've easily found the thesis (which is freely available as a PDF), but see that there's a book available also. So I'd like to know what the differences are, if any, between these two publications.
Here's a blog post by the author, where he says I thought that the basic organization of my dissertation was pretty solid, so mostly I was able to focus on adding and adjusting things to make it work better as a book. For example, I no longer had the constraint from my dissertation of having to focus on original work, so I was free to add data structures that had been developed by other people. and The main additions were expanded introductory material (such as my simplification of red-black trees, which was developed a few weeks after my thesis defense in a series of emails with Richard Bird), exercises, and an appendix including all the source code in Haskell (the main text used source code in Standard ML).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185411", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55383/" ] }
185,430
What is a hidden AJAX request?

I've noticed an increase in the usage of hidden AJAX requests designed to make a user's action appear to happen immediately. I'll refer to this type of AJAX request as non-blocking. It's an AJAX request made without the user being aware it's happening; it's performed in the background and its operation is silent (there is no visible indication that the AJAX call completed successfully). The goal is to make the operation appear to have happened immediately when it really has not finished.

Here are examples of non-blocking AJAX requests:

User clicks delete on a collection of emails. The items disappear immediately from their inbox, and they can continue with other operations. Meanwhile, an AJAX request is processing the deletion of the items in the background.

User fills out a form for new records. Clicks save. The new item appears in the list immediately. The user can continue to add new records.

To clarify, here are examples of blocking AJAX requests:

User clicks delete on a collection of emails. An hourglass cursor appears. The AJAX request is made, and when it responds the hourglass cursor is turned off. The user has to wait a second for the operation to complete.

User fills out a form for new records. Clicks save. The form turns grey with an AJAX loader animating. A message is shown "Your data was saved", and the new record appears in the list.

The difference between the two above scenarios is that a non-blocking AJAX setup does not provide feedback of an operation being performed, and a blocking AJAX setup does.

The Risk Of Hidden AJAX Requests

The biggest risk of this style of AJAX request is that the web application could be in a completely different state when the AJAX request fails. For example, a non-blocking example: user selects a bunch of emails, clicks the delete button. The operation appears to happen immediately (the items just disappear from the list). The user then clicks the compose button and starts typing a new email. It's at this time that the JavaScript code discovers the AJAX request failed. The script could show an error message, but it really is pointless at this time.

Alternately, a blocking example: user selects a bunch of emails, clicks the delete button, sees an hourglass, but the operation fails. They get an error message saying "error. blah blah blah". They are returned back to the list of emails, and they still have the emails they wanted to delete selected. They could attempt to delete them again.

There are also other technical risks for performing non-blocking AJAX requests. The user could close the browser, could navigate to another website, and could navigate to another location in the current web application that makes the context of any error response meaningless.

So Why Is It Becoming So Popular?

Facebook, Google, Microsoft, etc.: all these large domains are increasingly using non-blocking AJAX requests to make operations appear to be performed instantly. I've also seen an increase in form editors that have no save or submit button. As soon as you leave a field or press enter, the value is saved. There is no "your profile has been updated" message or saving step. AJAX requests are not a certainty, and shouldn't be treated as successful until they have completed, but so many major web applications operate just like that.

Are these websites that use non-blocking AJAX calls to simulate responsive applications taking an unnecessary risk at the cost of appearing fast?
Is this a design pattern that we should all be following in order to remain competitive?
It's not so much "fake" performance as real responsiveness. There are a number of reasons why it's popular:

Internet connections are fairly reliable nowadays. The risk of an AJAX request failing is very low.

The operations being performed are not really safety critical. If your emails don't get deleted on the server, the worst that happens is you have to delete them again the next time you come to the page.

You can design your page to undo the action if the request fails, but you don't really have to be that subtle because it means your connection to the server is broken. It's easier to say "Connection lost. Unable to save recent changes. Try again later."

You can design your page to only allow one pending AJAX request, so your state won't get too out of sync from the server.

The browser warns you if you try to close or navigate away from a page while an AJAX request is pending.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185430", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52871/" ] }
185,585
I just learned about currying, and while I think I understand the concept, I'm not seeing any big advantage in using it. As a trivial example I use a function that adds two values (written in ML). The version without currying would be

    fun add(x, y) = x + y

and would be called as add(3, 5), while the curried version is

    fun add x y = x + y    (* short for val add = fn x => fn y => x + y *)

and would be called as add 3 5. It seems to me to be just syntactic sugar that removes one set of parentheses from defining and calling the function. I've seen currying listed as one of the important features of functional languages, and I'm a bit underwhelmed by it at the moment. The concept of creating a chain of functions that each consume a single parameter, instead of a function that takes a tuple, seems rather complicated to use for a simple change of syntax. Is the slightly simpler syntax the only motivation for currying, or am I missing some other advantages that are not obvious in my very simple example? Is currying just syntactic sugar?
With curried functions you get easier reuse of more abstract functions, since you get to specialize. Let's say that you have an adding function

    add x y = x + y

and that you want to add 2 to every member of a list. In Haskell you would do this:

    map (add 2) [1, 2, 3]   -- gives [3, 4, 5]
    -- actually one could just do: map (2+) [1, 2, 3], but that may be Haskell specific

Here the syntax is lighter than if you had to create a function add2

    add2 y = add 2 y
    map add2 [1, 2, 3]

or if you had to make an anonymous lambda function:

    map (\y -> 2 + y) [1, 2, 3]

It also allows you to abstract away from different implementations. Let's say you had two lookup functions, one from a list of key/value pairs and a key to a value, and another from a map from keys to values and a key to a value, like this:

    lookup1 :: [(Key, Value)] -> Key -> Value   -- or perhaps it should be Maybe Value
    lookup2 :: Map Key Value -> Key -> Value

Then you could make a function that accepted a lookup function from Key to Value. You could pass it either of the above lookup functions, partially applied with either a list or a map, respectively:

    myFunc :: (Key -> Value) -> .....

In conclusion: currying is good, because it lets you specialize/partially apply functions using a lightweight syntax and then pass these partially applied functions around to higher order functions such as map or filter. Higher order functions (which take functions as parameters or yield them as results) are the bread and butter of functional programming, and currying and partially applied functions enable higher order functions to be used much more effectively and concisely.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185585", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7858/" ] }
185,636
Is it a good idea or a bad idea to create an interface for data transfer objects? (Presume that the object is usually mutable.) Though my example is in Java, it should be applicable to any other language that has similar concepts.

    interface DataTransferObject {
        String getName();
        void setName(String name);
    }

    class RealDataTransferObject implements DataTransferObject {
        String name;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

Of course, this is a simplified example; in real life there may be more fields.
The general answer is no, because you should never add code without having a specific, concrete reason for it, and there is no general reason for such an interface.

That being said, sometimes there can be a good reason. But in all cases I have seen, these interfaces were partial, covering only one or a few properties shared by multiple classes I wanted to use polymorphically without giving them a common superclass. Typical candidates are an Id property to use in some sort of registry or a Name property to display to the user. But it can be useful in any case where you want some code to handle everything that has an X: just create an XSource interface that contains the getX (and, only if required, the setX) methods.

But a separate interface for every model class, containing all the properties? I can't imagine a good reason to do that. A bad reason would be a badly designed framework that requires it; Entity EJBs did just that, if I remember correctly. Thankfully they were so bad they never gained much traction and have been deprecated since EJB 3.0.

Sidenote: please avoid using the term "value object" to describe Java beans with only trivial getters and setters; it conflicts with the more common definition of value object as something with no identity that is usually immutable. A better term would be DTO or model class, though in the latter case note that anemic domain models are considered an antipattern.
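As a hedged sketch of that "XSource" idea in Java (the names HasId, Customer and Registry are invented for illustration), the interface stays narrow and the code that consumes it never learns about the rest of the class:

    // Everything that has an id, regardless of which model class it is.
    interface HasId {
        long getId();
    }

    class Customer implements HasId {
        private final long id;
        private String name;

        Customer(long id, String name) {
            this.id = id;
            this.name = name;
        }

        @Override
        public long getId() { return id; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // A registry only needs the id, so it can hold any HasId implementation
    // without a full per-class DTO interface.
    class Registry<T extends HasId> {
        private final java.util.Map<Long, T> byId = new java.util.HashMap<>();

        void register(T item) { byId.put(item.getId(), item); }
        T find(long id)       { return byId.get(id); }
    }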
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185636", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/42195/" ] }
185,719
Let's say you're writing a Yahtzee game TDD style. You want to test the part of the code that determines whether or not a set of five die rolls is a full house. As far as I know, when doing TDD, you follow these principles:

Write tests first
Write the simplest thing possible that works
Refine and refactor

So an initial test might look something like this:

    public void Returns_true_when_roll_is_full_house()
    {
        FullHouseTester sut = new FullHouseTester();
        var actual = sut.IsFullHouse(1, 1, 1, 2, 2);
        Assert.IsTrue(actual);
    }

When following the "Write the simplest thing possible that works", you should now write the IsFullHouse method like this:

    public bool IsFullHouse(int roll1, int roll2, int roll3, int roll4, int roll5)
    {
        if (roll1 == 1 && roll2 == 1 && roll3 == 1 && roll4 == 2 && roll5 == 2)
        {
            return true;
        }
        return false;
    }

This results in a green test but the implementation is incomplete. Should you unit test every possible valid combination (both of values and positions) for a full house? That looks like the only way to be absolutely sure that your IsFullHouse code is completely tested and correct, but it also sounds quite insane to do that. How would you unit test something like this?

Update

Erik and Kilian point out that using literals in the initial implementation to get a green test might not be the best idea. I'd like to explain why I did that and that explanation does not fit in a comment. My practical experience with unit testing (especially using a TDD approach) is very limited. I remember watching a recording of Roy Osherove's TDD Masterclass on Tekpub. In one of the episodes he builds a String Calculator TDD style. The full specification of the String Calculator can be found here: http://osherove.com/tdd-kata-1/

He starts with a test like this:

    public void Add_with_empty_string_should_return_zero()
    {
        StringCalculator sut = new StringCalculator();
        int result = sut.Add("");
        Assert.AreEqual(0, result);
    }

This results in this first implementation of the Add method:

    public int Add(string input)
    {
        return 0;
    }

Then this test is added:

    public void Add_with_one_number_string_should_return_number()
    {
        StringCalculator sut = new StringCalculator();
        int result = sut.Add("1");
        Assert.AreEqual(1, result);
    }

And the Add method is refactored:

    public int Add(string input)
    {
        if (input.Length == 0)
        {
            return 0;
        }
        return 1;
    }

After each step Roy says "Write the simplest thing that will work". So I thought I would give this approach a try when trying to do a TDD-style Yahtzee game.
There are already lots of good answers to this question, and I've commented and upvoted several of them. Still, I'd like to add some thoughts.

Flexibility isn't for novices

The OP clearly states that he's not experienced with TDD, and I think a good answer must take that into account. In the terminology of the Dreyfus model of skill acquisition, he's probably a Novice. There's nothing wrong with being a novice; we are all novices when we start learning something new. However, what the Dreyfus model explains is that novices are characterized by

rigid adherence to taught rules or plans
no exercise of discretionary judgement

That's not a description of a personality deficiency, so there's no reason to be ashamed of it; it's a stage we all need to go through in order to learn something new. This is also true for TDD. While I agree with many of the other answers here that TDD doesn't have to be dogmatic, and that it can sometimes be more beneficial to work in an alternative way, that doesn't help anyone just starting out. How can you exercise discretionary judgement when you have no experience? If a novice accepts the advice that sometimes it's OK not to do TDD, how can he or she determine when it's OK to skip doing TDD? With no experience or guidance, the only thing a novice can do is to skip out of TDD every time it becomes too difficult. That's human nature, but not a good way to learn.

Listen to the tests

Skipping out of TDD any time it becomes hard is to miss out on one of the most important benefits of TDD. Tests provide early feedback about the API of the SUT. If the test is hard to write, it's an important sign that the SUT is hard to use. This is the reason why one of the most important messages of GOOS is: listen to your tests!

In the case of this question, my first reaction when seeing the proposed API of the Yahtzee game, and the discussion about combinatorics that can be found on this page, was that this is important feedback about the API. Does the API have to represent dice rolls as an ordered sequence of integers? To me, that smells of Primitive Obsession. That's why I was happy to see the answer from tallseth suggesting the introduction of a Roll class. I think that's an excellent suggestion. However, I think that some of the comments to that answer get it wrong. What TDD then suggests is that once you get the idea that a Roll class would be a good idea, you suspend work on the original SUT and start working on TDD'ing the Roll class.

While I agree that TDD is more aimed at the 'happy path' than it's aimed at comprehensive testing, it still helps to break the system down into manageable units. A Roll class sounds like something you could TDD to completion much more easily. Then, once the Roll class is sufficiently evolved, you would go back to the original SUT and flesh it out in terms of Roll inputs. The suggestion of a Test Helper doesn't necessarily imply randomness; it's just a way to make the test more readable. Another way to approach and model input in terms of Roll instances would be to introduce a Test Data Builder.

Red/Green/Refactor is a three-stage process

While I agree with the general sentiment that (if you are sufficiently experienced in TDD) you don't need to stick to TDD rigorously, I think it's pretty poor advice in the case of a Yahtzee exercise. Although I don't know the details of the Yahtzee rules, I see no convincing argument here that you can't stick rigorously with the Red/Green/Refactor process and still arrive at a proper result.

What most people here seem to forget is the third stage of the Red/Green/Refactor process. First you write the test. Then you write the simplest implementation that passes all tests. Then you refactor. It's here, in this third stage, that you can bring all your professional skills to bear. This is where you are allowed to reflect on the code.

However, I think it's a cop-out to state that you should only "Write the simplest thing possible that isn't completely braindead and obviously incorrect that works". If you (think you) know enough about the implementation beforehand, then everything short of the complete solution is going to be obviously incorrect. As far as advice goes, then, this is pretty useless to a novice. What really should happen is that if you can make all tests pass with an obviously incorrect implementation, that's feedback that you should write another test. It's surprising how often doing that leads you towards an entirely different implementation than the one you had in mind first. Sometimes, the alternative that grows like that may turn out to be better than your original plan.

Rigour is a learning tool

It makes a lot of sense to stick with rigorous processes like Red/Green/Refactor as long as one is learning. It forces the learner to gain experience with TDD not just when it's easy, but also when it's hard. Only when you have mastered all the hard parts are you in a position to make an informed decision on when to deviate from the 'true' path. That's when you start forming your own path.
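To make the Roll and Test Data Builder suggestions concrete, here is one possible shape, sketched in Java with JUnit rather than the C# of the question; every name is invented, and the full-house rule shown (three of one value plus two of another) is just the common reading of it:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // A Roll value type gives the scoring rules a natural home and keeps
    // the API from being "five loose ints".
    class Roll {
        private final List<Integer> dice;

        Roll(int d1, int d2, int d3, int d4, int d5) {
            this.dice = Arrays.asList(d1, d2, d3, d4, d5);
        }

        boolean isFullHouse() {
            Map<Integer, Long> counts = dice.stream()
                    .collect(Collectors.groupingBy(d -> d, Collectors.counting()));
            return counts.containsValue(3L) && counts.containsValue(2L);
        }
    }

    // A test data builder keeps tests readable without hiding the data they use.
    class RollBuilder {
        private int[] dice = {1, 2, 3, 4, 5};

        RollBuilder withDice(int... dice) { this.dice = dice; return this; }

        Roll build() { return new Roll(dice[0], dice[1], dice[2], dice[3], dice[4]); }
    }

    class RollTest {
        @org.junit.Test
        public void threeOfOneValueAndTwoOfAnotherIsAFullHouse() {
            Roll roll = new RollBuilder().withDice(2, 2, 6, 6, 6).build();
            org.junit.Assert.assertTrue(roll.isFullHouse());
        }
    }

Because the rule lives on a small value type, the combinatorial worry from the question shrinks to a handful of representative cases (a full house, four of a kind, a straight), each of them one readable test.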
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185719", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7289/" ] }
185,819
We have tried to introduce developer automated testing several times at my company. Our QA team uses Selenium to automate UI tests, but I always wanted to introduce unit tests and integration tests. In the past, each time we tried it, everyone got excited for the first month or two. Then, several months in, people simply stopped doing it. A few observations and questions:

Does automated testing actually work? Most of my colleagues who used to work at other companies have tried and failed to implement an automated testing strategy. I still haven't seen a real-life software company that actually uses it and doesn't just talk about it. So many developers see automated testing as something that is great in theory but doesn't work in reality.

Our business team would love developers to do it even at a cost of 30% extra time (at least they say so). But developers are skeptics.

No one really knows how to properly do automated testing. Yes, we have all read the unit testing examples on the internet, but using them for a big project is something else altogether. The main culprit is mocking/stubbing the database or anything else that is non-trivial. You end up spending more time mocking than writing actual tests. Then, when it starts taking longer to write tests than code, that's when you give up.

Are there any good examples of unit tests/system integration tests used in complex, data-centric web applications? Any open source projects? Our application is data-centric but also has plenty of domain logic. I tried the repository approach at some point and found it pretty good for unit testing, but it came at the price of being able to optimize data access easily, and it added another layer of complexity.

We have a big project undertaken by 20 experienced developers. This would seem to be an ideal environment to introduce unit testing/integration testing. Why doesn't it work for us? How did you make it work at your company?
The hardest part of doing unit testing is getting the discipline to write tests first / early. Most developers are used to just diving into code. It also slows down the development process early on, as you are trying to figure out how to write a test for the code. However, as you get better at testing, this speeds up. And because of writing the tests, the initial quality of the code starts off higher.

When starting out, try just to write tests. Don't worry so much about mocking/stubbing things in the beginning. Keep the tests simple. Tests are code and can/should be refactored. Along those lines, if something is hard to test, that can also be a sign of a design problem. TDD does drive towards using most design patterns (in my experience, particularly the Factory pattern).

Make sure that the tests get a level of visibility. Integrate them into the release process, and ask about them during code review. Any bugs found should get a test. These things are where TDD shines.

Here are a couple of resources that I have found useful:

http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf
http://www.agitar.com/downloads/TheWayOfTestivus.pdf

Edit: One thing to keep in mind when you are writing tests: you are not trying to specify anything about the implementation of the code, only the behavior. When you write code, you test it all the time, trying to execute it with debug statements and so on. Writing tests formalizes this and provides a record of the tests that you have. That way you can check your functionality confidently without accidentally skipping a test case that you remembered halfway through the development process.
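As a small, hedged illustration of the "any bugs found should get a test" habit (the Invoice class and the bug are entirely made up), a regression test records the bug so it cannot quietly return:

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    class Invoice {
        private final List<Integer> lineCents = new ArrayList<>();

        void addLine(int cents) { lineCents.add(cents); }

        int totalInCents() {
            int total = 0;
            for (int cents : lineCents) {  // the reported bug was a loop that skipped the last item
                total += cents;
            }
            return total;
        }
    }

    public class InvoiceRegressionTest {
        @Test
        public void totalIncludesTheLastLineItem() {
            Invoice invoice = new Invoice();
            invoice.addLine(1000);
            invoice.addLine(550);
            invoice.addLine(225);
            assertEquals(1775, invoice.totalInCents());
        }
    }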
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185819", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15950/" ] }
185,868
I sent out an email earlier reminding our developers that the use of the word "shall" in your derived requirements should not carry over to your functional requirements. When writing functional requirements, the word "must" is used to describe the function a derived requirement must perform:

Derived = System shall be (requirement)
Functional = System must do (requirement)

One of our seniors sent it back saying that this was wrong and that "shall" should be used in every requirement. Am I wrong here, and should "shall" be used in every requirement? I haven't been able to find anything to back that up.
RFC 2119 "Key words for use in RFCs to Indicate Requirement Levels" goes into specifics of what different words on requirements mean. The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. From this document: MUST is equivalent to REQUIRED and SHALL indicating that the definition is an absolute requirement. MUST NOT is equivalent to SHALL NOT and indicates that it is an absolute prohibition of the specs. SHOULD is equivalent to RECOMMENDED means that there are valid reasons to ignore a particular requirement, but the implications need to be weighed. SHOULD NOT and NOT RECOMMENDED means that a particular behavior may be acceptable or useful, but again, the implications need to be weighed. MAY means OPTIONAL and that the requirement is truly optional. Interoperability with different systems that may or may not implement an optional requirement must be done. Following this RFC SHOULD be done to help ensure consistency of communication between one's internal documents and the standards world at large.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185868", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80454/" ] }
185,923
One of my team members consistently avoids making comments in his code. His code is not self-documenting, and other programmers have a difficult time understanding his code. I have asked him several times to comment his code, however he just gives excuses or claims that he will do it later. His concern is that adding comments will take too much time and delay the projects. What argument can I present to him to convince him to properly document his code? On that note, am I wrong to focus on the code comments or is this indicative of a larger problem which should be addressed?
Comments alone don't make for better code, and just pushing for "more comments" is likely to give you little more than /* increment i by 1 */ style comments. So ask yourself why you want those comments. "It's best practice" does not count as an argument unless you understand why. Now, the most striking reason for using comments is so that the code is easier to understand, and when people complain about lack of comments, they are either clueless parrots, or they have a hard time understanding the code they're working with. So don't complain about missing comments: complain about unreadable code. Or better yet, don't complain, just keep asking questions about the code. For anything you don't understand, ask the person who wrote it. You should be doing that anyway; with unreadable code, you'll just ask more questions. And if you come back to a piece of code later, and you are unsure you remember correctly what it does, ask the same question again. If comments can fix it, and your colleague has a functioning brain, he/she will at some point realize that commenting the code is much easier than having you around asking stupid questions all the time. And if you can't come up with questions, then maybe that code is perfectly readable already, and it's you who is at fault - after all, not all code needs comments. On the people skills front, avoid sounding condescending or accusing at all cost; be serious and honest about your questions.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185923", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63715/" ] }