246,762
First I want to say Java is the only language I ever used, so please excuse my ignorance on this subject. Dynamically typed languages allow you to put any value in any variable. So for example you could write the following function (pseudocode): void makeItBark(dog){ dog.bark(); } And you can pass it whatever value you like. As long as the value has a bark() method, the code will run. Otherwise, a runtime exception or something similar is thrown. (Please correct me if I'm wrong about this.) Seemingly, this gives you flexibility. However, I did some reading on dynamic languages, and what people say is that when designing or writing code in a dynamic language, you think about types and take them into account just as much as you would in a statically typed language. So for example when writing the makeItBark() function, you intend for it to only accept 'things that can bark', and you still need to make sure you only pass these kinds of things into it. The only difference is that now the compiler won't tell you when you made a mistake. Sure, there is one advantage to this approach, which is that in static languages, to achieve 'this function accepts anything that can bark', you'd need to implement an explicit Barker interface. Still, this seems like a minor advantage. Am I missing something? What am I actually gaining by using a dynamically typed language?
Dynamically-typed languages are uni-typed. Comparing type systems, there's no advantage in dynamic typing. Dynamic typing is a special case of static typing - it's a statically-typed language where every variable has the same type. You could achieve the same thing in Java (minus conciseness) by making every variable be of type Object, and having "object" values be of type Map<String, Object>: void makeItBark(Object dog) { Map<String, Object> dogMap = (Map<String, Object>) dog; Runnable bark = (Runnable) dogMap.get("bark"); bark.run(); } So, even without reflection, you can achieve the same effect in just about any statically-typed language, syntactic convenience aside. You're not getting any additional expressive power; on the contrary, you have less expressive power, because in a dynamically typed language you're denied the ability to restrict variables to certain types. Making a duck bark in a statically-typed language: moreover, a good statically-typed language will allow you to write code that works with any type that has a bark operation. In Haskell, this is a type class: class Barkable a where bark :: a -> unit This expresses the constraint that for some type a to be considered Barkable, there must exist a bark function that takes a value of that type and returns nothing. You can then write generic functions in terms of the Barkable constraint: makeItBark :: Barkable a => a -> unit makeItBark barker = bark (barker) This says that makeItBark will work for any type satisfying Barkable's requirements. This might seem similar to an interface in Java or C#, but it has one big advantage - types don't have to specify up front which type classes they satisfy. I can say that type Duck is Barkable at any time, even if Duck is a third-party type I didn't write. In fact, it doesn't matter that the writer of Duck didn't write a bark function - I can provide it after the fact when I tell the language that Duck satisfies Barkable: instance Barkable Duck where bark d = quack (punch (d)) makeItBark (aDuck) This says that Ducks can bark, and their bark function is implemented by punching the duck before making it quack. With that out of the way, we can call makeItBark on ducks. Standard ML and OCaml are even more flexible in that you can satisfy the same type class in more than one way. In these languages I can say that integers can be ordered using the conventional ordering and then turn around and say they're also orderable by divisibility (e.g. 10 > 5 because 10 is divisible by 5). In Haskell you can only instantiate a type class once. (This allows Haskell to automatically know that it's ok to call bark on a duck; in SML or OCaml you have to be explicit about which bark function you want, because there might be more than one.) Conciseness: of course, there are syntactic differences. The pseudocode you presented is far more concise than the Java equivalent I wrote. In practice, that conciseness is a big part of the allure of dynamically-typed languages. But type inference allows you to write code that's just as concise in statically-typed languages, by relieving you of having to explicitly write the types of every variable. A statically-typed language can also provide native support for dynamic typing, removing the verbosity of all the casting and map manipulations (e.g. C#'s dynamic). Correct but ill-typed programs: to be fair, static typing necessarily rules out some programs that are technically correct even though the type checker can't verify it.
For example: if this_variable_is_always_true: return "some string" else: return 6 Most statically-typed languages would reject this if statement, even though the else branch will never occur. In practice it seems no one makes use of this type of code - anything too clever for the type checker will probably make future maintainers of your code curse you and your next of kin. Case in point, someone successfully translated 4 open source Python projects into Haskell, which means they weren't doing anything that a good statically-typed language couldn't compile. What's more, the compiler found a couple of type-related bugs that the unit tests weren't catching. The strongest argument I've seen for dynamic typing is Lisp's macros, since they allow you to arbitrarily extend the language's syntax. However, Typed Racket is a statically-typed dialect of Lisp that has macros, so it seems static typing and macros are not mutually exclusive, though perhaps harder to implement simultaneously. Apples and Oranges: finally, don't forget that there are bigger differences between languages than just their type systems. Prior to Java 8, doing any kind of functional programming in Java was practically impossible; a simple lambda would require 4 lines of boilerplate anonymous class code. Java also has no support for collection literals (e.g. [1, 2, 3]). There can also be differences in the quality and availability of tooling (IDEs, debuggers), libraries, and community support. When someone claims to be more productive in Python or Ruby than in Java, that feature disparity needs to be taken into account. There's a difference between comparing languages with all batteries included, comparing language cores, and comparing type systems.
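To make the contrast concrete in the asker's own language, here is a minimal Java sketch of the "explicit Barker interface" approach the question mentions. Barker, Duck, and the punch-then-quack joke mirror the answer above; none of these names come from a real library.

interface Barker {
    void bark();
}

class Duck implements Barker {
    @Override
    public void bark() {
        // Mirrors the answer's Haskell instance: bark d = quack (punch (d))
        System.out.println("Quack! (after being punched)");
    }
}

class Kennel {
    // Works for anything that declares it can bark; the compiler checks callers.
    static void makeItBark(Barker barker) {
        barker.bark();
    }

    public static void main(String[] args) {
        makeItBark(new Duck());          // compiles and runs
        // makeItBark("just a string");  // rejected at compile time
    }
}

The price, as the answer notes, is that Duck must declare the interface up front, whereas a Haskell type class instance can be added after the fact, even for a third-party type.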
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246762", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
246,763
Please note that this isn't meant to be a Java vs. C# argument. I'm a Java programmer with no C# experience, asking just out of curiosity. I did some reading on C#, and it seems to have many more features than Java. A number of examples: Type inference. The dynamic keyword. Delegates. Optional parameters. Lambdas and LINQ (I actually have no idea what these are). Properties. However, Java doesn't really have anything that C# doesn't. My question is: why does C# have so many more native features than Java? And why didn't Java add some of these throughout the years, for example Properties or type inference? Do the Java language designers take a more simplistic approach? What is the reason for this?
Several reasons: C# came later than Java; version 1 was a blatant rip-off of Java 1.4, so it pretty much had everything Java had at that point. But then C# developed much faster than Java, because it was an exciting new platform (and had an utterly brilliant driver in Anders Hejlsberg, the father of Turbo Pascal). That allowed them to avoid all the mistakes in Java that had become obvious, while adding everything that Java practitioners wished they had. Meanwhile, Java was hampered by very strict backward compatibility goals and by a somewhat slower pace of development, partly because it tried desperately to gain a reputation for being the standard, enterprisey, reliable, non-surprising solution for the 95% of non-genius programmers. At this they succeeded, perhaps a bit too well. The result is that Java now has a bit of a feature gap. It does have huge plans for the future, but as usual with this sort of thing everything takes a bit longer than planned.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246763", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
246,793
You can find an endless list of blogs, articles and websites promoting the benefits of unit testing your source code. It's almost guaranteed that the developers who programmed the compilers for Java, C++, C# and other typed languages used unit testing to verify their work. So why then, despite its popularity, is testing absent from the syntax of these languages? Microsoft introduced LINQ to C# , so why couldn't they also add testing? I'm not looking to predict what those language changes would be, but to clarify why they are absent to begin with. As an example: We know that you can write a for loop without the syntax of the for statement. You could use while or if / goto statements. Someone decided a for statement was more efficient and introduced it into a language. Why hasn't testing followed the same evolution of programming languages?
As with many things, unit testing is best supported at the library level, not the language level. In particular, C# has numerous unit testing libraries available, as well as things that are native to the .NET Framework like Microsoft.VisualStudio.TestTools.UnitTesting. Each unit testing library has a somewhat different testing philosophy and syntax. All things being equal, more choices are better than fewer. If unit testing were baked into the language, you'd either be locked into the language designer's choices, or you'd be using... a library, and avoiding the language's testing features altogether. Examples: NUnit - General purpose, idiomatically-designed unit testing framework that takes full advantage of C#'s language features. Moq - Mocking framework that takes full advantage of lambda expressions and expression trees, without a record/playback metaphor. There are many other choices. Libraries like Microsoft Fakes can create "shims": mocks that don't require you to write your classes using interfaces or virtual methods. Linq isn't a language feature (despite its name): Linq is a library feature. We got a lot of new features in the C# language itself for free, like lambda expressions and extension methods, but the actual implementation of Linq is in the .NET Framework. There's some syntactic sugar that was added to C# to make Linq statements cleaner, but that sugar is not required to use Linq.
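The same "testing lives in a library, not in the syntax" point holds in Java. A minimal sketch with JUnit 5, where Calculator is an invented class for the example: the only testing-specific pieces (@Test and assertEquals) come from the org.junit.jupiter.api packages, an ordinary dependency rather than language syntax.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorTest {
    @Test
    void addsTwoNumbers() {
        // Nothing here is Java syntax for testing; it is all plain library code.
        assertEquals(4, new Calculator().add(2, 2));
    }
}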
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246793", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52871/" ] }
246,917
There are some good examples of well-documented code out there, such as the Java API. But a lot of code in public projects such as git and in internal projects of companies is poorly documented and not very newcomer friendly. In all my software development stints, I have had to deal with poorly documented code. I noticed the following things - Little or no comments in code. Method and variable names are not self describing. There is little or no documentation for how the code fits into the system or business processes. Hiring bad developers or not mentoring the good ones. They can't write simple and clean code. Hence it's difficult or impossible for anyone, including the developer, to document the code. As a result, I have had to go through a lot of code and talk to many people to learn things. I feel this wastes everyone's time. It also creates the need for KT/knowledge transfer sessions for newcomers to a project. I learned that documentation is not given the attention it deserves because of the following reasons: Laziness. Developers don't like to do anything but code. Job security. (If no one can understand your code easily, then you might not be easily replaceable.) Tight deadlines leave little time to document. So, I am wondering if there is a way to encourage and enforce good documentation practices in a company or project. What are the strategies to be used for creating decent documentation for the systems and code of any project, regardless of its complexity? Are there any good examples of when minimal or no documentation is needed? IMHO, I feel that we should have a documentation review after a project is delivered. If it is not simple, concise, illustrative and user friendly, the developer or technical documentation engineer should own the responsibility for it and be made to fix it. I neither expect people to produce reams of documentation, nor hope that it will be as user friendly as the Head First books, but I expect it to eliminate the need for hours of analysis and wasteful KT sessions. Is there a way to end or alleviate this madness? "Document driven development" perhaps?
How to document code? You already have a hint: look at how Java API is documented. More generally, there is no unique set of rules which apply to every project. When I work on business-critical large-scale projects, the documentation has nothing to do with the one I would write for a small open source library, which, in turn, has nothing to do with the documentation of my medium-scale personal project. Why many open source projects are not documented well? Because most open source projects are made by people who contribute to those projects because it's fun. Most programmers and developers consider that writing documentation is not fun enough to be done for free. Why many closed-source projects are not documented well? Because it costs a huge amount of money to (1) write good documentation and to (2) maintain it. The immediate cost (cost of writing the documentation) is clearly visible to the stakeholders: if your team asks to spend the next two months documenting the project, it's two additional months of salary to pay. The long term cost (cost of maintaining the documentation) becomes noticeable pretty easy too to the managers, and is often the first target when they must lower the cost or shorten the delays. This causes an additional problem of outdated documentation which quickly becomes useless, and is extremely expensive to update. The long term savings (savings from not having to waste a few days exploring the legacy code just to understand a basic thing which should have been documented years ago) are, on the other hand, difficult to measure, which confirms the feeling of some managers that writing and maintaining documentation is a waste of time. What I often observe is that: At the beginning, the team is willing to document a lot. Over time, pressure of deadlines and lack of interest make it more and more difficult to maintain the documentation. A few months later, a new person who joins the project practically can't use the documentation, because it doesn't correspond at all to the actual system. Noticing that, management blames developers for not maintaining the documentation; developers ask to spend a few weeks updating it. If management grants a few weeks for that, the cycle repeats. If management refuses, based on previous experience, it only increases the bad experience, since the product lacks documentation, but a few months were spent writing and maintaining it. Documentation should be a continuous process, just like testing. Spend a week simply coding a few thousands of LOC, and getting back to tests and documentation is very, very painful. How to encourage the team to write documentation? Similarly to the ways to encourage people to write clean code, to do regular refactoring, to use design patterns or to add enough unit tests. Lead by example. If you write good documentation, your pairs might start doing it too. Do systematic code reviews, including formal code reviews targeted at inspecting the documentation. If some members of the team are particularly antipathetic to good documentation (or documentation at all), discuss the subject with them privately, to understand what are the impediments which prevent them from writing better documentation. If they blame the lack of time, you see the source of the problems. Make the presence or the lack of documentation measurable for a few weeks or months, but don't focus on that. 
For example, you may measure the number of lines of comments per LOC, but don't make it a permanent measure; otherwise, developers will start writing long but meaningless comments just to get rid of low scores. Use gamification. This comes together with the previous point. Use positive/negative reinforcement. (See the comment by SJuan76.) Treat the lack of comments as errors. For example, in Visual Studio, you can check an option to generate XML documentation. If you also check that all warnings are treated as errors, the lack of a comment at the top of a class or a method will halt the compilation. As with the three previous points, this one should be used with caution. I used it for a while with a particularly tough team of beginner programmers, and it ended up with StyleCop-compliant comments like this:

/// <summary>
/// Gets or sets the PrimaryHandling.
/// </summary>
public Workflow PrimaryHandling { get; set; }

which were, hm..., not particularly helpful. Remember: nothing automated can help you pinpoint bad comments when programmers want to screw with you. Only code reviews and other human tasks will help. "Are there any good examples of when minimal or no documentation is needed?" Documentation explaining the architecture and the design is not needed: for a prototype; for a personal project written in a few hours to accomplish a task, when you are pretty sure the project won't be maintained any longer; for any project where it is obvious, given its small size coupled with particularly clean code, that you will spend more time writing documentation than all the future maintainers will spend exploring the code. In-code documentation (code comments) is not needed, according to some developers, when the code is self-documenting. For them, the presence of comments is, except in rare cases, not a good sign, but a sign that the code wasn't refactored enough to be clear without the need for comments. "I feel that we should have a documentation review after a project is delivered." If your project is delivered at least once per week, that's the way to go. If your project is not agile and is delivered at intervals of six months, then do more regular reviews.
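As a Java-flavored counterpart to the StyleCop example above (names invented purely for illustration), a documentation-coverage rule would accept both comments below, but only the second tells a maintainer anything the signature does not already say:

class BeforeExample {
    /** Gets the grace period days. */                 // restates the signature: noise
    public int getGracePeriodDays() { return 5; }
}

class AfterExample {
    /**
     * Days after the due date during which no late fee is charged.
     * Always non-negative; 0 means late fees apply immediately.
     */
    public int getGracePeriodDays() { return 5; }
}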
{ "source": [ "https://softwareengineering.stackexchange.com/questions/246917", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/129904/" ] }
247,084
I'm an inexperienced programmer. I have been assigned to develop a Firefox plugin despite having no prior experience. So I followed the tutorial at MDN. I learned so many things; it's exciting and overwhelming at the same time. When I finally started programming, I ended up using the extension the tutorial gave me and modifying it. And after realizing there is tons of code out there that pretty much does everything my extension has to do, I ended up analyzing that code and implanting it into mine with some modifications to suit my own needs.... So yeah, basically what I'm asking is: will spending a lot of my time observing other people's code and modifying it, instead of making my own, improve my skill as a programmer in general?
Reading other people's code is in fact a very good habit, since it's the best way to understand what's out there and what the programmer community will presumably be familiar with. Your code has to be understandable by you and by everyone else who will ever have to maintain it, so it's important to acquire an understanding of what is and isn't readable - even if you read really bad code it still serves as a counter-example. As for reusing vs. implementing your own: reading other people's code and copying it without understanding, in the hope that it may fulfill your requirements, is bad, because it doesn't result in you learning something. But detecting existing code that does do what you want and reusing it is good, because it's more efficient than writing your own version, and by finding it and determining that it does in fact do the desired thing, you have proven that you could, in principle, rewrite the solution on your own if you had to. The result is that you save time and don't miss out on learning. In fact, researching existing solutions may even be the critical point that teaches you how to do something, so that you learn something new and save time simultaneously. The litmus test for copying someone else's code is this: do I understand what the code does and reuse it in order to save time, or am I simply throwing one version after another at the problem until I hit on something that works?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/247084", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139829/" ] }
247,183
I'm trying to determine the technical details of why software produced using programming languages for certain operating systems only work with them. It is my understanding that binaries are specific to certain processors due to the processor specific machine language they understand and the differing instruction sets between different processors. But where does the operating system specificity come from? I used to assume it was APIs provided by the OS but then I saw this diagram in a book: Operating Systems - Internals and Design Principles 7th ed - W. Stallings (Pearson, 2012) As you can see, APIs are not indicated as a part of the operating system. If for example I build a simple program in C using the following code: #include<stdio.h> main() { printf("Hello World"); } Is the compiler doing anything OS specific when compiling this?
You mention on how if the code is specific to a CPU, why must it be specific also to an OS. This is actually more of an interesting question that many of the answers here have assumed. CPU Security Model The first program run on most CPU architectures runs inside what is called the inner ring or ring 0 . How a specific CPU arch implements rings varies, but it stands that nearly every modern CPU has at least 2 modes of operation, one which is privileged and runs 'bare metal' code which can perform any legal operation the CPU can perform and the other is untrusted and runs protected code which can only perform a defined safe set of capabilities. Some CPUs have far higher granularity however and in order to use VMs securely at least 1 or 2 extra rings are needed (often labelled with negative numbers) however this is beyond the scope of this answer. Where the OS comes in Early single tasking OSes In very early DOS and other early single tasking based systems all code was run in the inner ring, every program you ever ran had full power over the whole computer and could do literally anything if it misbehaved including erasing all your data or even doing hardware damage in a few extreme cases such as setting invalid display modes on very old display screens, worse, this could be caused by simply buggy code with no malice whatsoever. This code was in fact largely OS agnostic, as long as you had a loader capable of loading the program into memory (pretty simple for early binary formats) and the code did not rely on any drivers, implementing all hardware access itself it should run under any OS as long as it is run in ring 0. Note, a very simple OS like this is usually called a monitor if it is simply used to run other programs and offers no additional functionality. Modern multi tasking OSes More modern operating systems including UNIX , versions of Windows starting with NT and various other now obscure OSes decided to improve on this situation, users wanted additional features such as multitasking so they could run more than one application at once and protection, so a bug (or malicious code) in an application could no longer cause unlimited damage to the machine and data. This was done using the rings mentioned above, the OS would take the sole place running in ring 0 and applications would run in the outer untrusted rings, only able to perform a restricted set of operations which the OS allowed. However this increased utility and protection came at a cost, programs now had to work with the OS to perform tasks they were not allowed to do themselves, they could no longer for example take direct control over the hard disk by accessing its memory and change arbitrary data, instead they had to ask the OS to perform these tasks for them so that it could check that they were allowed to perform the operation, not changing files that did not belong to them, it would also check that the operation was indeed valid and would not leave the hardware in an undefined state. Each OS decided on a different implementation for these protections, partially based on the architecture the OS was designed for and partially based around the design and principles of the OS in question, UNIX for example put focus on machines being good for multi user use and focused the available features for this while windows was designed to be simpler, to run on slower hardware with a single user. 
The way user-space programs also talk to the OS is completely different on X86 as it would be on ARM or MIPS for example, forcing a multi-platform OS to make decisions based around the need to work on the hardware it is targeted for. These OS specific interactions are usually called "system calls" and encompass how a user space program interacts with the hardware through the OS completely, they fundamentally differ based on the function of the OS and thus a program that does its work through system calls needs to be OS specific. The Program Loader In addition to system calls, each OS provides a different method to load a program from the secondary storage medium and into memory , in order to be loadable by a specific OS the program must contain a special header which describes to the OS how it may be loaded and run. This header used to be simple enough that writing a loader for a different format was almost trivial, however with modern formats such as elf which support advanced features such as dynamic linking and weak declarations it is now near impossible for an OS to attempt to load binaries which were not designed for it, this means, even if there were not the system call incompatibilities it is immensely difficult to even place a program in ram in a way in which it can be run. Libraries Programs rarely use system calls directly however, they almost exclusively gain their functionality though libraries which wrap the system calls in a slightly friendlier format for the programming language, for example, C has the C Standard Library and glibc under Linux and similar and win32 libs under Windows NT and above, most other programming languages also have similar libraries which wrap system functionality in an appropriate way. These libraries can to some degree even overcome the cross platform issues as described above, there are a range of libraries which are designed around providing a uniform platform to applications while internally managing calls to a wide range of OSes such as SDL , this means that though programs cannot be binary compatible, programs which use these libraries can have common source between platforms, making porting as simple as recompiling. Exceptions to the Above Despite all I have said here, there have been attempts to overcome the limitations of not being able to run programs on more than one operating system. Some good examples are the Wine project which has successfully emulated both the win32 program loader, binary format and system libraries allowing Windows programs to run on various UNIXes. There is also a compatibility layer allowing several BSD UNIX operating systems to run Linux software and of course Apple's own shim allowing one to run old MacOS software under MacOS X. However these projects work through enormous levels of manual development effort. Depending on how different the two OSes are the difficulty ranges from a fairly small shim to near complete emulation of the other OS which is often more complex than writing an entire operating system in itself and so this is the exception and not the rule.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/247183", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139929/" ] }
247,203
Having looked at some languages for functional programming, I always wondered why some fp-languages use one or more whitespace characters for function application (and definition), whereas most (all?) imperative/object-oriented languages are using parentheses, which seems to be the more mathematical way. I also think that the latter style is much more clear and readable than without the parens. So if we have a function f(x) = x² there are the two alternatives to calling it: FP: f x Examples: ML, Ocaml, F# Haskell LISP, Scheme (somehow) Non-FP: f(x) Examples: Almost all imperative languages (I know, see the comments/answers) Erlang Scala (also allows "operator notation" for single arguments) What are the reasons for "leaving out" the parentheses?
"which seems to be the more mathematical way": functional languages are inspired by lambda calculus. In this field, parentheses are not used for function application. "I also think that the latter style is much more clear and readable than without the parens": readability is in the eye of the beholder. You are not used to reading it. It is a bit like mathematical operators. If you understand the associativity, you only need a few parens to clarify the structure of your expression. Often you don't need them. Currying is also a good reason to use this convention. In Haskell, you can define the following:

add :: Int -> Int -> Int
add x y = x + y

x = add 5 6     -- x == 11
f = add 5
y = f 6         -- y == 11
z = ((add 5) 6) -- explicit parentheses; z == 11

With parens, you could use two conventions: f(5, 6) (not curried) or f(5)(6) (curried). The Haskell syntax helps you get used to the currying concept. You can still use a non-curried version, but it is more painful to use with combinators:

add' :: (Int, Int) -> Int
add' (x, y) = x + y

u = add'(5, 6)                -- just like other languages
l = [1, 2, 3]
l1 = map (add 5) l            -- [6, 7, 8]
l2 = map (\x -> add'(5, x)) l -- like other languages

Notice how the second version forces you to register x as a variable, and that the subexpression is a function which takes an integer and adds 5 to it? The curried version is much lighter, but also considered by many as more readable. Haskell programs make extensive use of partial application and combinators as a means of defining and composing abstractions, so this is not a toy example. A good function interface will be one where the order of parameters provides a friendly curried usage. Another point: a function without parameters would have to be called with f(). In Haskell, since you only manipulate immutable, lazily evaluated values, you just write f, and consider it a value which will need to perform some computation when needed. Since its evaluation won't have any side effect, it makes no sense to have a different notation for the parameterless function and its returned value. There are also other conventions for function application: Lisp: (f x) -- prefix with external parentheses; Forth: x f -- postfix.
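For comparison, here is a rough Java sketch of the curried versus uncurried distinction using java.util.function; the add/add5 names mirror the Haskell above and are purely illustrative. It shows the same partial application, and also why the parenthesized encoding is noisier than whitespace application.

import java.util.function.BiFunction;
import java.util.function.Function;

class CurryDemo {
    public static void main(String[] args) {
        // Uncurried: one function taking both arguments at once, like add'(5, 6).
        BiFunction<Integer, Integer, Integer> addUncurried = (x, y) -> x + y;

        // Curried: a function returning a function, like Haskell's add 5 6.
        Function<Integer, Function<Integer, Integer>> add = x -> y -> x + y;

        int a = addUncurried.apply(5, 6); // 11
        int b = add.apply(5).apply(6);    // 11, i.e. (add 5) 6

        // Partial application falls out of the curried form for free.
        Function<Integer, Integer> add5 = add.apply(5);
        int c = add5.apply(6);            // 11

        System.out.println(a + " " + b + " " + c);
    }
}

The nested Function types and repeated .apply calls are the cost of encoding currying in a language whose application syntax is parentheses-based, which is roughly the conciseness argument made above.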
{ "source": [ "https://softwareengineering.stackexchange.com/questions/247203", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73396/" ] }
247,267
Maybe a little tongue-in-cheek, but as I can't find this answer anywhere through Google, so to ensure Software Engineering has the answer: What is a helper? I have seen the name being used everywhere (module names, class names, method names), as if the semantics were deep and meaningful, but in the context of Computer Science (although I don't have a degree in it), I've never seen a description or definition anywhere! Is it a design pattern? Is it an algorithm? I once worked on a program in which the module and class were both called somethingsomethinghelper (where somethingsomething was fairly generic too) and I promptly renamed it to something that made sense to me, but I feel like I'm missing something here!
A Helper class is a lesser-known code smell where a coder has identified some miscellaneous, commonly used operations and attempted to make them reusable by lumping them together in an unnatural grouping. Successive developers have then come onto the project and not realised that the helper class exists, and have consequently rewritten the same common operations, or even created more Helper classes. But seriously, the main problem with Helper classes is that they are usually operations that act on a specific class, which obviously means in OO terms that they are suffering from an acute case of Feature Envy. This failure to package the behaviour with the data it acts on is why developers so often (in my experience) fail to find it. In addition to this, as you have already identified, SomethingSomethingHelper is actually a terrible name. It is undescriptive, and gives you no real inkling of what sort of operations the class does (it helps?), which also means that it's not obvious when adding new behaviours whether they belong in the Helper class or not. I would break up such classes along the lines of related behaviour that logically groups together, and then rename the new classes to reflect what they do.
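A minimal before/after sketch of that Feature Envy point in Java, with an invented OrderHelper and Order; the names and the pricing rule are purely illustrative. The static helper only ever touches Order's data, so the fix is to move the behaviour onto Order under a descriptive name.

import java.util.List;

// Before: a data-only class plus a grab-bag helper; the helper has Feature Envy
// because it only ever works with Order's own data.
class Order {
    List<Double> linePrices;
    double taxRate;
}

class OrderHelper {
    static double calculateTotal(Order order) {
        double sum = order.linePrices.stream().mapToDouble(Double::doubleValue).sum();
        return sum * (1 + order.taxRate);
    }
}

// After: the behaviour lives with the data it acts on, under a descriptive name,
// and OrderHelper can be deleted. (Renamed here only to keep the sketch compilable.)
class OrderWithBehaviour {
    private final List<Double> linePrices;
    private final double taxRate;

    OrderWithBehaviour(List<Double> linePrices, double taxRate) {
        this.linePrices = linePrices;
        this.taxRate = taxRate;
    }

    double totalWithTax() {
        double sum = linePrices.stream().mapToDouble(Double::doubleValue).sum();
        return sum * (1 + taxRate);
    }
}

Callers then write order.totalWithTax() instead of OrderHelper.calculateTotal(order), and the name finally says what the operation is.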
{ "source": [ "https://softwareengineering.stackexchange.com/questions/247267", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/102438/" ] }
247,272
I am mainly a C programmer. In my world, writing likeThis or like_this is just a matter of style. In Haskell however, it seems that camelCase is the definite choice. Personally, I find the latter much more readable. Think pthread_mutexattr_init vs PthreadMutexAttrInit. What's more, I have configured vim to swap the numbers and their alternate symbols (in C), since numbers happen to be written much less frequently than symbols such as parentheses, star, ampersand etc., which makes life easier on my wrist. As a bonus, this lets me write this_sort_of_thing without using the shift key. My question is, from the Haskell programmers, whether using underscores in names is acceptable to the Haskell community or not. Is camelCase an unwritten rule or a common convention? Would it be ok to make the public functions likeThis but internally write like_this?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/247272", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36503/" ] }
247,440
In reading about various sorting algorithms I've seen it mentioned that some are "stable" and some are not. What does that mean, and what tradeoffs are involved on that basis when selecting an algorithm?
A stable sort is one which preserves the original order of the input set, where the comparison algorithm does not distinguish between two or more items. Consider a sorting algorithm that sorts cards by rank , but not by suit. The stable sort will guarantee that the original order of cards having the same rank is preserved; the unstable sort will not.
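A small Java sketch of the card example above, using records (Java 16+); the Card type is invented for the illustration. The JDK's object sorts (Collections.sort / List.sort) are documented as stable, so cards that compare equal by rank keep their original relative order, which here encodes the suit order.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class StableSortDemo {
    // Illustrative card type: rank and suit only.
    record Card(int rank, String suit) {}

    public static void main(String[] args) {
        List<Card> hand = new ArrayList<>(List.of(
                new Card(7, "spades"),
                new Card(5, "hearts"),
                new Card(7, "clubs"),
                new Card(5, "diamonds")));

        // Stable sort by rank: hearts stays before diamonds, spades before clubs.
        hand.sort(Comparator.comparingInt(Card::rank));

        hand.forEach(c -> System.out.println(c.rank() + " of " + c.suit()));
    }
}

An unstable algorithm would be free to interleave equal-rank cards arbitrarily, which is exactly the tradeoff the answer describes.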
{ "source": [ "https://softwareengineering.stackexchange.com/questions/247440", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1767/" ] }
249,572
I have a colleague sitting next to me who designed an interface like this: public interface IEventGetter { public List<FooType> getFooList(String fooName, Date start, Date end) throws Exception; .... } The problem is, right now, we are not using this "end" parameter anywhere in our code; it's just there because we might have to use it some time in the future. We are trying to convince him it's a bad idea to put parameters into interfaces that are of no use right now, but he keeps on insisting that a lot of work will have to be done if we implement the use of the "end" date some time later and have to adapt all the code then. Now, my question is: are there any sources from "respected" coding gurus that handle a topic like this, which we can link him to?
Invite him to learn about YAGNI . The Rationale part of Wikipedia page may be particularly interesting here: According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages: The time spent is taken from adding, testing or improving the necessary functionality. The new features must be debugged, documented, and supported. Any new feature imposes constraints on what can be done in the future, so an unnecessary feature may preclude needed features from being added in the future. Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work correctly, even if it eventually is needed. It leads to code bloat; the software becomes larger and more complicated. Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it. Adding the new feature may suggest other new features. If these new features are implemented as well, this could result in a snowball effect towards feature creep. Other possible arguments: “80% of the lifetime cost of a piece of software goes to maintenance” . Writing code just in time reduces the cost of the maintenance: one has to maintain less code, and can focus on the code actually needed. Source code is written once, but read dozens of times. An additional argument, not used anywhere, would lead to time wasted understanding why is there an argument which is not needed. Given that this is an interface with several possible implementations makes things only more difficult. Source code is expected to be self-documenting. The actual signature is misleading, since a reader would think that end affects either the result or the execution of the method. Persons writing concrete implementations of this interface may not understand that the last argument shouldn't be used, which would lead to different approaches: I don't need end , so I'll simply ignore its value, I don't need end , so I'll throw an exception if it is not null , I don't need end , but will try to somehow use it, I'll write lots of code which might be used later when end will be needed. But be aware that your colleague may be right. All previous points are based on the fact that refactoring is easy, so adding an argument later won't require much effort. But this is an interface, and as an interface, it may be used by several teams contributing to other parts of your product. This means that changing an interface could be particularly painful, in which case, YAGNI doesn't really apply here. The answer by h.j.k. gives a good solution: adding a method to an already used interface is not particularly hard, but in some cases, it has a substantial cost too: Some frameworks don't support overloads. For example and if I remember well (correct me if I'm wrong), .NET's WCF doesn't support overloads. If the interface has many concrete implementations, adding a method to the interface would require going through all the implementations and adding the method there too.
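A minimal Java sketch of what the leaner interface could look like today, together with the overload route mentioned at the end of the answer. The default-method detail is an assumption about one way to evolve it later (Java 8+), not something prescribed by the original answer; FooType is the placeholder type from the question.

import java.util.Date;
import java.util.List;

class FooType {} // placeholder so the sketch compiles

// Today: only the parameters that are actually used.
interface IEventGetter {
    List<FooType> getFooList(String fooName, Date start) throws Exception;

    // If an end date becomes necessary later, one option (Java 8+) is to add an
    // overload as a default method, so existing implementations keep compiling:
    //
    // default List<FooType> getFooList(String fooName, Date start, Date end)
    //         throws Exception {
    //     /* filter the result of getFooList(fooName, start) by end */
    // }
}

Whether that overload route is cheap depends, as the answer notes, on how many implementations and frameworks already depend on the interface.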
{ "source": [ "https://softwareengineering.stackexchange.com/questions/249572", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/123404/" ] }
249,767
This macro can be defined in some global header, or better, as a compiler command line parameter: #define me (*this) And here is a usage example: some_header.h: inline void Update() { /* ... */ } main.cpp: #include "some_header.h" class A { public: void SetX(int x) { me.x = x; me.Update(); } void SomeOtherFunction() { ::Update(); } /* 100 or more lines ... */ void Update() { /* ... */ } int x; }; So in a class method, when I access a class member I always use me, and when accessing a global identifier I always use ::. This gives a reader who is not familiar with the code (probably myself after a few months) localized information about what is accessed, without the need to look somewhere else. I want to define me because I find using this-> everywhere too noisy and ugly. But can #define me (*this) be considered a good C++ practice? Are there some practical problems with the me macro? And if you, as a C++ programmer, were the reader of some code using the me macro, would you like it or not? Edit: Because many people are arguing not specifically against using me, but generally against explicit this, I think it may not be clear what the benefits of "explicit this everywhere" are. What are the benefits of "explicit this everywhere"? As a reader of the code you have certainty about what is accessed, and you can concentrate on other things instead of verifying - in some distant code - that what is accessed is really what you think is accessed. You can use the search function more specifically: searching for "this->x" can give you more relevant results than searching only for "x". When you are deleting or renaming some member, the compiler reliably notifies you of the places where this member is used. (Some global function can have the same name, and there is a chance you can introduce an error if you are not using explicit this.) When you are refactoring code and turning a member function into a non-member function (to get better encapsulation), explicit this shows you the places you must edit, and you can easily replace this with a pointer to the class instance given as a non-member function parameter. Generally, when you are changing code, there are more possibilities for errors when you are not using explicit this than when you use explicit this everywhere. Explicit this is less noisy than an explicit „m_“ when you are accessing a member from outside (object.member vs object.m_member) (thanks to @Kaz for spotting this point). Explicit this solves the problem universally for all members, attributes and methods alike, whereas „m_“ or another prefix is practically usable only for attributes. I would like to polish and extend this list; tell me if you know about other advantages and use cases for explicit this everywhere.
No, it is not. Mind the programmer who will maintain your code several years from now long after you've left for greener pastures and follow common conventions of the language you use. In C++, you almost never have to write this , because the class is included in symbol resolution order and when a symbol is found in class scope, this-> is implied. So just don't write it like everybody does. If you often get confused which symbols come from class scope, the usual approach is using common naming pattern for members (fields and sometimes private methods; I haven't seen it used for public methods). Common ones include suffixing with _ or prefixing with m_ or m .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/249767", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124070/" ] }
249,892
I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I got offered a job to build a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, the high cost of development charged by a service company, and to have someone young he can trust onboard to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer thinking I can build anything. I was calling the shots. After some research I settled on PHP, and started with plain PHP, no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner, more organized. It looked very good. But again the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practice. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code . This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code, a real pain to fix a bug, all the logic is in the controllers, and there is little object oriented design. I'm having this persistent thought that I have to rewrite the whole project. However, I can't do it... They keep asking when is it going to be all done. I can not imagine this code deployed on a server. Plus I still know nothing about code efficiency and the web application's performance. On one hand, the company is waiting for the product and can not wait anymore. On the other hand I can't see myself going any further with the actual code. I could finish up, wrap it up and deploy, but god only knows what might happen when people start using it. Do I rewrite, or just keep trying to ship, or is there another option that I've missed?
You have stumbled on the achilles heel of most CS educations: they teach you the tools and techniques, but not the trade. Building software is a craft, one which you only acquire through years of practice and the experience of having your software used (users are much harsher critics than teachers). Building software is also quite often a business, one where the business goals may override the technical ambitions. First of all, ship. If you show the business owner the software, and they feel it's ready to ship, then ship. If it's not to that point, but close, finish it. The only software that matters is that which is actually used. The only software business which earns money is one which has a product. Secondly, you have learned a lot of valuable things, so you should appreciate the experience for what it has taught you : Slinging code without a plan or architecture is a recipe for disaster There is much more to programming than writing code Non-technical business owners often do not understand the impact of technical decisions (like who to hire), and it is up to the developers to explain things to them. Most problems are already solved much better than you would solve them, in existing frameworks. It pays to know the frameworks that exist and when to use them. People fresh out of school assigned to a big project with little guidance tend to produce a bowl of spaghetti code. This is normal. Here is some more advice for you on how to proceed: Communicate, communicate, communicate. You must be very open and frank about the state of the project and your ideas on how to proceed, even if you're unsure and see multiple paths. This leaves the business owner the choice on what to do. If you keep knowledge to yourself, you deprive them of choices. Resist the temptation of the full rewrite. While you are rewriting, the business has no product. Also, a rewrite rarely turns out as good as you imagined it. Instead choose an architecture and migrate the codebase to it gradually. Even a horrible codebase can be salvaged this way. Read books about refactoring to help you along. Learn about automated testing / unit testing . You have to build up confidence in the code, and the way to do that is to cover it with automated tests. This goes hand-in-hand with refactoring. As long as you don't have the tests, test manually and comprehensively (try to break your code, because your users will do so). Log all the bugs you find so you can prioritize and fix them (you won't have time to fix all bugs, no software ships bug-free in the real world). Learn about how to deploy a web application and keep it running. The book Web Operations: Keeping the Data On Time is a good start.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/249892", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/142698/" ] }
249,933
For every a and b which are non-const pointers of the same type, you can do a = b;, right? Inside non-const member functions the this keyword exists, which is a non-const pointer. So logically, if b is the same type as this, you can also do this = b;, right? Wrong. You cannot do this = b;, because this uses pointer syntax but logically this is a reference! But why on earth is this syntactically a pointer but logically a reference? Can this weird behavior be corrected in the next C++ standard, by introducing a new keyword, for example me, which will be a reference not only logically but also syntactically? (See also my attempt to solve this here: "Is it a good idea to "#define me (*this)"?")
this is (like nullptr ) a constant pointer ; the pointed data is const if and only if this appears in the body of a const member function. You cannot change a constant pointer, like you cannot change a constant literal like 23 . So assignment to this like this = p; // WRONG is prohibited for the same reasons assignment to nullptr is forbidden: nullptr = 0; // wrong
{ "source": [ "https://softwareengineering.stackexchange.com/questions/249933", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124070/" ] }
250,035
I have just begun to learn Django/Python/Web Development. This problem has been troubling me for a while now. I am creating an application with multiple templates in Django. I have a views.py which basically just renders the responses to the respective templates, and I have a models.py where I have structured my DB. In one of my templates, I need to upload an image (which I am able to do) and I need to run some logic which is based on the features of the uploaded image (not yet done). This logic involves a lot of heavy calculations. After performing the calculations, the logic should return some processed information (coordinates) to the template. I have been able to do all these actions successfully in a standalone Python desktop application, calling Python files one after the other. However, since I now want to make this a web application, I have begun using the Django framework. I have done a lot of searching but I am still not able to figure out where exactly should I place this Python file containing all the logic. Should I have another class-based file (logic.py) and call it from views.py? I googled and found that many developers are placing their business logic in their models.py in Django. However, I feel it is intuitively not right since model should exclusively communicate with the back end. Any help would be appreciated. Thanks in advance.
I have done a lot of searching but I am still not able to figure out where exactly should I place this Python file containing all the logic. There are a number of options, depending on what your requirements are: Add the logic to e.g. the Image model. This is a useful option if you need to store per-image meta data in the database, and each model instance (each image) is processed by itself. Add the logic as a plain Python Image class, e.g. in a file called image.py . Nothing in Django restricts you from adding logic other than that in the views or models modules. This is a good option if the image logic is a central component of your Django app (e.g. a Image processing app). Create a separate Python project that provides the logic, then call it from your views. Make sure to install this project in your Django app's Python environment. This option is valid if the purpose of your Django app is to upload and view images, or to show the results of the image processing in direct response to a user's request, but where the image processing could be used by other projects too. Create a separate app that processes requests asynchronously and is run separately from your Django app. This option is useful if you need to decouple the image processing from the request cycle of the app, process large number of images, or where each calculation takes too much time to solve within a request cycle's time (say within at most 500ms to 1s). I feel it is intuitively not right since model should exclusively communicate with the back end. There is nothing in Django that requires a model to communicate with the back end, or rather the database. I think you are mixing the semantics of what Django typically considers a model (namely, an abstraction of one or several tables in the database), v.s. the term model as a design construct (e.g. as in Domain Driven Design).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250035", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139231/" ] }
250,073
I'm learning DDD and yet I have more questions than answers. Let's consider a model of a directory containing an enormous number of files. Here is how I see it: Directory is an Aggregate root. This entity should have the validation logic for checking file name uniqueness when a file is added or just renamed. And the File entity contains the 'SetName' logic, notifying Directory via a Domain Event about name changes. But how should Directory then work? It is not always possible to load all files into memory. Should the Files repository in this case have ad hoc logic for checking name uniqueness? I suppose that is a viable decision. However, what if some files have already been added or renamed within the current, not yet committed transaction? (Nothing prohibits that; transaction boundaries are set externally in relation to the business logic.) Probably the repository should take into account both in-memory and persisted states (merging these states can be a nontrivial task). So, when the aggregate root with all its children fits in memory, everything is fine. But as soon as you cannot materialize all the entities, there is trouble. I'd like to know what the approaches for such situations are. Maybe there is no problem at all and it is just my misunderstanding of the subject.
My answer is biased by Vaughn Vernon's great book Implementing Domain Driven Design (a must read). 1. Favor small aggregates. If I were to model your domain, I would model Directory as one aggregate and File as another aggregate. 2. Reference aggregates by ids. Therefore Directory will have a collection of FileId value objects. 3. Use factories to create aggregates. For a simple case a factory method may be enough: Directory.addFile(FileName fileName). However, for more complex cases I would use a domain factory. The domain factory could validate that the fileName is unique using a FileRepository and a UniquefileNameValidator infrastructure service. Why model File as a separate aggregate? Because Directories aren't made of Files; a File is associated with a certain Directory. Also, think of a directory that has thousands of files. Loading all these objects into memory each time a directory is fetched is a performance killer. Model your aggregates according to your use cases. If you know that there will never be more than 2-3 files in a directory then you can model them all as a single aggregate, but in my experience business rules change all the time, and it pays if your model is flexible enough to accommodate the changes. Obligatory read: Effective Aggregate Design by Vaughn Vernon.
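A minimal Java sketch of points 1-3 above, using records for the value objects (Java 16+). The FileId/FileName types, the FileRepository port, and the FileFactory are invented for illustration; they are not taken from Vernon's book, just one way the answer's advice could look in code.

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Value objects (illustrative).
record FileId(UUID value) {}
record FileName(String value) {}

// Directory aggregate: holds only FileId references, not whole File aggregates.
class Directory {
    private final List<FileId> fileIds = new ArrayList<>();

    void addFile(FileId id) {
        fileIds.add(id);
    }
}

// Port used by the factory to check uniqueness without loading every File.
interface FileRepository {
    boolean existsWithName(FileName name);
    FileId nextId();
}

// Domain factory: enforces the file-name uniqueness invariant at creation time.
class FileFactory {
    private final FileRepository files;

    FileFactory(FileRepository files) {
        this.files = files;
    }

    FileId createFileIn(Directory directory, FileName name) {
        if (files.existsWithName(name)) {
            throw new IllegalArgumentException("File name already used: " + name.value());
        }
        FileId id = files.nextId();
        directory.addFile(id);
        return id;
    }
}

Loading a Directory then touches only lightweight FileId values, which is the answer's point about not materializing thousands of File aggregates just to fetch a directory.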
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250073", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/71962/" ] }
250,283
A friend of mine is working for a small company on a project every developer would hate: he's pressured to release as quickly as possible, he's the only one who seems to care about technical debt, the customer has no technical background, etc. He told me a story which made me think about the appropriateness of design patterns in projects like this one. Here's the story. We had to display products at different places on the website. For example, content managers could view the products, but also the end users or the partners through the API. Sometimes, information was missing from the products: for example, a bunch of them didn't have any price because the product had just been created and the price wasn't specified yet. Some didn't have a description (the description being a complex object with modification histories, localized content, etc.). Some were lacking shipment information. Inspired by my recent readings about design patterns, I thought this was an excellent opportunity to use the magical Null Object pattern . So I did it, and everything was smooth and clean. One just had to call product.Price.ToString("c") to display the price, or product.Description.Current to show the description; no conditional stuff required. Until, one day, the stakeholder asked to display it differently in the API, by having a null in JSON. And also differently for content managers by showing "Price unspecified [Change]". And I had to murder my beloved Null Object pattern, because there was no need for it any longer. In the same way, I had to remove a few abstract factories and a few builders, and I ended up replacing my beautiful Facade pattern with direct and ugly calls, because the underlying interfaces changed twice per day for three months, and even the Singleton left me when the requirements dictated that the object concerned had to be different depending on the context. More than three weeks of work consisted of adding design patterns, then tearing them apart one month later, and my code finally became spaghetti enough to be impossible to maintain by anyone, including myself. Wouldn't it be better to never use those patterns in the first place? Indeed, I have had to work myself on those types of projects where the requirements are changing constantly, and are dictated by persons who don't really have in mind the cohesion or the coherence of the product. In this context, it doesn't matter how agile you are: you'll come up with an elegant solution to a problem, and when you finally implement it, you learn that the requirements have changed so drastically that your elegant solution no longer fits. What would be the solution in this case? Not use any design patterns, stop thinking, and write code directly? It would be interesting to do an experiment where one team is writing code directly, while another one is thinking twice before typing, taking the risk of having to throw away the original design a few days later: who knows, maybe both teams would have the same technical debt. In the absence of such data, I would only assert that it doesn't feel right to type code without prior thinking when working on a 20 man-month project. Keep the design pattern which doesn't make sense any longer, and try to add more patterns for the newly created situation? This doesn't seem right either. Patterns are used to simplify the understanding of the code; use too many patterns, and the code will become a mess. Start thinking of a new design which encompasses the new requirements, then slowly refactor the old design into the new one?
As a theoretician and someone who favors Agile, I'm totally into it. In practice, when you know that you'll have to get back to the whiteboard every week and redo a large part of the previous design, and that the customer just doesn't have enough funds to pay you for that, nor enough time to wait, this probably won't work. So, any suggestions?
I see some wrong assumptions in this question: "code with design patterns, though applied correctly, needs more time to be implemented than code without those patterns." Design patterns are not an end in themselves; they should serve you, not vice versa. If a design pattern does not make the code easier to implement, or at least better evolvable (that means: easier to adapt to changing requirements), then the pattern misses its purpose. Don't apply patterns when they don't make "life" easier for the team. If the new Null Object pattern was serving your friend for the time he used it, then everything was fine. If it was to be eliminated later, then this could also be ok. If the Null Object pattern slowed the (correct) implementation down, then its usage was wrong. Note that from this part of the story one cannot conclude any cause of "spaghetti code" so far. "The customer is to blame because he has no technical background and does not care about cohesion or the coherence of the product." That is neither his job nor his fault! Your job is to care about cohesion and coherence. When the requirements change twice a day, your solution should not be to sacrifice the code quality. Just tell the customer how long it takes, and if you think you need more time for getting the design "right", then add a big enough safety margin to any estimation. Especially when you have a customer trying to pressure you, use the "Scotty Principle" . And when arguing with a non-technical customer about the effort, avoid terms like "refactoring", "unit tests", "design patterns" or "code documentation" - those are things he does not understand and probably regards as "unnecessary nonsense" because he sees no value in them. Always talk about things which are visible or at least understandable to the customer (features, sub-features, behaviour changes, user docs, bug fixes, performance optimization, and so on). "The solution for rapidly changing requirements is to rapidly change the code." Honestly, if "underlying interfaces change twice per day for three months", then the solution should not be to react by changing the code twice a day. The real solution is to ask why the requirements change so often and whether it is possible to change that part of the process. Maybe some more upfront analysis will help. Maybe the interface is too broad because the boundary between components is chosen wrongly. Sometimes it helps to ask for more information regarding which parts of the requirements are stable and which are still under discussion (and actually postpone the implementation of things under discussion). And sometimes some people just have to be "kicked in their asses" so they stop changing their minds twice a day.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250283", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6605/" ] }
250,481
Is it a bad practice to give two very different files with the same general purpose the same name, separating them into different directories? <script src="client_scripts/app/player_stats/generator.js"></script> <script src="client_scripts/app/coach_settings/generator.js"></script> I'd like to keep my file names short, and both of the files have the same general purpose without being identical. I'm not sure whether or not this would be considered a bad practice in a professional programming environment. I'd like to know what the best practice is in this situation. Alternatively, at the expense of the name's short length, I could use: <script src="client_scripts/app/player_stats/player_stats_generator.js"></script> <script src="client_scripts/app/coach_settings/coach_settings_generator.js"></script>
Consider the cost/benefit ratio of your two options: Would reusing the same name cause confusion or naming conflicts? Probably not, since they're in different folders. The name "player_stats/generator.js" is equivalent to "player_stats_generator.js". However, if you see, in the future, a reason to merge your js files into a single directory (deployment? I dunno), then this should be a good indicator to give unique names. Would using the longer names involve a lot of extraneous typing? Probably not. Not only do many JS IDEs autocomplete filenames in the project for you, it's also a piece of code that's probably only written - at most - once per file. The code that gets typed a lot is the classes and functions inside the js files, and those (hopefully) don't conflict. When debugging, what sort of information do you get about an error? If the most common bug report is "Error in line 34 of <filename.js> ", then consider giving them unique names, since receiving errors in just generator.js and then trying to divine, through context, which generator it was can be a hassle.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250481", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139797/" ] }
250,540
I'm working in a Java role after working a couple of years in functional programming. Our company was bought by Google and I took a Java role after the acquisition. Coming back to Java as a polyglot developer, I'm generating getters for immutable objects and I'm seeing that it's total hogwash. Are there any reasons why the 'getField' convention should be used so prolifically? To me it seems almost horrifying at this point that so many libraries expect public getter methods to work with their functionality, when simply making a field public and final would have the same effect as providing only a getter on a private mutable field. Why isn't it more of a common practice to ditch the getters and setters and just expose a final field?
For flexibility -- this has to do with what happens when you change the class later. If foo.x is a publicly accessible member (even a final one), and you decide that you no longer need that member, code which accessed that member is now broken. If you have only provided a getter, then you can always provide a compatibility version of that getter which computes the value as needed, and code which worked before still works. For example, suppose we are writing a Point class, and we decide to go from a Cartesian (x, y) representation to a polar (angle, magnitude) representation for internally storing the point's position. Code which looks for a member named x is now broken -- but I can always provide a method named getX() which computes the X coordinate of the point and returns it.
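As a minimal Java sketch of that last paragraph (the class is illustrative, not taken from any particular codebase): the stored fields change to polar form, but callers that were written against getX() never notice.

    // Internally polar; externally still answers getX()/getY() as before.
    public final class Point {
        private final double angle;      // radians
        private final double magnitude;

        public Point(double angle, double magnitude) {
            this.angle = angle;
            this.magnitude = magnitude;
        }

        // Computed on demand; old callers of getX() keep compiling and working.
        public double getX() { return magnitude * Math.cos(angle); }
        public double getY() { return magnitude * Math.sin(angle); }
    }

Had x been a public field instead, every caller touching point.x would have had to change when the representation did.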
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250540", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/102174/" ] }
250,635
I have been taught that shifting in binary is much more efficient than multiplying by 2^k. So I wanted to experiment, and I used the following code to test this out:

    #include <time.h>
    #include <stdio.h>

    int main()
    {
        clock_t launch = clock();
        int test = 0x01;
        int runs;

        // simple loop that oscillates between int 1 and int 2
        for (runs = 0; runs < 100000000; runs++)
        {
            // I first compiled + ran it a few times with this:
            test *= 2;
            // then I recompiled + ran it a few times with:
            test <<= 1;

            // set back to 1 each time
            test >>= 1;
        }

        clock_t done = clock();
        double diff = (done - launch);
        printf("%f\n", diff);
    }

For both versions, the print out was approximately 440000, give or take 10000. There was no (visually, at least) significant difference between the two versions' outputs. So my question is, is there something wrong with my methodology? Should there even be a visual difference? Does this have something to do with the architecture of my computer, the compiler, or something else?
As said in the other answer, most compilers will automatically optimize multiplications to be done with bitshifts. This is a very general rule when optimizing: most 'optimizations' will actually misguide the compiler about what you really mean, and might even hurt performance. Only optimize when you have noticed a performance problem and measured what the problem is. (And most code we write doesn't get executed that often, so we don't need to bother.) The big downside to optimizing is that the 'optimized' code is often much less readable. So in your case, always go for multiplication when you are looking to multiply. And go for bit shifting when you want to move bits.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250635", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143516/" ] }
250,653
While I was writing a private helper method in Java, I needed to add a check for a precondition that would cause the method to do nothing if not met. The last few lines of the method were just off the bottom of the editing area in the IDE, so in an epic display of laziness, I wrote the check like this:

    function(){
        if (precondition == false){
            return;
        }
        //Do stuff
    }

as opposed to finding the end of the block to write a more "traditional" statement like this:

    function(){
        if (precondition == true){
            //Do stuff
        }
    }

This made me wonder: is there a reason to avoid my version in favor of the other, or is it just a stylistic difference? (Assuming they are used in such a way that they are meant to be functionally equivalent.)
Those are called Guard Clauses. They're actually desirable. They are one of several techniques that can be used to decompose complex nested if conditions into simpler (and easier to follow) logic. As an example, this code:

    public Foo merge (Foo a, Foo b) {
        if (a == null) return b;
        if (b == null) return a;

        // complicated merge code goes here.
    }

replaces the more complicated version that has a single exit point:

    public Foo merge (Foo a, Foo b) {
        Foo result;
        if (a != null) {
            if (b != null) {
                // complicated merge code goes here.
            } else {
                result = a;
            }
        } else {
            result = b;
        }
        return result;
    }

http://c2.com/cgi/wiki?GuardClause
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250653", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/101176/" ] }
250,707
I was working on a project three months back, and then suddenly another urgent project appeared and I was asked to shift my attention. Starting tomorrow, I'll be heading back to the old project. I realize that I do not remember what exactly I was doing. I don't know where to begin. How can I document a project so that, any time I come back to it, it takes me no more than a few minutes to get going from wherever I left off? Are there best practices?
To-do lists are magic. Generally you need to keep an active to-do list for each project and even while you're busy programming, if you think of something that has to be done and you can't do it immediately, then it goes on the list. Keep this list in a well-known place, either in a spreadsheet or text file in the project folder electronically, or in your paper logbook. Also, whenever you leave the project for overnight (or over the weekend), take a post-it note and write the next thing you were going to do on the note, and stick it to the monitor. That makes it more likely you'll get back into it quickly the next morning. Edit : I should mention that to-do lists (specifically prioritized to-do lists segregated by venue and project) are a key part of the Getting Things Done book, which I found highly influential.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250707", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23355/" ] }
250,764
I am currently working on a Ruby on Rails project which shows a list of images. A must-have for this project is that it shows new posts in realtime, without the need to refresh the web page. After searching for a while, I've stumbled upon some JavaScript solutions and services such as PubNub; however, none of the provided solutions made sense at all. In the JavaScript solution ( polling ) the following happens: User 1 views the list of photos. In the background the JavaScript code is polling an endpoint every second to see if there is a new post. User 2 adds a new photo. There is a delay of 50 ms before the new cycle is triggered and fetches the new data. The new content is loaded in the DOM . This seems odd when translated to a real world example: User 1 holds a pile of pictures on his/her desk. He/she walks to the photographer every second and asks if he has a new one. The photographer makes a new photo. The next time he/she walks in, he/she can take the picture and put it on the pile. In my opinion the solution should be as follows: User 1 holds a pile of pictures on his/her desk. The photographer takes a new picture. The photographer walks to the pile and puts it with the rest. The PubNub solution is basically the same, except this time there is an intern walking between the parties to share the data. Needless to say, both solutions are very energy consuming as they are triggered even when there is no data to load. As far as my knowledge goes there is no (logical) explanation why this way of implementing things is used in almost every realtime application.
Pushing works well for one, or a limited number of, users. Now change the scenario to one photographer and 1000 users who all want a copy of the picture. The photographer will have to walk to 1000 piles. Some of them might be in a locked office, or spread all over the floor. Or their user is on vacation, and not interested in new pictures at the moment. The photographer would be busy walking all the time and not take new pictures. Fundamentally: a pull/poll model scales better to lots of unreliable readers with loose realtime requirements (if a picture arrives on a pile 10 seconds late, what's the big deal?). That said, a push model is still better in a lot of situations. If you need low latency (you need that new photo 5s after it's taken), or updates are rare and requests frequent and predictable (keep asking the photographer every 10 seconds when he generates a new picture a day), then pulling is inappropriate. It depends on what you're trying to do. NASDAQ: push. Weather service: pull. Wedding photographer: probably pull. News photo agency: probably push.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250764", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143641/" ] }
250,811
I ran into an interesting theoretical problem a number of years ago. I never found a solution, and it continues to haunt me when I sleep. Suppose you have a (C#) application that holds some number in an int, called x. (The value of x is not fixed). When the program is run, x is multiplied by 33 and then written to a file. Basic source code looks like this: int x = getSomeInt(); x = x * 33; file.WriteLine(x); // Writes x to the file in decimal format Some years later, you discover that you need the original values of X back. Some calculations are simple: Just divide the number in the file by 33. However, in other cases, X is large enough that the multiplication caused an integer overflow. According to the docs , C# will truncate the high-order bits until the number is less than int.MaxValue . Is it possible, in this case, to either: Recover X itself or Recover a list of possible values for X? It seems to me (though my logic could certainly be flawed) that one or both should be possible, since the simpler case of addition works (Essentially if you add 10 to X and it wraps, you can subtract 10 and wind up with X again) and multiplication is simply repeated addition. Also helping (I believe) is the fact that X is multiplied by the same value in all cases - a constant 33. This has been dancing around my skull at odd moments for years. It'll occur to me, I'll spend some time trying to think through it, and then I'll forget about it for a few months. I'm tired of chasing this problem! Can anyone offer insight? (Side note: I really don't know how to tag this one. Suggestions welcome.) Edit: Let me clarify that if I can get a list of possible values for X, there are other tests I could do to help me narrow it down to the original value.
Multiply by 1041204193. When the result of a multiplication doesn't fit in an int, you won't get the exact result, but you will get a number equivalent to the exact result modulo 2**32 . That means that if the number you multiplied by was coprime to 2**32 (which just means it has to be odd), you can multiply by its multiplicative inverse to get your number back. Wolfram Alpha or the extended Euclidean algorithm can tell us 33's multiplicative inverse modulo 2**32 is 1041204193. So, multiply by 1041204193, and you have the original x back. If we had, say, 60 instead of 33, we wouldn't be able to recover the original number, but we would be able to narrow it down to a few possibilities. By factoring 60 into 4*15, computing the inverse of 15 mod 2**32, and multiplying by that, we can recover 4 times the original number, leaving only 2 high-order bits of the number to brute-force. Wolfram Alpha gives us 4008636143 for the inverse, which doesn't fit in an int, but that's okay. We just find a number equivalent to 4008636143 mod 2**32, or force it into an int anyway to have the compiler do that for us, and the result will also be an inverse of 15 mod 2**32. ( We get -286331153. )
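Since the arithmetic is just "multiply, and the result wraps modulo 2**32", this is easy to check for yourself. Here is a small sketch in Java, purely as an illustration: Java's int is also 32-bit two's complement, so its multiplication wraps exactly the same way as the unchecked C# multiplication in the question.

    public class RecoverX {
        public static void main(String[] args) {
            int original = 123_456_789;        // large enough that * 33 overflows
            int stored = original * 33;        // wraps modulo 2**32, like the value written to the file

            // 1041204193 is the multiplicative inverse of 33 modulo 2**32,
            // because 33 * 1041204193 = 34359738369 = 8 * 2**32 + 1.
            int recovered = stored * 1041204193;

            System.out.println(recovered == original);   // prints: true
        }
    }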
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250811", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143698/" ] }
250,914
In C#, the following code is valid interface I{ int property{get;set;} } Which doesn't make any sense to me. This seems to break one of the most important principles of interfaces: lack of state (in other words, no fields). Doesn't the property create an implicit private field? Wouldn't that be really bad for interfaces?
I think the confusing part is that if you write int Property { get; set; } inside a class, then it's an auto-property with an implicit backing field. But if you write exactly the same thing in an interface, then it's not an auto-property; it just declares that the property is part of the interface and that any type that implements the interface has to contain that property (as an auto-property or not), but it doesn't create the backing field. One way to see the difference is to write int Property { get; } : this is valid in an interface and declares a property that has only a getter, but no setter. But it won't compile in a class (unless you're using C# 6.0), because an auto-property has to have a setter.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250914", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/136927/" ] }
250,991
It seems that thread-safety is always/often mentioned as the main benefit of using immutable types and especially collections. I have a situation where I would like to make sure that a method will not modify a dictionary of strings (which are immutable in C#). I’d like to constrain things as much as possible. However I am not sure whether adding a dependency to a new package (Microsoft Immutable Collections) is worth it. Performance is not a big issue either. So I guess my question is whether immutable collections are strongly advised for when there are no hard performance requirements and there are no thread safety issues? Consider that value semantics (as in my example) might or might not be a requirement as well.
Immutability reduces the amount of information you need to track mentally when reading code later. For mutable variables, and especially mutable class members, it's very hard to know what state they will be in at the specific line you're reading, without running through the code with a debugger. Immutable data is easy to reason about - it will always be the same. If you want to change it, you need to make a new value. I would honestly prefer making things immutable by default, and then changing them to mutable where it's proven that they need to be, whether this means you need the performance, or an algorithm you have doesn't make sense for immutability.
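For the concrete case in the question (a dictionary of strings that a method must not modify), the idea can be sketched without any extra package. The following Java sketch is only an illustration of the pattern; in C#, the Microsoft Immutable Collections package plays the role of the wrapper class below:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Minimal immutable wrapper around a map of strings; all names are illustrative.
    public final class Settings {
        private final Map<String, String> values;

        public Settings(Map<String, String> values) {
            // Defensive copy + unmodifiable view: callers can't mutate our state later.
            this.values = Collections.unmodifiableMap(new HashMap<>(values));
        }

        public String get(String key) {
            return values.get(key);
        }

        // "Changing" a value produces a new Settings; the original can be passed around freely.
        public Settings with(String key, String value) {
            Map<String, String> copy = new HashMap<>(values);
            copy.put(key, value);
            return new Settings(copy);
        }
    }

Any method you hand a Settings to simply cannot modify it, which is exactly the guarantee the question is after, with no thread-safety argument needed.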
{ "source": [ "https://softwareengineering.stackexchange.com/questions/250991", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13154/" ] }
251,021
Here are some examples of temporary/local code. It is needed in order to work with the codebase, but it would be harmful for it to become part of it: Project files. Paths may need to be edited in order to reflect the layout on the current PC. Makefiles. For example, optimization may need to be turned off during debugging, but not for the CI server. Dirty ugly hacks. For example, a return 7 in the middle of a function, in order to test something that depends on the function and is suspected to break at the value 7. Or 7 is the code of the not-yet-implemented button that I am implementing and need to test throughout the life of my branch. I have tried keeping those in a git commit that I always rebase to the top before pushing to the repo and then push HEAD~ . This is quite inconvenient and doesn't work with svn. Stashing scares me even more - "did I remember to pop after pushing??". Keeping the code out of version control introduces unpleasant noise every time a commit is being assembled, plus it might accidentally get introduced into a commit some Friday evening. What would be a sane solution for such throw-away code?
All code is temporary. When I'm making changes I will introduce placeholders occasionally - that icon that I drew while waiting for the real one from the designer, the function I know will call the library that my colleague is writing and hasn't yet finished (or started), the extra logging that will be removed or otherwise made conditional, the bugs that I will get around to fixing once they've been noticed by the test team, etc. So check everything in. Use a feature branch for all your development; then you can merge the final version into trunk and no-one will need to know what hacks and bodges and fixes you made during your development cycle, they'll only need to see the final version. But if you've committed to your branch regularly, you'll be able to see the things that were worth keeping if a day went spectacularly wrong, or you continued coding after a lunchtime down the pub. Version control is not an artifact repository or document storage system. It's about holding the history of changes. Stick everything you like in there because one day you might want to see what it was, and those are the days you realise what your SCM is truly about. PS. Truly temporary files (e.g. .obj files or build artifacts) have no place in your SCM. These are things that have no value to anyone. You can tell what they are - if you delete them you don't mind, or even notice, that they're gone.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251021", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54268/" ] }
251,110
One of the things I run into often is problems caused by programs which don't conform to ISO standards. One example would be not using the ISO country tables but making up their own shorthands, which goes okay for the United States (US), or the Netherlands (NL) but goes spectacularly wrong for the United Kingdom (GB, not UK) or Spain (ES, not SP) and a lot of other countries. As another example, internal date notations. Why would anyone ever store a date as 01/02/2014? It is completely unclear whether that is 1st February or January 2nd, whereas if you use the ISO standard you just store 2014-02-01* and it's unambiguously February 1st. My question: When and why should a programmer make up their own constructs when there is an ISO standard available? * Store 2014-02-01, and format the date accordingly when showing it to an end user.
Never attribute to malice that which is adequately explained by stupidity. -- Robert J Hanlon . That, and a lack of communication. So, it's not a conspiracy of anti-ISO sentiment making people think "I know, I'll use UK instead of GB", nor is it an inclination that "they know better", or even a sense that the standard is no good. It'll be entirely because they just don't know it is there, and they should use it. I mean, for some people, if it's not bundled into Visual Studio , it might as well not exist. For some others, maybe they just don't want the full set or it's too difficult to fetch the definitive list, so they just make up their own sub-set to solve their immediate situation. For others, the default is what gets used - so date formatting isn't "formatted in ISO, or even country locale", it's "formatted in whatever comes out" and if that suits them, then it's job done (this is usually a criticism of American programmers).
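A practical footnote to the "if it's not bundled, it might as well not exist" point: for the two examples in the question, the standards usually are bundled with the platform already. A small Java illustration (the same facilities exist in most mainstream standard libraries):

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.util.Locale;

    public class IsoExamples {
        public static void main(String[] args) {
            // ISO 8601 dates come for free; no home-grown 01/02/2014 format needed.
            LocalDate date = LocalDate.of(2014, 2, 1);
            System.out.println(date.format(DateTimeFormatter.ISO_LOCAL_DATE)); // 2014-02-01

            // ISO 3166 country codes are already in the locale data.
            Locale uk = Locale.UK;
            System.out.println(uk.getCountry());     // GB  (alpha-2)
            System.out.println(uk.getISO3Country()); // GBR (alpha-3)
        }
    }

So making up "UK" or "SP" is usually more work than just asking the standard library.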
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251110", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51367/" ] }
251,250
If I write a C program and compile it to an .exe file, the .exe file contains raw machine instructions for the CPU (I think). If so, how is it possible for me to run the compiled file on any computer that runs a modern version of Windows? Each family of CPUs has a different instruction set. So how come any computer that runs the appropriate OS can understand the instructions in my .exe file, regardless of its physical CPU? Also, on the "download" page of many applications' websites, you often have a download for Windows, for Linux, and for Mac (often two downloads for each OS, for 32-bit (x86) and 64-bit computers). Why aren't there many more downloads, for each family of CPUs?
Executables do depend on both the OS and the CPU: Instruction Set: The binary instructions in the executable are decoded by the CPU according to some instruction set. Most consumer CPUs support the x86 (“32bit”) and/or AMD64 (“64bit”) instruction sets. A program can be compiled for either of these instruction sets, but not both. There are extensions to these instruction sets; support for these can be queried at runtime. Such extensions offer SIMD support, for example. Optimizing compilers might try to take advantage of these extensions if they are present, but usually also offer a code path that works without any extensions. Binary Format: The executable has to conform to a certain binary format, which allows the operating system to correctly load, initialize, and start the program. Windows mainly uses the Portable Executable format, while Linux uses ELF. System APIs: The program may be using libraries, which have to be present on the executing system. If a program uses functions from Windows APIs, it can't be run on Linux. In the Unix world, the central operating system APIs have been standardized as POSIX: a program using only the POSIX functions will be able to run on any conformant Unix system, such as Mac OS X and Solaris. So if two systems offer the same system APIs and libraries, run on the same instruction set, and use the same binary format, then a program compiled for one system will also run on the other. However, there are ways to achieve more compatibility: Systems running on the AMD64 instruction set will commonly also run x86 executables. The binary format indicates which mode to run in. Handling both 32bit and 64bit programs requires additional effort by the operating system. Some binary formats allow a file to contain multiple versions of a program, compiled for different instruction sets. Such “fat binaries” were encouraged by Apple while they were transitioning from the PowerPC architecture to x86. Some programs are not compiled to machine code, but to some intermediate representation. This is then translated on the fly to actual instructions, or might be interpreted. This makes a program independent from the specific architecture. Such a strategy was used on the UCSD p-System. One operating system can support multiple binary formats. Windows is quite backwards compatible and still supports formats from the DOS era. On Linux, Wine allows the Windows formats to be loaded. The APIs of one operating system can be reimplemented for another host OS. On Windows, Cygwin and the POSIX subsystem can be used to get a (mostly) POSIX-compliant environment. On Linux, Wine reimplements many of the Windows APIs. Cross-platform libraries allow a program to be independent of the OS APIs. Many programming languages have standard libraries that try to achieve this, e.g. Java and C. An emulator simulates a different system by parsing the foreign binary format, interpreting the instructions, and offering a reimplementation of all required APIs. Emulators are commonly used to run old Nintendo games on a modern PC.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251250", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
251,349
Generally speaking, is it better to build all the functional parts first or get the UI working first - or a mix of the two? Assuming you're working on something large, is it generally accepted practice to get all the functional data harvesting blobs working before any UI, get all the UI working one piece at a time as you go, or something in the middle? We all know to break work down into manageable pieces, but the question is ultimately whether the UI is included in those manageable pieces, I suppose. For the case of the example, consider a GUI application with one root window, but over a dozen tabs in various docks to separate different data components. Each individual tab has a relatively complex set of moving parts behind it from a functional-units perspective. The example application in this particular question is here with the accompanying blog and the original commercial product .
There is a general conception amongst many business users and clients that when it looks complete, it is almost complete. As you likely know, this is far from the truth. One can have it looking nice but with no backend, and some users think that making it look nice is 80% of the work, not 20% (or the other 80%). Countless developers can tell horror stories of this - getting a mockup of the pages done in Microsoft Word using screen shots of some other tool, and the client saying "so, you've almost got it done?" You need to pace it so that all parts are done when it's done. Trying to do all the backend first and then all the front end will lead to the end user thinking you're not doing anything and asking why you are getting paid when there is nothing to show for it. On the other hand, front end first and you'll find the end user making niggling changes and consuming all your time. The worst case with a 'one first and then the other' approach is when you get to the other part, you find that it doesn't fit the design at all. Thus, build both. Show progress in the front end, make the back end work with what you are building. In many cases it's a good idea to deliver incremental builds and make sure you're making what the client wants (this gets into Agile). Going too long without 'visible advances' can hurt the client relationship (this is for both cases of 'everything looks done early' and 'nothing is done until the very end' - it's hard for the client to see the framework being written or the unit tests or data sanitization as progress). Joel wrote about this in The Iceberg Secret, Revealed : Important Corollary Two. If you show a nonprogrammer a screen which has a user interface which is 100% beautiful, they will think the program is almost done. People who aren't programmers are just looking at the screen and seeing some pixels. And if the pixels look like they make up a program which does something, they think "oh, gosh, how much harder could it be to make it actually work?" The big risk here is that if you mock up the UI first, presumably so you can get some conversations going with the customer, then everybody's going to think you're almost done. And then when you spend the next year working "under the covers," so to speak, nobody will really see what you're doing and they'll think it's nothing. This is again reiterated in the blog post Don't make the Demo look Done , which has a helpful graph; the two options in it generally reflect 'get the UI done' (and then the expectation is that you'll be done soon) and 'get the backend done' (and then the customer is worried about you missing the deadline). How 'done' something looks should match how 'done' something is. Every software developer has experienced this many times in their career. But desktop publishing tools lead to the same headache for tech writers--if you show someone a rough draft that's perfectly fonted and formatted, they see it as more done than you'd like. We need a match between where we are and where others perceive we are. This article also brings up an important point about the type of feedback you get with different levels of doneness of the user interface. If you have something that looks complete, you're more likely to get feedback about "could you change the font" than "this layout doesn't work - there are too many tabs." For those who are fighting with this in the Java Swing world, there's a look and feel called Napkin which makes it so that the UI doesn't look complete (even if it is).
The key here is to make it so that it doesn't look done. Having the UI look complete is a signal to many business users that the application is complete (even if it's just a few static pages without any logic behind them, or something built in an interface builder). Further reading (and links from the article): The Iceberg Secret, Revealed Don't make the Demo look Done Napkin Look & Feel for Swing Why xkcd-style graphs are important
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251349", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4534/" ] }
251,431
The first compiler was written by Grace Hopper in 1952, while the Lisp interpreter was written in 1958 by John McCarthy's student Steve Russell. Writing a compiler seems like a much harder problem than an interpreter. If that is so, why was the first compiler written six years before the first interpreter?
Writing a compiler seems like a much harder problem than an interpreter. That might be true today, but I would argue that it was not the case some 60 years ago. A few reasons why: With an interpreter, you have to keep both it and the program in memory. In an age where 1kb of memory was a massive luxury, keeping the running memory footprint low was key. And interpreting requires a bit more memory than running a compiled program. Modern CPUs are extremely complex with huge catalogs of instructions. So writing a good compiler is truly a challenge. Old CPUs were much simpler, so even compilation was simpler. Modern languages are much more complex than old languages, so even compilers are much more complex. Old languages would thus have simpler compilers.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251431", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/144309/" ] }
251,443
Background: I'm trying to decide the best way to manage a change in requirements. Currently, when an admin is assigned to a ticket, the ticket header record is given an owner and the ownerID field is updated with the value of the id which corresponds to the record in the TeamMembers table. The owner (team member) is notified when a user (customer) updates the ticket with a comment. I now need to add additional team members to the ticket so they are also notified. Question: Do I... Make the ownerID field ownerIDs and store a comma-separated list of IDs in a string? Add a new field additionalOwnerIDs and store any extra IDs after the main owner as a comma-separated string? Create a new table that simply links ticketID to assigned owners and have one row per owner? Do something else? Considerations: I want to deliver the new functionality as quickly as possible. Repurposing the existing field might seem like the simplest way forward, but it would involve changing the schema type of existing data, so it would need a migration script to rewrite hundreds of thousands of rows. Adding a new field wouldn't need as much migration work, as it would be an empty string for all existing records and default to an empty string for new records. A new link table seems the most programmatically correct way (IMHO) but would require the most time to implement and test, and the migrations would be the most complicated in terms of extracting existing ticket owners into the new table, changing the code for retrieving/updating an owner to query the new table, and then removing the defunct ownerID column from the existing table. Have I missed anything? Which approach is the best in terms of maintaining the code base in the future?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251443", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41799/" ] }
251,587
During a meeting regarding the rollback of a third-party SDK from the latest version, it was noted that our developers had already flagged in the commit history that the latest version should not be used. Some developers argued that this was a bad practice and that it should instead have been noted either in the source file (e.g. // Don't upgrade SDK Version x.y.z, see ticket 1234 ) or in a project-level README file. Others argued that since the commit history is part of the project documentation, it is an acceptable location for such information, since we should all be reading it anyway. Should the commit history be used to convey critical information to other developers, or should such information be duplicated to another location such as a project README or comments in the relevant source file?
If I was going to look at upgrading to a newer version of a third party SDK, the last place I'd look is in the history of the source control system. If your product is using version 2.0 of an SDK and someone is interested in upgrading to 3.0, I don't think it's reasonable to think that they should look backwards in time in your source control system to find out that it's not a good idea. Here, we have a team wiki that has a handful of pages with interesting info that every developer reads (coding conventions, how to set up a development environment to build the product, what third party stuff you need to install, etc). This is the sort of place that would be appropriate for a warning against upgrading a third party library.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251587", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2471/" ] }
251,615
In one of many anti-OOP rants on cat-v.org I found a passage by Joe Armstrong raising several objections against the OOP model, one of which was the following: Objection 4 – Objects have private state State is the root of all evil. In particular functions with side effects should be avoided. While state in programming languages is undesirable, in the real world state abounds. I am highly interested in the state of my bank account, and when I deposit or withdraw money from my bank I expect the state of my bank account to be correctly updated. Given that state exists in the real world what facilities should programming languages provide for dealing with state? OOPLs say “hide the state from the programmer”. The state is hidden and visible only through access functions. Conventional programming languages (C, Pascal) say that the visibility of state variables is controlled by the scope rules of the language. Pure declarative languages say that there is no state. The global state of the system is carried into all functions and comes out from all functions. Mechanisms like monads (for FPLs) and DCGs (logic languages) are used to hide state from the programmer so they can program “as if state didn’t matter” but have full access to the state of the system should this be necessary. The “hide the state from the programmer” option chosen by OOPLs is the worst possible choice. Instead of revealing the state and trying to find ways to minimise the nuisance of state, they hide it away. What exactly is meant by this? I have very little low-level or procedural experience, mostly OOP, so that probably explains how unfamiliar with this I am. And from a more modern standpoint, now that most of the Object-Oriented hysteria has passed (at least as far as I can tell), how accurate/relevant do you guys think that passage is? Thanks for your help.
"What exactly is meant by this?" In this context, it means that OOP obscures the state of a program by hiding it away from the programmer but still making it visible via (leaky) accessor methods. The argument is that by obscuring the state, it makes the state more difficult to work with, and by extension leads to more bugs. "How accurate/relevant do you guys think that passage is?" I feel that it is relevant, but misdirected. There is absolutely an issue if your class leaks the concept of its state to the outside world. There is absolutely an issue if your class tries to obscure the state when it should be manipulated by the outside world. That, though, is not a failing of OOP as a whole, but of the individual design of the class. I wouldn't say that hiding state itself is an issue - monads do this in functional programming all the time. In the best of cases, OOP works like the best mix of functional and procedural programming - people can use the class "as if state didn't matter" because the private state used to protect the invariants of the class is hidden, while having the freedom to use a well-defined public state of the class which abstracts away the details. Does OOP make it harder for people to achieve this best of cases? Possibly. "Java schools" and the whole Shape/Circle/Rectangle or Car-has-Wheels school of teaching OO are probably more to blame than the approach itself. Modern OOP takes quite a bit from functional programming, encouraging immutable objects and pure functions while discouraging inheritance, to help make it easier to design classes that work well.
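A small Java sketch of the distinction being drawn here, reusing Armstrong's own bank-account example (the class names and methods are purely illustrative):

    // "Hidden" state that leaks anyway: the accessors expose the raw state, and every
    // invariant has to live in the callers.
    class LeakyAccount {
        private long balanceCents;
        long getBalanceCents() { return balanceCents; }
        void setBalanceCents(long b) { balanceCents = b; }
    }

    // State changed only through operations that preserve the invariants; the internal
    // representation is genuinely private, but the relevant state is still observable.
    class Account {
        private long balanceCents;

        long balanceCents() { return balanceCents; }

        void deposit(long cents) {
            if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
            balanceCents += cents;
        }

        void withdraw(long cents) {
            if (cents <= 0 || cents > balanceCents) throw new IllegalArgumentException("invalid withdrawal");
            balanceCents -= cents;
        }
    }

The first class is the leaky case described above: the state is nominally hidden but effectively public and unconstrained. The second is the case where hiding state actually buys you something.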
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251615", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/144512/" ] }
251,707
Over the last few years, the trend for client-side (browser) applications has really taken off. For my latest project, I have decided to try and move with the times and write a client-side application. Part of this application involves sending transactional emails to users (for example, signup validation emails, password reset emails, etc.). I am using a third-party API to send the emails. Normally I would have my application running on a server. I would call the third-party API from code on my server. Running a client-side application means this now needs to happen in a user's browser. The third-party API provides the necessary JavaScript files to achieve this. The first glaring issue I can see is that I need to use an API key. This would normally be safely stored on my server, but now presumably I will need to provide this key to the client browser. Assuming I can get around this problem, the next problem is: what stops a tech-savvy user from loading up the browser's JavaScript developer tools and using the email API any way they like, rather than, say, adhering to any rules I have set in the application? I guess my general question is - how can we prevent malicious use of a client-side application?
You can't, and the more people understand this, and the deeper they understand, the better for the world. Code that runs on a device under the user's control cannot be controlled. Smartphones can be jailbroken. Set-top boxes can be cracked. Ordinary browsers don't even attempt to prevent access to JavaScript code. If you have something worth stealing or abusing, a determined attacker will be able to do that unless you validate everything you cherish server-side. Obfuscation is of very little help; the kind of opponent you will attract as soon as anything remotely financial is involved reads assembly language like classified ads. Encryption cannot help you, because the device that would guard the key is the same device you have to assume is cracked. There are many other, seemingly-obvious countermeasures that don't work, for similar reasons. Unfortunately, this is a very inconvenient truth. The world is full of small-time and big-time operators who think they can somehow get around the fundamental brokenness of remote trust, simply because it would be oh so nice if we could assume that our code will be run the way we assumed. And yes, it would make everything so much easier that it isn't even funny. But wishing doesn't make it so, and hoping against hope that you are the one smart cookie who can avoid the unpleasantness will only burn you and your clients. Therefore, get into the mindset that the Internet is enemy territory, include that added cost in your estimates, and you'll be fine. That said, of course there is such a thing as defense in depth. Obfuscating your JavaScript doesn't put off a determined attacker, but it may put off some less-determined attackers. If your assets are worth enough to protect, but not at any cost, any of those measures may add business value to your system; it just can't be perfect. As long as you are fully aware of the trade-off you are making, that may be a reasonable strategy.
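For the email scenario in the question, the usual shape of that "validate server-side" advice is a thin server endpoint that owns the API key and the rules, with the browser only allowed to ask for well-defined actions. The sketch below is in Java, and every name in it (EmailGateway, SignupEmailService, and so on) is a hypothetical stand-in, not a real vendor SDK:

    import java.util.Set;

    // Abstraction over the third-party email API; constructed server-side with the secret key.
    interface EmailGateway {
        void send(String toAddress, String subject, String body);
    }

    final class SignupEmailService {
        private final EmailGateway gateway;
        private final Set<String> pendingSignups;   // server-side record of who actually signed up

        SignupEmailService(EmailGateway gateway, Set<String> pendingSignups) {
            this.gateway = gateway;
            this.pendingSignups = pendingSignups;
        }

        // Called by a server endpoint that the browser hits; the browser never sees the API key.
        void sendValidationEmail(String emailAddress) {
            // Never trust the browser's claim that an email should go out; check our own state.
            if (!pendingSignups.contains(emailAddress)) {
                throw new IllegalArgumentException("No pending signup for " + emailAddress);
            }
            gateway.send(emailAddress, "Please confirm your account",
                    "Use the confirmation link we generated for you.");
        }
    }

The client-side code can stay as rich as you like; it just never holds anything worth stealing, and nothing it sends is believed without a server-side check.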
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251707", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74944/" ] }
251,912
I just recently started learning C++, and like most people (according to what I have been reading) I'm struggling with pointers. Not in the traditional sense: I understand what they are, why they are used, and how they can be useful. However, I can't understand how incrementing pointers would be useful. Can anyone explain how incrementing a pointer is a useful concept and idiomatic C++? This question came up after I started reading A Tour of C++ by Bjarne Stroustrup. I was recommended this book because I'm quite familiar with Java, and the guys over at Reddit told me that it would be a good 'switchover' book.
When you have an array, you can set up a pointer to point to an element of the array: int a[10]; int *p = &a[0]; Here p points to the first element of a , which is a[0] . Now you can increment the pointer to point to the next element: p++; Now p points to the second element, a[1] . You can access the element here using *p . This is different from Java where you would have to use an integer index variable to access elements of an array. Incrementing a pointer in C++ where that pointer does not point to an element of an array is undefined behaviour .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251912", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/144829/" ] }
251,963
I am a programmer with a number of years of experience. I have realized I have a certain habit, and I'm not sure whether it's really a bad habit or not. I get a list of tasks to perform for a solution, even very small tasks, for example: change the resources of this user control, change the size of another one, add some HTML and code to another user control. All of these tasks are small - I mean they can be done within 10 minutes - but I have a habit of making small changes and then testing them again and again in a web browser. Is this a good practice? Or should I perform them all at once and then test them together? If it is really a bad habit, then how do I rectify it, since it feels like a waste of time to test small changes over and over?
It's a good practice. You are following the scientific method. If you change several things before any testing, then the testing of each will be more difficult, and perhaps not reliable, since preconditions will be more difficult to prepare and the different changes can interact with each other in ways you didn't foresee. The time you feel you are "wasting" now, you will regain later in the integration, testing and maintenance stages. Way to go.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251963", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/196874/" ] }
251,993
I'm learning about unit tests, and have a question about a test I want to write for a method that implements an "AND" logic gate:

    A B A^B
    0 0  0
    0 1  0
    1 0  0
    1 1  1

How can I test a method that works like an AND gate? Is this what a mock object is for, or a stub? Thanks. Please provide pseudocode.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/251993", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/17948/" ] }
252,053
Often open-source software projects have a folder called "contrib". For example, Django has one . What is it for?
It is for software that has been contributed to the project, but which might not actually be maintained by the core developers. Naming it "contrib" or "Contrib" is a long-established convention, but there's really nothing special about the name, and it's usually only used by fairly large projects.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252053", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15681/" ] }
252,077
In Java specifically, but likely in other languages as well: when would it be useful to have two references to the same object? Example:

    Dog a = new Dog();
    Dog b = a;

Is there a situation where this would be useful? Why would this be a preferred solution to using a whenever you want to interact with the object represented by a ?
An example is when you want to have the same object in two separate lists:

    Dog myDog = new Dog();

    List<Dog> dogsWithRabies = new ArrayList<>();
    List<Dog> dogsThatCanPlayPiano = new ArrayList<>();

    dogsWithRabies.add(myDog);
    dogsThatCanPlayPiano.add(myDog);
    // Now each list has a reference to the same dog

Another use is when you have the same object playing several roles:

    Person p = new Person("Bruce Wayne");
    Person batman = p;
    Person ceoOfWayneIndustries = p;
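To see why the aliasing matters, here is a small, self-contained illustration (the Dog class and its name property are made up for the example): because both variables refer to the same object, a change made through one reference is visible through the other.

    class Dog {
        private String name;
        void setName(String name) { this.name = name; }
        String getName() { return name; }
    }

    public class Aliasing {
        public static void main(String[] args) {
            Dog a = new Dog();
            Dog b = a;                        // b and a refer to the same Dog object

            a.setName("Rex");                 // change it through one reference...
            System.out.println(b.getName());  // ...and the other sees it: prints Rex
        }
    }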
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252077", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/145002/" ] }
252,224
My team is developing a web-based finance application, and there was a bit of an argument with a colleague about where to keep the calculations - purely in the back-end, or keep some in the front-end too? Brief explanation: We are using Java (ZK, Spring) for the front-end and Progress 4GL for the back-end. Calculations that involve some hardcore math and data from the database are kept in the back-end, so I'm not talking about them. I'm talking about the situation where the user enters value X, it is then added to the value Y (shown on screen) and the result is shown in the field Z. Pure and simple jQuery-ish operations, I mean. So what would be the best practice here: 1) Add the values with JavaScript, which saves a round trip to the back-end, and then validate them at the back-end "on save"? 2) Keep all the business logic in the same place - therefore bring the values to the back-end and do the calculations there? 3) Do the calculations in the front-end, then send the data to the back-end, validate it there, do the calculations again, and only if the results are valid and equal, display them to the user? 4) Something else? Note: We do some basic validation in Java, but most of it is still in the back-end, like all the other business logic. The increase in data sent by recalculating everything in the back-end wouldn't be a problem (small XML size; servers and bandwidth would withstand the increased number of operations done by users).
As always, such decisions involve a trade-off between different goals, some of which conflict with each other. Efficiency would suggest that you perform calculations in the front-end - both because that shifts computation load from your server onto the user's computer, and because the user sees faster feedback, which improves the user experience. Security demands that any state-changing operations cannot rely on data being checked or computed on the client computer, because the client computer may be under the control of a malicious attacker. Therefore, you must validate anything that comes from untrusted sources server-side. Programming efficiency and maintainability suggest that you shouldn't do the same computation twice because of the wasted effort. Superficially this sounds as if everything has to be done server-side, but that isn't always the case. If you can easily maintain the duplicated code (e.g. by auto-generating a JavaScript validator from your server-side Java validator), then repeating the computation can be a good solution. If the data involved are all unimportant, e.g. if the user could cheat only themselves and not you if they manipulate values, then server-side validation isn't necessary. If your response time is dominated by completely different bottlenecks so that a round-trip delay is not perceptible, then UX considerations aren't decisive, etc. Therefore you have to consider how strong each of these pressures is in your situation, and decide accordingly.
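To make the "validate it server-side anyway" point concrete, here is a minimal Java sketch of the back-end part (the class, field and method names are invented for illustration, not taken from the poster's system): the server recomputes Z from X and Y and refuses to trust the client's arithmetic, while the front-end remains free to show its own instant result for responsiveness.

import java.math.BigDecimal;

// Hypothetical request DTO: X entered by the user, Y already shown on screen,
// clientZ is the sum the browser displayed for instant feedback.
class SumRequest {
    BigDecimal x;
    BigDecimal y;
    BigDecimal clientZ;
}

class SumService {
    // On save, the back-end recomputes Z itself and never trusts the client's arithmetic.
    BigDecimal save(SumRequest request) {
        BigDecimal serverZ = request.x.add(request.y);
        if (request.clientZ != null && serverZ.compareTo(request.clientZ) != 0) {
            // Either a client-side bug or a tampered request; reject instead of persisting it.
            throw new IllegalArgumentException("client total does not match server total");
        }
        return serverZ; // persist serverZ, not clientZ
    }
}

The client-side sum stays purely a UX convenience; only the value the server computed is ever persisted.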
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252224", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124735/" ] }
252,262
I am somewhat new to C# and just found out that in C# all of the fields and methods in a class are private by default. Meaning that this: class MyClass { string myString; } is the same as: class MyClass { private string myString; } So, because they are the same, should I ever use the keyword private on fields and methods? Many online code samples use the keyword private in their code; why is that?
Just because you can omit linefeeds and indentations and your C# compiler will still understand what you want to tell it, it's not automatically a good idea to do that. using System; namespace HelloWorld{class Hello{static void Main(){Console.WriteLine ("Hello World!");}}} is much less readable to the human reader than: using System; namespace HelloWorld { class Hello { static void Main() { Console.WriteLine("Hello World!"); } } } And that's the whole point here—programs are written for two audiences: The compiler or interpreter. Yourself and other programmers. The latter is the one who needs to understand and quickly grasp the meaning of what is written. Because of this there are a few good practices that have emerged over the years and are more or less commonly accepted quasi-standards. One of them is putting explicit scope keywords, another example is putting extra parentheses around complex expressions, even in cases where they would not be needed syntactically, like: (2 + 5) & 4
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252262", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/145150/" ] }
252,294
So I am just getting started with .NET Web API, and one thing that I am noticing straight away is that there is no contract defining how the API looks and should be consumed (the requests/responses for each action); for WCF/SOAP this usually takes the form of a WSDL. It seems to me that this is something that would be very valuable and would make life a lot easier for consumers of your API. Is there a reason there isn't one? Is there a programming paradigm or principle that I am unaware of? Is there a way I could create one?
SOAP, REST AND PEOPLE'S CREATIVITY SOAP needs a description document like WSDL because each resource can be consumed with different messages; the protocol places no constraints on the names/messages you can use to manipulate a resource. For example, in SOAP a web service that allows clients to manipulate a user can expose the operation that creates a user under many different messages, like: addUser createUser insertUser Of course, these are just a few sample messages, because I've seen a lot of funny web service method names. There are really creative people out there. On the other hand, if you are exposing your underlying system through a web API that really respects the REST principles, the client just needs to know that you have a resource named Users, because there is a 99% chance that you can create a user this way: POST /Users And the same goes for each operation you want to expose, whether using SOAP or a REST web API. SOAP is a protocol, which restricts what you can or cannot do, while REST is an architectural style, which leaves many points open as to how to do things. There are efforts to define conventions for how to expose and consume REST web APIs. DESCRIBING A REST WEB API In the field of describing a REST web API I can cite Swagger. It is not an attempt to create a WSDL equivalent for REST web APIs, but it is a good attempt to create an open standard for describing them. Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. I use Swagger a lot and really love it, mainly because of Swagger UI, which lets you generate a nice live console and documentation for your web API. There are implementations of Swagger for most languages: C#, Java, Python, Ruby, etc. If you are using ASP.NET Web API, there are some projects to auto-generate the Swagger specification, like Swagger.NET GENERATING CLIENTS FOR A REST WEB API Because of the constraints of REST, like the limited set of verbs (GET, POST, PUT, DELETE, etc.), it is not so difficult to generate a client library for a REST web API. Projects like WebApiProxy can easily generate clients for C# and JavaScript. CONVENTIONS FOR A REST WEB API To make our lives as developers easier, it is good to define some conventions for how our REST web API will behave; the best effort I know of in this field is the very good Apigee - Web Api Design ebook. The e-book is not an attempt to create a bible or a mantra about how to design your API, but rather a collection of conventions observed in large REST web APIs, like Twitter, Facebook, LinkedIn, Google, etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252294", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41242/" ] }
252,532
Specifically, I'm writing in JavaScript. Let's say my primary function is Function A. If Function A makes several calls to Function B, but Function B is not used anywhere else, then should I just place Function B within Function A? Is that good practice? Or should I still put Function B at the same scope as Function A?
I'm generally in favor of nested functions, especially in JavaScript. In JavaScript, the only way to limit a function's visibility is by nesting it inside another function. The helper function is a private implementation detail. Putting it at the same scope is akin to making a class' private functions public. If it turns out to be of more general use, it's easy to move out, because you can be confident the helper is currently only used by that one function. In other words, it makes your code more cohesive. You can often eliminate parameters, which makes the function signature and implementation less verbose. This is not the same thing as making variables global. It's more like using a class member in a private method. You can give the helper functions better, simpler names, without worrying about conflicts. The helper function is easier to find. Yes, there are tools that can help you. It's still easier to just move your eyes up the screen a bit. Code naturally forms a sort of tree of abstraction: a few general functions at the root, branching out into several implementation details. If you use an editor with function folding/collapsing, nesting creates a hierarchy of closely-related functions at the same level of abstraction. That makes it very easy to study the code at the level you need and hide the details. I think a lot of opposition comes from the fact that most programmers were either brought up in a C/C++/Java tradition, or were taught by someone else who was. Nested functions don't look natural because we weren't exposed to them much when we were learning to program. That doesn't mean they aren't useful.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252532", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/132016/" ] }
252,732
We have a new (quite big) project starting that we planned to develop using TDD. The idea of TDD failed (for many business and non-business reasons), but right now we are having a conversation - should we write unit tests anyway, or not? My friend says that there is no (or close to zero) sense in writing unit tests without TDD, and that we should focus only on integration tests. I believe the opposite: there is still some sense in writing plain unit tests, just to make the code more future-proof. What do you think? Added: I think that this is not a duplicate of >>this question<< - I understand the difference between unit tests and TDD. My question is not about the differences, but about the sense of writing unit tests without TDD.
TDD is used mainly (1) to ensure coverage, and (2) to drive maintainable, understandable, testable design. If you don't use TDD, you don't get guaranteed code coverage. But that in no way means that you should abandon that goal and blithely live on with 0% coverage. Regression tests were invented for a reason. The reason is that in the long run, they save you more time in prevented errors than they take in additional effort to write. This has been proven over and over again. Therefore, unless you are seriously convinced that your organization is much, much better at software engineering than all the gurus who recommend regression testing (or if you plan on going down very soon so that there is no long run for you), yes, you should absolutely have unit tests, for exactly the reason that applies to virtually every other organization in the world: because they catch errors earlier than integration tests do, and that will save you money. Not writing them is like passing up free money just lying around in the street.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252732", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/106019/" ] }
252,748
This is something that's been troubling me for a while now. Is it actually worth unit-testing an API client? Let's say you're creating a small class to abstract-away the calls to a petshop REST API. The petshop is a very simple API, and it has a basic set of methods: listProducts() getProductDetails(ProductID) addProduct(...) removeProduct(ProductID) In testing this, we'd have to either create a mock service or mock the responses. But that seems overkill; I understand that we want to make sure that our methods don't stop working through typo/syntax errors, but since we're writing functions that call remote methods and then we're creating fake responses from those remote methods, it does seem like a waste of effort and that we're testing something that can't really fail. Worse, if the remote method changes our unit tests will pass while production use fails. I'm pretty sure I'm missing something, or I've got the wrong end of the stick, or I'm not seeing the wood for the trees. Can someone set me on the right track?
The job of a remote API client is to issue certain calls - no more, no less. Therefore, its test should verify that it issues those calls - no more, no less. Sure, if the API provider changes the semantics of their responses, then your system will fail in production. But that isn't your client class's fault; it's something that can only be caught in integration tests. By relying on code not under your control you have given up the ability to verify correctness via internal testing - it was a trade-off, and this is the price. That said, testing a class that consists only of delegations to another class may be low-priority, because there is comparatively little risk of complex errors. But that goes for any class that consists only of uniform one-liners, it has nothing to do with calling out into another vendor's code.
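To illustrate "its test should verify that it issues those calls", here is a minimal Java sketch. The PetShopClient, the HttpTransport seam and the /products path are all assumptions made up for this example (no particular HTTP or mocking library is implied); the point is only that the fake transport records what the client asked for, and the test asserts exactly that - no more, no less.

import java.util.ArrayList;
import java.util.List;

// Hypothetical seam between the client and the network.
interface HttpTransport {
    String get(String path);
}

class PetShopClient {
    private final HttpTransport http;
    PetShopClient(HttpTransport http) { this.http = http; }

    // The client's whole job: build the right request and hand back the payload.
    String getProductDetails(String productId) {
        return http.get("/products/" + productId);
    }
}

class PetShopClientTest {
    public static void main(String[] args) {
        List<String> issued = new ArrayList<>();
        // The fake transport records each path and returns a canned payload.
        HttpTransport fake = path -> { issued.add(path); return "{\"id\":\"42\"}"; };

        new PetShopClient(fake).getProductDetails("42");

        // Assert which call was issued - no more, no less.
        if (issued.size() != 1 || !issued.get(0).equals("/products/42")) {
            throw new AssertionError("expected GET /products/42 but saw " + issued);
        }
        System.out.println("ok");
    }
}

Whether the remote service still honours that contract is the job of a separate integration test, not of this unit test.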
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252748", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23059/" ] }
252,938
Imagine a long and complicated process, which is started by calling function foo(). There are several consecutive steps in this process, each of them depending on the result of the previous step. The function itself is, say, about 250 lines. It is highly unlikely that these steps would be useful on their own, not as a part of this whole process. Languages such as JavaScript allow creating inner functions inside a parent function, which are inaccessible to outer functions (unless they are passed as a parameter). A colleague of mine suggests that we can split up the content of foo() into 5 inner functions, each function being a step. These functions are inaccessible to outer functions (signifying their uselessness outside of the parent function). The main body of foo() simply calls these inner functions one by one: foo(stuff) { var barred = bar(stuff); var bazzed = baz(barred); var confabulated = confabulate(bazzed); var reticulated = reticulate(confabulated); var spliced = splice(reticulated); return spliced; // function definitions follow ... } Is this an anti-pattern of some sort? Or is this a reasonable way to split up long functions? Another question is: is this acceptable when using the OOP paradigm in JavaScript? Is it OK to split an object's method this way, or does this warrant another object, which contains all these inner functions as private methods? See also: Is it OK to split long functions and methods into smaller ones even though they won't be called by anything else? - a previous question of mine, which leads into this one. @KilianFoth's answer to a very similar question which provides a different perspective compared to the answers given here.
Inner functions are not an anti-pattern, they are a feature. If it doesn't make sense to move the inner functions outside, then by all means, don't. On the other hand, it would be a good idea to move them outside so you can unit test them more easily. (I don't know if any framework lets you test inner functions.) When you have a function with 250+ lines, and you make any changes in the logic, are you sure you're not breaking anything? You really need unit tests there, and it won't be feasible with one giant function. Splitting up long functions into smaller ones is a good idea in general. A common refactoring technique, called "extract method", is to pull a block of code out into a function named after the comment that used to describe it. So yes, do it! As @Kilian pointed out, see also this related post: Should I place functions that are only used in one other function, within that function?
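Since "extract method" is language-agnostic, here is a minimal sketch of the idea in Java (the invoice example and all names are invented for illustration): each commented step in the long method becomes a small, separately testable method, which is exactly what you would do with the inner functions if you later pull them out.

class InvoiceCalculator {
    // Before: one long method with comments marking the steps (heavily condensed here).
    static double invoiceTotal(double[] prices, double taxRate) {
        // sum the line items
        double subtotal = 0;
        for (double p : prices) subtotal += p;
        // apply tax
        return subtotal * (1 + taxRate);
    }

    // After "extract method": the body reads like a list of steps, each testable on its own.
    static double invoiceTotalRefactored(double[] prices, double taxRate) {
        return applyTax(sumLineItems(prices), taxRate);
    }

    static double sumLineItems(double[] prices) {
        double subtotal = 0;
        for (double p : prices) subtotal += p;
        return subtotal;
    }

    static double applyTax(double subtotal, double taxRate) {
        return subtotal * (1 + taxRate);
    }
}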
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252938", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/89095/" ] }
252,955
I have a Java factory method with a varargs array of Object s at the end. The array can contain any combination of String s and ScaledJpeg s. The theory being that an HTML table cell can contain any number of text nodes or <image> nodes and it will line-wrap the images as though they were just funny-shaped words. // constructor public Cell(TextStyle ts, float w, CellStyle cs, Object... r) { if (w < 0) { throw new IllegalArgumentException("A cell cannot have a negative width"); } if (r != null) { for (Object o : r) { if ( (o != null) && !(o instanceof String) && !(o instanceof ScaledJpeg) ) { throw new IllegalArgumentException(INVALID_ROW_TYPE_STR); } } } textStyle = ts; width = w; cellStyle = cs; rows = r; avgCharsForWidth = (int) ((width * 1220) / textStyle.avgCharWidth()); } In Scala, I might use a list of Either[String,ScaledJpeg] . But some day I'll probably make this method allow something else, say another nested Cell , but doing so would break existing code that only expected two possible types. At least the way it is, future changes won't break existing code. The current solution works, allows for expansion, and is relatively easy to use, except that it defeats type safety and can only throw an exception at runtime if someone passes something unexpected as a vararg. For instance, I just forgot to add toString() to a StringBuilder and it blew up at runtime. So already that's a third class to account for (Since toString() is defined on every object, I don't want to call it on whatever is passed because doing so for most object types would be an error much further down in the code violating the fail-fast behavior of this example). java.lang.String can't implement any new interfaces and that's probably a good thing. Even if I could make a wrapper class, everything about the way text and images are treated by this class is completely different except that they are line-wrapped together the way they would be in HTML (or in a Word doc). As I write this, it occurs to me that I should probably use a Builder pattern, make a CellBuilder class that has addText() and addImage() methods that does away with the varargs constructor. This would encapsulate the underlying List in a way that the user of the API gets 100% compile-time type safety. But out of curiosity, I'm still posting this in case there are other creative and possibly better solutions out there.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252955", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62323/" ] }
252,977
I'm working on learning Haskell, and I've come across three different ways of dealing with errors in functions I write: I can simply write error "Some error message.", which throws an exception. I can have my function return Maybe SomeType, where I may or may not be able to return what I'd like to return. I can have my function return Either String SomeType, where I can return either an error message or what I was asked to return in the first place. My question is: Which method of dealing with errors should I use, and why? Maybe I should use different methods, depending on the context? My current understanding is: It's "difficult" to deal with exceptions in purely functional code, and in Haskell one wants to keep things as purely functional as possible. Returning Maybe SomeType is the proper thing to do if the function will either fail or succeed (i.e., there aren't different ways it can fail). Returning Either String SomeType is the proper thing to do if a function can fail in any one of various ways.
Alright, first rule of error handling in Haskell: Never use error . It's just terrible in every way. It exists purely as an act of history and the fact that Prelude uses it is terrible. Don't use it. The only conceivable time you could use it is when something is so internally terrible that something must be wrong with the very fabric of reality, thus rendering the outcome of your program moot. Now the question becomes Maybe vs Either . Maybe is nicely suited to something like head , which may or may not return a value but there is only one possible reason to fail. Nothing is saying something like "it broke, and you already know why". Some would say it indicates a partial function. The most robust form of error handling is Either + an error ADT. For example in one of my hobby compilers, I have something like data CompilerError = ParserError ParserError | TCError TCError ... | ImpossibleError String data ParserError = ParserError (Int, Int) String data TCError = CouldntUnify Ty Ty | MissingDefinition Name | InfiniteType Ty ... type ErrorM m = ExceptT CompilerError m -- from MTL Now I define a bunch of error types, nesting them so that I have one glorious top level error. This can be an error from any stage of compilation or an ImpossibleError , which signifies a compiler bug. Each of these error types try to keep as much information for as long as possible for pretty printing or other analysis. More importantly, by not having a string I can test that running an illtyped program through the type checker actually generates a unification error! Once something is a String , it's gone forever and any information it contained is opaque to the compiler/tests, so just Either String isn't great either. Finally I pack this type into ExceptT , a new monad transformer from MTL. This is essentially EitherT and comes with a nice batch of functions for throwing and catching errors in a pure, pleasant way. Finally, it's worth mentioning that Haskell has the mechanisms to support handling exceptions like other languages do, except that catching an exception lives in IO . I know some people like to use these for IO heavy applications where everything could potentially fail, but so infrequently that they don't like to think about it. Whether you use these impure exceptions or just ExceptT Error IO is really a matter of taste. Personally I opt for ExceptT because I like being reminded of the chance of failure. As a summary, Maybe - I can fail in one obvious way Either CustomType - I can fail, and I'll tell you what happened IO + exceptions - I sometimes fail. Check my docs to see what I throw when I do error - I hate you too user
{ "source": [ "https://softwareengineering.stackexchange.com/questions/252977", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/145852/" ] }
253,027
In No Silver Bullet , Fred Brooks makes a variety of predictions about the future of software engineering, best summed up by: There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity. His argument is very convincing. Brooks was writing in 1986: was he right? Do developers in 2014 produce software at a rate less than 10x faster than their counterparts in 1986? By some appropriate metric -- how large has the gain in productivity actually been?
Do developers in 2014 produce software at a rate less than 10x faster than their counterparts in 1986? I would imagine that there's been at least an order of magnitude improvement in productivity since then. But not by leveraging one single development, in either technology or in management technique. Increases in productivity have come about through a combination of factors (this is not a comprehensive list): better compilers, vastly more powerful computers, IntelliSense, object orientation, functional orientation, better memory management techniques, bounds checking, static code analysis, strong typing, unit testing, better programming language design, code generation, source code control systems, code reuse, and so on. All of these techniques combine to produce productivity gains; there isn't a single silver bullet that has ever produced an order-of-magnitude speedup. Note that some of these techniques have existed since the sixties, but have only seen widespread recognition and adoption recently.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253027", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100569/" ] }
253,090
One day I went to a Stack Overflow chat and saw a phrase stating that inheritance, encapsulation, and polymorphism are the pillars of OOP (in the sense that they are fundamental, a foundation). There's also a similar question that I have been asked very often in college exams and job interviews, and the right answer was always the statement pronounced in the title of this question ("Yes, inheritance, encapsulation, and polymorphism are the pillars of OOP"). But in the Stack Overflow chat I was severely ridiculed; participants strongly disagreed with such a statement. So, what's wrong with this statement? Do programmers get trained in different things in post-Soviet and United States colleges? Are inheritance, encapsulation, and polymorphism not considered to be the pillars of OOP by US/UK programmers?
Are inheritance, encapsulation and polymorphism not considered to be the pillars of OOP by US/UK programmers? They are considered to be pillars by many programmers, and many colleges teach OO that way. Unfortunately, it is also a shortsighted view. Inheritance is but one mechanism used to implement OOP and can be abused to not do OOP. Encapsulation is a concept, useful for programming of all sorts, OOP and not. Polymorphism is a... trait(?) to describe how computation behaves. There are many ways to achieve polymorphism, not all of which are OO specific. OOP has very little foundation, since in reality, it's very conceptual: "Approach the design of your program by thinking of things as objects - cohesive bundles of data and functionality." And while modern program design takes a poor view of doing things in a "purely OO fashion", most skilled programmers will agree that the SOLID principles (or some subset) are better candidates for the "pillars of Object Oriented Programming" (even though they apply well to non-OOP). These principles don't work in terms of inheritance, encapsulation, and polymorphism at all. Instead, they use the concepts of software entities (of which objects are one), interfaces (of which a C#/Java/etc. interface is one), abstraction, and sub-typing (of which inheritance is one form).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253090", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/97788/" ] }
253,126
I am a very solid Relational Database guy and understand all the way to 3rd normal form, appreciate the algebraic set theory roots of SQL, and can probably relationalize a broken heart (or not). I haven't figured out a relational database structure FOR date nights with my wife, but I HAVE thought about relational database projects ON date nights with my wife.. Now I'm hearing about NoSQL, and researching it. Cutting to the chase, is there anything about NoSQL that is ground breaking, mathematically novel, or a "hey you don't even really need to organize your data relationally, this is so much easier" type of approach? Is NoSQL like a super shell to the data structure? In my mind, data must ultimately have structure to be retrieved and the retrieval must be defined in a language of some sort.
NoSQL is more evolutionary than revolutionary. It essentially combines the existing ideas of "external database storage" with "using familiar data structures, not relational tables." There are more types of databases than relational, for example hierarchical databases. While archaic by today's standards, they meshed really well with the data structures of their data (e.g. COBOL records). The point is, the data in the database was modeled closely on how records were laid out in the programming languages that used them. Fast forward to the invention of relational databases, where the database finally separated concerns and, when properly normalized, is a great way to visualize most types of data and the relationships between data. It is really easy to understand compared to other types of databases. What it utterly fails at, however, is storing data in a way that mirrors objects and classes in a program. Hence, the invention of object-relational mapping. In other words, the design of the database is actually a hindrance to the design of the program that uses it, which is why we need ORM libraries such as Hibernate. While clean and consistent, there is always that nagging doubt in the back of my mind that something is not quite right there. This gave rise to two more types of databases, object databases and NoSQL. Both attempt to solve the issues introduced by relational databases while not exposing us to the mind-bending horrors of hierarchical databases. Data is still laid out in repositories that vaguely resemble tables, but in actuality are more like programming data structures than relational tables. While object databases follow mostly well-defined rules, my understanding is that NoSQL is rather arbitrary. For example, a table might be visualized as a hash table or an array. There is not an easy, well-defined way to query them using an arbitrary tool analogous to Oracle SQL Developer or SQL Server Management Studio. The idea is that one may define data structures that are easily searched in code, rather than piecing together SQL queries that are better suited to the SQL database engine than to expressing the query one desires. For example, fuzzy or partial matches are more difficult and perform worse in a relational database, while a NoSQL database may have a structure that is optimized for such a search and completes in a fraction of the time. There are languages for querying NoSQL. However, there is no universal language like SQL is for relational databases. Late Edit: While I am familiar enough with NoSQL databases, this question was the impetus for me to buy a quality book on the topic and to start reading it with the eventual goal of being a real expert on the topic. The remaining comments are based on NoSQL Distilled: a Brief Guide to the Emerging World of Polyglot Persistence by Pramod Sadalage and Martin Fowler. The authors state that relational databases do not scale well to clusters capable of serving the data needed for sites such as Amazon and Google: NoSQL was developed to fit this niche, relaxing the concurrency and durability in ACID in order to serve large numbers of queries that largely use static data (hence, ACID transactions are not as important). Furthermore, they posit that NoSQL databases operate without a schema (page 10), which allows them to modify the structure of the data more easily. I am not sure that the presence or absence of a formal schema matters in this regard, since SQL databases allow modifying schemas as well.
Regardless, the two renowned authors make the claim so it is worth examining. I believe that both of these main points serve only to enforce my primary point that NoSQL is evolutionary, not revolutionary. They still store data, and make incremental improvements to the scale and modifiability. They also make the point that NoSQL does not seek to usurp relational databases as the king of data storage, only to provide an alternative means of data storage for the types of data that need to scale and morph in a way that (they believe) relational databases do not support well enough.
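As a rough illustration of "more like programming data structures than relational tables", here is a small Java sketch (the customer/order shape is invented for this example): the document form keeps the whole aggregate together the way the program sees it, while the relational form normalizes it into flat rows that a query reassembles with a join.

import java.util.List;
import java.util.Map;

public class DocumentVsRows {
    public static void main(String[] args) {
        // Document style: the whole aggregate is stored together, shaped like in-memory data.
        Map<String, Object> customerDoc = Map.of(
            "id", 17,
            "name", "Ada",
            "orders", List.of(
                Map.of("orderId", 1, "total", 25.0),
                Map.of("orderId", 2, "total", 40.0)
            )
        );

        // Relational style: the same data normalized into flat rows linked by keys
        // (orderId, customerId, total), reassembled at query time with a join.
        List<List<Object>> customers = List.of(List.of(17, "Ada"));
        List<List<Object>> orders = List.of(
            List.of(1, 17, 25.0),
            List.of(2, 17, 40.0)
        );

        System.out.println(customerDoc);
        System.out.println(customers + " " + orders);
    }
}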
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253126", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139655/" ] }
253,254
I have seen the history of several C# and Java class library projects on GitHub and CodePlex, and I see a trend of switching to factory classes as opposed to direct object instantiation. Why should I use factory classes extensively? I have a pretty good library, where objects are created the old-fashioned way - by invoking the public constructors of classes. In the last commits the authors quickly changed all of the public constructors of thousands of classes to internal, and also created one huge factory class with thousands of CreateXXX static methods that just return new objects by calling the internal constructors of the classes. The external project API is broken - well done. Why would such a change be useful? What is the point of refactoring in this way? What are the benefits of replacing calls to public class constructors with static factory method calls? When should I use public constructors, and when should I use factories?
Factory classes are often implemented because they allow the project to follow the SOLID principles more closely - in particular, the interface segregation and dependency inversion principles. Factories and interfaces allow for a lot more long-term flexibility. They allow for a more decoupled - and therefore more testable - design. Here is a non-exhaustive list of why you might go down this path: it allows you to introduce an Inversion of Control (IoC) container easily; it makes your code more testable, as you can mock interfaces; and it gives you a lot more flexibility when it comes time to change the application (i.e. you can create new implementations without changing the dependent code). Consider this situation. Assembly A (-> means depends upon): Class A -> Class B, Class A -> Class C, Class B -> Class D. I want to move Class B to Assembly B, which is dependent on Assembly A. With these concrete dependencies I have to move most of my entire class hierarchy across. If I use interfaces, I can avoid much of the pain. Assembly A: Class A -> Interface IB, Class A -> Interface IC, Class B -> Interface IB, Class C -> Interface IC, Class B -> Interface ID, Class D -> Interface ID. I can now move Class B across to Assembly B with no pain whatsoever. It still depends on the interfaces in Assembly A. Using an IoC container to resolve your dependencies allows you even more flexibility. There is no need to update each call to the constructor whenever you change the dependencies of the class. Following the interface segregation principle and the dependency inversion principle allows us to build highly flexible, decoupled applications. Once you have worked on one of these types of applications you will never again want to go back to using the new keyword.
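Here is a minimal Java sketch of that decoupling (all class and interface names are hypothetical, mirroring the diagram above rather than any real code base): A depends only on the interface IB, the factory is the single place that knows which implementation to hand out, and a test can substitute a fake without touching A or the factory.

// Hypothetical example of the decoupling described above.
interface IB {
    void doWork();
}

class B implements IB {               // could live in another assembly/jar
    public void doWork() { System.out.println("real B"); }
}

class FakeB implements IB {           // a test double is just another implementation
    public void doWork() { /* record calls for assertions */ }
}

class A {
    private final IB b;
    A(IB b) { this.b = b; }           // A never calls "new B()" itself
    void run() { b.doWork(); }
}

class Factory {
    // Swapping implementations (or moving B elsewhere) only touches this one place.
    static A createA() { return new A(new B()); }
}

class Demo {
    public static void main(String[] args) {
        Factory.createA().run();      // production wiring
        new A(new FakeB()).run();     // test wiring, no factory change needed
    }
}

An IoC container essentially plays the role of Factory here, resolving IB to a concrete class from configuration instead of hand-written wiring.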
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253254", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134483/" ] }
253,306
I was explaining a proposed build system (Gradle/Artifactory/Jenkins/Chef) to one of our senior architects, and he made a comment to me that I sort of disagree with, but am not experienced enough to really weigh-in on. This project builds a Java library (JAR) as an artifact to be reused by other teams. For versioning, I'd like to use the semantic approach of: <major>.<minor>.<patch> Where patch indicates bug/emergency fixes, minor indicates backwards-compatible releases, and major indicates either massive refactorings of the API and/or backwards-incompatible changes. As far as delivery goes here is what I want: a developer commits some code; this triggers a build to a QA/TEST environment. Some tests are ran (some automated, some manual). If all tests pass, then a production build publishes the JAR to our in-house repo. By this point the JAR should be versioned properly, and my thinking was to use the build.number that is automatically generated and provided by our CI tool to act as the patch number. Thus, the versioning would actually be: <major>.<minor>.<build.number> Again, where build.number is provided by the CI tool. The architect dismissed this, saying that using the CI build number was an "abuse" of semantic versioning. My question is: is this correct, and if so, why? And if not, why not?
Your build number won't be reset to 0 when the minor and major versions increase, and this violates sections 7 and 8 of the spec: Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated. It MAY be incremented if substantial new functionality or improvements are introduced within the private code. It MAY include patch level changes. Patch version MUST be reset to 0 when minor version is incremented. Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API. It MAY include minor and patch level changes. Patch and minor version MUST be reset to 0 when major version is incremented. So, version numbers (major, minor, patch) must be provided manually, as these are used to tell your users about changes in one place without them having to look at your changelog or some other document. If you want to include your build number, then you may append it after a + (section 10): Build metadata MAY be denoted by appending a plus sign and a series of dot separated identifiers immediately following the patch or pre-release version. Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-]. Identifiers MUST NOT be empty. Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence. Examples: 1.0.0-alpha+001, 1.0.0+20130313144700, 1.0.0-beta+exp.sha.5114f85.
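A tiny sketch of what that can look like in practice, assuming a Java helper somewhere in the build (the helper and the numbers are made up for illustration): major, minor and patch are chosen by people and reset according to the rules above, while the CI build number only rides along as build metadata after the +.

// Hypothetical helper, not part of any build tool or CI system.
class Version {
    static String format(int major, int minor, int patch, String ciBuildNumber) {
        return major + "." + minor + "." + patch + "+build." + ciBuildNumber;
    }

    public static void main(String[] args) {
        // e.g. "1.4.2+build.873" - precedence ignores everything after '+',
        // so the patch number can still reset to 0 when minor/major are bumped.
        System.out.println(format(1, 4, 2, "873"));
    }
}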
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253306", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37136/" ] }
253,327
I know we have some extra advantages in using cookies over the IP address, but my question is: why can't the container just remember the IP address of the client in order to identify the client when he visits the site again? Is it possible for the container to remember the client with the help of the IP address?
A client is identified by a cookie as well as the IP address. However, the IP address cannot be used exclusively: What if two clients are located behind the same NAT firewall or proxy? They will have the same external IP address to the server. What if a user has two different browsers open on the same machine, and wants two separate sessions (maybe for testing?) A user may have a dynamic IP address which conceivably could change during a session. An attacker may be able to spoof an IP address and take over a session if it relied on IP address alone. This means an IP address does not uniquely identify a client in all cases.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253327", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/95092/" ] }
253,454
Given the amount of material that tries to explain what a context-free grammar (CFG) is, I found it surprising that very few (in my sample, less than 1 in 20) give an explanation on why such grammars are called "context-free". And, to my mind, none succeeds in doing so. My question is, why are context-free grammars called context-free? What is "the context"? I had an intuition that the context could be other language constructs surrounding the currently analyzed construct, but that seems not to be the case. Could anyone provide a precise explanation?
It means all of its production rules have a single non-terminal on their left-hand side. For example, this grammar which recognizes strings of matched parentheses ("()", "()()", "(())()", ...) is context-free: S → SS S → (S) S → () The left-hand side of every rule consists of a single non-terminal (in this case it's always S, but there could be more). Now consider this other grammar which recognizes strings of the form {a^n b^n c^n : n >= 1} (e.g. "abc", "aabbcc", "aaabbbccc"): S → abc S → aSBc cB → WB WB → WX WX → BX BX → Bc bB → bb If the non-terminal B is preceded by the terminal/literal character c, you rewrite that term to WB, but if it's preceded by b, you expand to bb instead. This is presumably what the context-sensitivity of context-sensitive grammars is alluding to. A context-free language can be recognized by a push-down automaton. Whereas a finite state machine makes use of no auxiliary storage, i.e. its decision is based only on its current state and input, a push-down automaton also has a stack at its disposal and can peek at the top of the stack when making decisions. To see that in action, you can parse nested parentheses by moving left to right and pushing a left parenthesis onto a stack each time you encounter one, and popping each time you encounter a right parenthesis. If you never end up trying to pop from an empty stack, and the stack is empty at the end of the string, the string is valid. For a context-sensitive language, a PDA isn't enough. You'll need a linear-bounded automaton, which is like a Turing Machine whose tape isn't unlimited (though the amount of tape available is proportional to the input). Note that that describes computers pretty well - we like to think of them as Turing Machines, but in the real world you can't grab arbitrarily more RAM mid-program. If it's not obvious to you how an LBA is more powerful than a PDA, an LBA can emulate a PDA by using part of its tape as a stack, but it can also choose to use its tape in other ways. (If you're wondering what a Finite State Machine can recognize, the answer is regular expressions. But not the regexes on steroids with capture groups and look-behind/look-ahead you see in programming languages; I mean the ones you can build with operators like [abc], |, *, +, and ?. You can see that abbbz matches regex ab*z just by keeping your current position in the string and regex, no stack required.)
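Here is a small Java sketch of the push-down-automaton idea described above for the matched-parentheses language (a plain stack stands in for the PDA's stack; the class name is arbitrary):

import java.util.ArrayDeque;
import java.util.Deque;

public class ParenChecker {
    // Mirrors the PDA described above: push on '(', pop on ')',
    // reject on popping an empty stack, accept only if the stack is empty at the end.
    static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(') {
                stack.push(c);
            } else if (c == ')') {
                if (stack.isEmpty()) return false; // tried to pop an empty stack
                stack.pop();
            }
        }
        return stack.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("(())()")); // true
        System.out.println(isBalanced("(()"));    // false
    }
}

Because there is only one kind of parenthesis, a simple counter would do the same job; the explicit stack is kept to mirror the PDA description.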
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253454", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63610/" ] }
253,574
What is the throw-away prototyping model in software engineering, and why do we need it? How does it differ from evolutionary prototyping?
Throw-away prototyping Throw-away prototyping is about creating, as fast as possible, a part of the future application, either to ensure a feature is technically feasible or to show the feature to stakeholders or potential users in order to gather feedback from them. Since the source code of this prototype is not reused later when developing the application itself, this makes it a throw-away prototype. Knowing that it's throw-away code helps you focus on the actual feature, while leaving aside aspects such as maintainability of the code, style, design patterns or testing. This makes it possible to finish the prototype very fast, without negatively affecting the technical debt of the final product. Throw-away prototyping is different from sketching. Sketching is more graphical and oriented towards user interfaces and user experience, and doesn't consist of writing programming code. Throw-away prototyping is usually used when sketching is not enough (for example when you need to show how a feature will perform on different smartphones, or when you need to show the actual performance and responsiveness). Throw-away prototyping can present a risk when dealing with stakeholders without a technical background and in a context of tight deadlines and very limited resources: stakeholders may try to convince you to reuse the prototype's source code in the final product. It is natural to believe that this will shorten the time required to release the product, but actually it will only delay the shipment date. One way to prevent this is to use for the prototype a language which can't be used in production (for example, use C# when you know that the final product will be hosted on Linux servers with only Python installed). Fig. 1: Throw-away prototyping and sketching: prototypes help gather early feedback before development of the actual feature starts. Evolutionary prototyping Evolutionary prototyping consists of building a prototype which is then refined based on regular feedback from the stakeholders or potential users. Here, maintainability of the code, style, design patterns and testing count from the beginning, which makes it possible to evolve the prototype into a fully featured, enterprise-grade product. The earlier steps of the prototype contain only the core part of the future product, and then features are added progressively. Fig. 2: Evolutionary prototyping: features are added to the prototype to build the final product. Evolutionary prototyping is different from incremental prototyping. Incremental prototyping consists of building several prototypes, each one representing a part of the future system, and then combining them. Evolutionary prototyping is closer to Agile: often, you will be able to obtain a working product with limited features early and extend it for as long as stakeholders have money. Incremental prototyping, on the other hand, is better suited for large projects with many contributing teams, each team working on a separate prototype. Fig. 3: Incremental prototyping: several prototypes are combined to form the final product. Evolutionary prototyping is different from Agile methodologies too. Agile is about iterations and frequent milestones where a fully functional product can be released to manufacturing. If you have a working product every Thursday, you're doing Agile. In evolutionary prototyping, you expand the prototype, but nothing forces you to have a fully functional product regularly.
You can spend two months creating the first prototype, then expand it with a few features in two and three days respectively, and then spend three months on another feature. You can't have this sort of irregular pattern in Agile. Specific Agile methodologies enforce additional rules. For example, if you don't do pair programming, you can't assert that you're doing Extreme Programming. If your team doesn't have daily meetings, you're not doing Scrum.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253574", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/146429/" ] }
253,635
This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. This is to tell you that I have an experience in programming, but not in professional development of business applications. I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations are happening every second in a constant-update loop, or low level systems which other programs rely on, such as OSs and VMs, etc. But for the normal, typical high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today, we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference for the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing - do you actually feel the inefficiency? My question is split in two: In practice, today does performance matter in a typical normal business program? If it does, please give me real-world examples of places in such an application, where performance and optimizations are important.
You're right, performance in business apps is not really an important subject the way it is discussed by most programmers. Usually, the performance-related discussions I hear from programmers have several issues: They are mostly premature optimization. Usually, someone wants "the fastest way" to do an operation for no apparent reason, and ends up either making code changes which most compilers make anyway (such as replacing division by multiplication or inlining a method), or spending days making changes which will help gain a few microseconds at runtime. They are often speculative. I'm glad to see that on Stack Overflow and Programmers.SE, profiling is mentioned frequently when the question is related to performance, but I'm also disappointed when I see two programmers who don't know what profiling is discussing performance-related changes they should make in their code. They believe the changes will make everything faster, but practically every time, they will either have no visible effect or slow things down, while a profiler would have pointed them to another part of the code which can easily be optimized and which wastes 80% of the time. They are focused on technical aspects only. Performance of user-oriented applications is about the feeling: does it feel fast and responsive, or does it feel slow and clunky? In this context, performance problems are usually solved much better by user experience designers: a simple animated transition may often be the difference between an app which feels terribly slow and an app which feels responsive, while both spend 600 ms doing the operation. They are based on subjective elements even when they are related to technical constraints. If it's not a question of feeling fast and responsive, there should be a non-functional requirement which specifies how fast an operation should be performed on specific data, running on a specific system. In reality, it happens more like this: the manager says that he finds something slow, and then the developers need to figure out what that means. Is it slow as in "it should be below 30 ms while currently it wastes ten seconds", or slow as in "we can maybe lower the duration from ten to nine seconds"? Early in my career as a programmer, I was working on a piece of software for a bunch of my customers. I was convinced this software was the next great thing which would bring happiness to the world, so I was obviously concerned about performance. I had heard terms such as "profiling" or "benchmark", but I didn't know what they meant and couldn't care less. Moreover, I was too focused on reading the book about C, and especially the chapter where optimization techniques were discussed. When I discovered that computers perform multiplication faster than division, I replaced division by multiplication anywhere I could. When I discovered that calling a method can be slow, I combined as many methods as I could, as if the previous 100 LOC methods weren't already an issue. Sometimes, I spent nights doing changes which, I was convinced, made the difference between a slow app nobody wants and a fast one everybody wants to download and use. The fact that the two actual customers who were interested in this app requested actual features didn't bother me: "Who would want a feature if the app is slow?", I thought. Finally, the only two customers stopped using the app.
It wasn't amazingly fast despite all my efforts, mostly because when you don't know what indexes are and your app is database-intensive, there is something wrong. Anyway, while I was doing yet another performance-related change, which improved by a few microseconds the execution of code which is used once per month, customers didn't see any changes. What they did see was that the user experience was terrible, documentation was missing, crucial features they had been requesting for months were not there, and the number of bugs to solve was constantly growing. Result: I hoped this app would be used by thousands of companies around the world, but today you won't find any information about this application on the internet. The only two customers abandoned it, and the project was abandoned as well. It was never marketed, never publicly advertised, and today I'm not even sure I can compile it on my PC (nor find the original sources). This wouldn't have happened if I had focused more on things that actually matter. This being said, performance in general is important: In non-business apps, it can become crucial. There is embedded software, software run on servers (when you have a few thousand requests per second, which is not that big, performance starts to be a concern), software run on smartphones, video games, software for professionals (try to handle a 50 GB file in Photoshop on a not very fast machine to be convinced) and even ordinary software products which are sold to lots of people (if Microsoft Word took twice as long to do every operation it does, the time lost multiplied by the number of users would become an issue).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253635", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
253,656
I am analyzing the running times of different for-loops, and as I learn more, I'm curious about a problem I still haven't figured out. I have this exercise called "How many stars are printed": for (int i = N; i > 1; i = i/2) System.out.println("*"); The answers to pick from are A: ~log N, B: ~N, C: ~N log N, D: ~0.5N^2. So the answer should be A, and I agree with that, but on the other hand... let's say N = 500; what would log N be then? It would be 2.7. So what if we say that N = 500 in our exercise above? That would most definitely print more than 2.7 stars, right? How are the two related? Because it makes sense to say that if the for-loop looked like this: for (int i = 0; i < N; i++) it would print N stars. I hope to find an explanation for this here; maybe I'm interpreting all these things wrong and thinking about it in a bad way. Thanks in advance.
You've overlooked the key characteristic of the logarithm: its base. Because i is divided by 2 in each iteration, the running time is logarithmic with base 2, and log_2(500) ~ 8.9. What you are looking at is log_10(500) ~ 2.7 (the logarithm with base 10). By the way, the reason the base is often omitted in runtime discussions (and your calculator probably doesn't have a button for log_2) is that, due to the mechanics of logarithmic math, a different base corresponds to a constant factor and thus is not relevant when you're ignoring constant factors anyway. It can be calculated easily: log_a(x) = log_b(x) / log_b(a)
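If it helps to see it empirically, here is a small Java sketch (just a counting harness around the loop from the question) that counts the iterations for N = 500 and compares the result with the base-2 logarithm:

public class StarCount {
    public static void main(String[] args) {
        int N = 500;
        int stars = 0;
        for (int i = N; i > 1; i = i / 2) {
            stars++;                       // one star printed per iteration
        }
        // prints 8, which tracks log_2(500) ~ 8.97, not log_10(500) ~ 2.7
        System.out.println(stars + " vs log_2(N) = " + (Math.log(N) / Math.log(2)));
    }
}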
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253656", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/146300/" ] }
253,704
Assuming a language with some inherent type safety (e.g., not JavaScript): Given a method that accepts a SuperType , we know that in most cases wherein we might be tempted to perform type testing to pick an action: public void DoSomethingTo(SuperType o) { if (o isa SubTypeA) { o.doSomethingA() } else { o.doSomethingB(); } } We should usually, if not always, create a single, overridable method on the SuperType and do this: public void DoSomethingTo(SuperType o) { o.doSomething(); } ... wherein each subtype is given its own doSomething() implementation. The rest of our application can then be appropriately ignorant of whether any given SuperType is really a SubTypeA or a SubTypeB . Wonderful. But, we're still given is a -like operations in most, if not all, type-safe languages. And that suggests a potential need for explicit type testing. So, in what situations, if any, should we or must we perform explicit type testing? Forgive my absent mindedness or lack of creativity. I know I've done it before; but, it was honestly so long ago I can't remember if what I did was good! And in recent memory, I don't think I've encountered a need to test types outside my cowboy JavaScript.
"Never" is the canonical answer to "when is type testing okay?" There's no way to prove or disprove this; it is part of a system of beliefs about what makes "good design" or "good object-oriented design." It's also hokum. To be sure, if you have an integrated set of classes and also more than one or two functions that need that kind of direct type testing, you're probably DOING IT WRONG. What you really need is a method that's implemented differently in SuperType and its subtypes. This is part and parcel of object-oriented programming, and the whole reason classes and inheritance exist. In this case, explicitly type-testing is wrong not because type testing is inherently wrong, but because the language already has a clean, extensible, idiomatic way of accomplishing type discrimination, and you didn't use it. Instead, you fell back to a primitive, fragile, non-extensible idiom. Solution: Use the idiom. As you suggested, add a method to each of the classes, then let standard inheritance and method-selection algorithms determine which case applies. Or if you can't change the base types, subclass and add your method there. So much for the conventional wisdom, and on to some answers. Some cases where explicit type testing makes sense: It's a one-off. If you had a lot of type discrimination to do, you might extend the types, or subclass. But you don't. You have just one or two places where you need explicit testing, so it's not worth your while to go back and work through the class hierarchy to add the functions as methods. Or it's not worth the practical effort to add the kind of generality, testing, design reviews, documentation, or other attributes of the base classes for such a simple, limited usage. In that case, adding a function that does direct testing is rational. You can't adjust the classes. You think about subclassing--but you can't. Many classes in Java, for instance, are designated final . You try to throw in a public class ExtendedSubTypeA extends SubTypeA {...} and the compiler tells you, in no uncertain terms, that what you're doing is not possible. Sorry, grace and sophistication of the object oriented model! Someone decided you can't extend their types! Unfortunately, many of the standard library are final , and making classes final is common design guidance. A function end-run is what's left available to you. BTW, this isn't limited to statically typed languages. Dynamic language Python has a number of base classes that, under the covers implemented in C, cannot really be modified. Like Java, that includes most of the standard types. Your code is external. You are developing with classes and objects that come from a range of database servers, middleware engines, and other codebases you can't control or adjust. Your code is just a lowly consumer of objects generated elsewhere. Even if you could subclass SuperType , you're not going to be able to get those libraries on which you depend to generate objects in your subclasses. They're going to hand you instances of the types they know, not your variants. This isn't always the case...sometimes they are built for flexibility, and they dynamically instantiate instances of classes that you feed them. Or they provide a mechanism to register the subclasses you want their factories to construct. XML parsers seem particularly good at providing such entry points; see e.g. a JAXB example in Java or lxml in Python . But most code bases do not provide such extensions. 
They're going to hand you back the classes they were built with and know about. It generally will not make sense to proxy their results into your custom results just so you can use a purely object-oriented type selector. If you're going to do type discrimination, you're going to have to do it relatively crudely. Your type testing code then looks quite appropriate. Poor person's generics/multiple dispatch. You want to accept a variety of different types to your code, and feel that having an array of very type-specific methods isn't graceful. public void add(Object x) seems logical, but not an array of addByte , addShort , addInt , addLong , addFloat , addDouble , addBoolean , addChar , and addString variants (to name a few). Having functions or methods that take a high super-type and then determine what to do on a type-by-type basis--they're not going to win you the Purity Award at the annual Booch-Liskov Design Symposium, but dropping the Hungarian naming will give you a simpler API. In a sense, your is-a or is-instance-of testing is simulating generic or multi-dispatch in a language context that doesn't natively support it. Built-in language support for both generics and duck typing reduces the need for type checking by making "do something graceful and appropriate" more likely. The multiple dispatch / interface selection seen in languages like Julia and Go similarly replace direct type testing with built-in mechanisms for type-based selection of "what to do." But not all languages support these. Java e.g. is generally single-dispatch, and its idioms are not super-friendly to duck typing. But even with all these type discrimination features--inheritance, generics, duck typing, and multiple-dispatch--it's sometimes just convenient to have a single, consolidated routine that makes the fact that you are doing something based on the type of the object clear and immediate. In metaprogramming I have found it essentially unavoidable. Whether falling back to direct type inquiries constitutes "pragmatism in action" or "dirty coding" will depend on your design philosophy and beliefs.
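For readers who want to see what the "poor person's multiple dispatch" case can look like in practice, here is a minimal Python sketch. The types and the rendering operation are illustrative stand-ins, not taken from the question; the point is only that one consolidated function makes the type-based branching obvious and easy to audit when the types involved (standard-library ones here) cannot be given a method of their own.

    from datetime import date
    from decimal import Decimal

    def to_cell(value):
        """Render a value for a report cell, dispatching on its type.

        The types handled here come from the standard library, so we cannot
        add a render() method to them; explicit type testing is the
        pragmatic fallback."""
        if isinstance(value, bool):            # bool first: bool is a subclass of int
            return "yes" if value else "no"
        if isinstance(value, (int, float, Decimal)):
            return format(value, ",.2f")
        if isinstance(value, date):
            return value.isoformat()
        if isinstance(value, str):
            return value
        raise TypeError("don't know how to render " + type(value).__name__)

    print(to_cell(Decimal("1234.5")))   # 1,234.50
    print(to_cell(date(2014, 8, 1)))    # 2014-08-01
    print(to_cell(True))                # yes

The whole decision is in one place, which is exactly the "clear and immediate" property the answer describes; the cost is that adding a new type means editing this function rather than adding a subclass.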
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253704", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/94768/" ] }
253,868
We've been changing how our AS3 application talks to our back end, and we're in the process of implementing a REST system to replace our old one. Sadly the developer who started the work is now on long-term sick leave and it's been handed over to me. I've been working with it for the past week or so now and I understand the system, but there's one thing that's been worrying me. There seems to be a lot of passing of functions into functions. For example our class that makes the call to our servers takes in a function that it will then call and pass an object to when the process is complete and errors have been handled etc. It's giving me that "bad feeling" where I feel like it's horrible practice and I can think of some reasons why, but I want some confirmation before I propose a rework of the system. I was wondering if anyone had any experience with this possible problem?
It isn't a problem. It is a known technique. These are higher order functions (functions that take functions as parameters). This kind of function is also a basic building block in functional programming and is used extensively in functional languages such as Haskell . Such functions are not bad or good - if you have never encountered the notion and technique they can be difficult to grasp at first, but they can be very powerful and are a good tool to have in your toolbelt.
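To make the idea concrete outside of AS3, here is a small Python sketch of the same pattern, a routine that does some work and then hands the result to whatever function the caller passed in. The names and the simulated server call are invented for illustration; they are not from the code base in the question.

    def call_server(request, on_complete):
        """Do the work (here just simulated), then hand the result to the
        caller-supplied function. Taking a function as a parameter is what
        makes call_server a higher-order function."""
        response = {"status": 200, "body": "echo: " + request}   # pretend network I/O
        on_complete(response)

    def show(response):
        print("server said:", response["body"])

    call_server("hello", show)                        # a named function as the callback
    call_server("hi", lambda r: print(r["status"]))   # or an inline lambda

The caller decides what "done" means by passing behaviour rather than data, which is the essence of the callback style the question describes.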
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253868", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/146711/" ] }
253,925
As recently reported here : Xamarin has forked Cocos2D-XNA, a 2D/3D game development framework, creating a cross-platform library that can be included in PCL projects. However the founder of the project that was forked says : The purpose of the MIT license is to unencumber your fair use. Not to encourage you to take software, rebrand it as your own, and then "take it in a new direction" as you say. While not illegal, it is unethical. It seems that the GitHub page of the new project doesn't even indicate that it's a fork in a typical GitHub manner, opting for an easily-removable History section instead (see bottom). So my questions are: Was Xamarin's action and the way the action was done ethical or not? Is it possible to avoid such a situation if you are a single developer or a small unfunded group of developers? I am hoping this could be either a wiki question or there will be some objective answers grounded on modern OSS ethics/philosophy.
Releasing a project under the MIT license is giving people permission to fork the project. Part of the philosophy behind free software is to give users and developers the right to use, modify, and release the software in ways that wouldn't normally be allowed. If you don't want people to do this, then don't use the MIT license. You can't really complain when people use code under the terms of the license that you've given them. Forks are a fairly normal thing to happen in the free software community. It looks like the fork's developers tried to contribute to the original project, and didn't agree so they contributed to their own project instead. Free software encourages this so that developers are not prevented from modifying software because the owners don't like their changes. Also, by releasing something under a free software license, you're benefiting from other people's contributions, contributions you might not have received if it was under a different license. If you accept contributions under a license, you should respect the terms of the license yourself. One way to maintain control over an official version is to rely on things like trademarks. Mozilla Corporation for example has a trademark on Firefox which allows them to dictate what people can do with Firefox even though it's open source (See Iceweasel for the fork of this). Other licenses like LGPL still allow forks, but keep the code open. This way, you can at least incorporate any changes from the fork into your original project, and benefit from the development on the fork. LGPL code can use any MIT licensed code so if you wanted more control over the project, you could use LGPL instead.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/253925", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13154/" ] }
254,074
I think I understand the goal of an AST, and I've built a couple of tree structures before, but never an AST. I'm mostly confused because the nodes are text and not numbers, so I can't think of a nice way to input a token/string as I'm parsing some code. For example, when I looked at diagrams of ASTs, the variable and its value were leaf nodes to an equal sign. This makes perfect sense to me, but how would I go about implementing this? I guess I can do it case by case, so that when I stumble upon an "=" I use that as a node, and add the value parsed before the "=" as the leaf. It just seems wrong, because I'd probably have to make cases for tons and tons of things, depending on the syntax. And then I came upon another problem: how is the tree traversed? Do I go all the way down the height, and go back up a node when I hit the bottom, and do the same for its neighbor? I've seen tons of diagrams on ASTs, but I couldn't find a fairly simple example of one in code, which would probably help.
The short answer is that you use stacks. This is a good example, but I'll apply it to an AST. FYI, this is Edsger Dijkstra's Shunting-Yard Algorithm . In this case, I will use an operator stack and an expression stack. Since numbers are considered expressions in most languages, I'll use the expression stack to store them. class ExprNode: char c ExprNode operand1 ExprNode operand2 ExprNode(char num): c = num operand1 = operand2 = nil Expr(char op, ExprNode e1, ExprNode e2): c = op operand1 = e1 operand2 = e2 # Parser ExprNode parse(string input): char c while (c = input.getNextChar()): if (c == '('): operatorStack.push(c) else if (c.isDigit()): exprStack.push(ExprNode(c)) else if (c.isOperator()): while(operatorStack.top().precedence >= c.precedence): operator = operatorStack.pop() # Careful! The second operand was pushed last. e2 = exprStack.pop() e1 = exprStack.pop() exprStack.push(ExprNode(operator, e1, e2)) operatorStack.push(c) else if (c == ')'): while (operatorStack.top() != '('): operator = operatorStack.pop() # Careful! The second operand was pushed last. e2 = exprStack.pop() e1 = exprStack.pop() exprStack.push(ExprNode(operator, e1, e2)) # Pop the '(' off the operator stack. operatorStack.pop() else: error() return nil # There should only be one item on exprStack. # It's the root node, so we return it. return exprStack.pop() (Please be nice about my code. I know it's not robust; it's just supposed to be pseudocode.) Anyway, as you can see from the code, arbitrary expressions can be operands to other expressions. If you have the following input: 5 * 3 + (4 + 2 % 2 * 8) the code I wrote would produce this AST: + / \ / \ * + / \ / \ 5 3 4 * / \ % 8 / \ 2 2 And then when you want to produce the code for that AST, you do a Post Order Tree Traversal . When you visit a leaf node (with a number), you generate a constant because the compiler needs to know the operand values. When you visit a node with an operator, you generate the appropriate instruction from the operator. For example, the '+' operator gives you an "add" instruction.
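Here is a short Python sketch of the post-order code-generation step described above. It follows the node shape from the pseudocode (a character plus up to two children); the "instruction" names are made up for illustration and stand in for whatever your target machine or bytecode actually uses.

    class ExprNode:
        """Same shape as the pseudocode above: a char plus up to two children."""
        def __init__(self, c, operand1=None, operand2=None):
            self.c = c
            self.operand1 = operand1
            self.operand2 = operand2

    def emit(node, out):
        """Post-order walk: visit the children first, then the node itself."""
        if node.operand1 is None:                       # leaf -> a number
            out.append("push " + node.c)
        else:                                           # operator node
            emit(node.operand1, out)
            emit(node.operand2, out)
            out.append({"+": "add", "-": "sub", "*": "mul", "%": "mod"}[node.c])
        return out

    # 5 * 3 + 4  parses to  (+ (* 5 3) 4)
    tree = ExprNode("+", ExprNode("*", ExprNode("5"), ExprNode("3")), ExprNode("4"))
    print(emit(tree, []))
    # ['push 5', 'push 3', 'mul', 'push 4', 'add']

Because operands are emitted before the operator that consumes them, the output is valid stack-machine code, which is why post-order traversal is the natural fit here.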
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254074", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/143547/" ] }
254,106
The task is to configure a piece of hardware within the device, according to some input specification. This should be achieved as follows: 1) Collect the configuration information. This can happen at different times and places. For example, module A and module B can both request (at different times) some resources from my module. Those 'resources' are actually what the configuration is. 2) After it is clear that no more requests are going to be realized, a startup command, giving a summary of the requested resources, needs to be sent to the hardware. 3) Only after that, can (and must) detailed configuration of said resources be done. 4) Also, only after 2), can (and must) routing of selected resources to the declared callers be done. A common cause for bugs, even for me, who wrote the thing, is mistaking this order. What naming conventions, designs or mechanisms can I employ to make the interface usable by someone who sees the code for the first time?
It's a redesign, but you can prevent misuse of many APIs by not having available any method that shouldn't be called. For example, instead of "first you init, then you start, then you stop", your constructor inits an object that can be started, and start creates a session that can be stopped. Of course if you have a restriction to one session at a time you need to handle the case where someone tries to create one with one already active. Now apply that technique to your own case.
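As a rough illustration of "only expose the methods that are legal right now", here is a Python sketch applied to the phases in the question. All names are invented; the idea is just that each phase returns a new object, so resource requests can only happen before startup and detailed configuration and routing only after it.

    class ResourcePlanner:
        """Phase 1: collect requests. Nothing else is possible yet."""
        def __init__(self):
            self._requests = []

        def request(self, resource):
            self._requests.append(resource)

        def start(self):
            # Phase 2: send the summary/startup command, then hand back an
            # object that only knows how to do the phase-3/4 work.
            print("startup command:", self._requests)
            return StartedDevice(self._requests)

    class StartedDevice:
        """Phases 3 and 4: configure and route. There is no request() here,
        so late resource requests are impossible by construction."""
        def __init__(self, resources):
            self._resources = resources

        def configure(self, resource, settings):
            print("configuring", resource, "with", settings)

        def route(self, resource, caller):
            print("routing", resource, "to", caller)

    planner = ResourcePlanner()
    planner.request("dma-channel")
    device = planner.start()
    device.configure("dma-channel", {"burst": 4})
    device.route("dma-channel", "module A")

Getting the order wrong now shows up as "no such method" at the call site rather than as a subtle runtime bug, which is the point of the redesign.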
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254106", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54268/" ] }
254,181
In strongly-typed languages like Java and C#, void (or Void ) as a return type for a method seem to mean: This method doesn't return anything. Nothing. No return. You will not receive anything from this method. What's really strange is that in C, void as a return type or even as a method parameter type means: It could really be anything. You'd have to read the source code to find out. Good luck. If it's a pointer, you should really know what you're doing. Consider the following examples in C: void describe(void *thing) { Object *obj = thing; printf("%s.\n", obj->description); } void *move(void *location, Direction direction) { void *next = NULL; // logic! return next; } Obviously, the second method returns a pointer, which by definition could be anything. Since C is older than Java and C#, why did these languages adopt void as meaning "nothing" while C used it as "nothing or anything (when a pointer)"?
The keyword void (not a pointer) means "nothing" in those languages. This is consistent. As you noted, void* means "pointer to anything" in languages that support raw pointers (C and C++). This is an unfortunate decision because as you mentioned, it does make void mean two different things. I have not been able to find the historical reason behind reusing void to mean "nothing" and "anything" in different contexts, however C does this in several other places. For example, static has different purposes in different contexts. There is obviously precedent in the C language for reusing keywords this way, regardless of what one may think of the practice. Java and C# are different enough to make a clean break to correct some of these issues. Java and "safe" C# also do not allow raw pointers and do not need easy C compatibility (Unsafe C# does allow pointers but the vast majority of C# code does not fall into this category). This allows them to change things up a bit without worrying about backwards compatibility. One way of doing this is introducing a class Object at the root of the hierarchy from which all classes inherit, so an Object reference serves the same function of void* without the nastiness of type issues and raw memory management.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254181", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13305/" ] }
254,233
Why do we need to include both the .h and .cpp files while we can make it work solely by including the .cpp file? For example: creating a file.h containing declarations, then creating a file.cpp containing definitions and including both in main.cpp . Alternatively: creating a file.cpp containing declaration/definitions ( no prototypes ) including it in main.cpp . Both work for me. I can't see the difference. Maybe some insight into the compiling and linking process may help.
While you can include .cpp files as you mentioned, this is a bad idea. As you mentioned, declarations belong in header files. These cause no problems when included in multiple compilation units because they do not include implementations. Including the definition of a function or class member multiple times will normally cause a problem (but not always) because the linker will get confused and throw an error. What should happen is each .cpp file includes definitions for a subset of the program, such as a class, a logically organized group of functions, global static variables (use sparingly if at all), etc. Each compilation unit ( .cpp file) then includes whatever declarations it needs to compile the definitions it contains. It keeps track of the functions and classes it references but does not contain, so the linker can resolve them later when it combines the object code into an executable or library. Example Foo.h -> contains declaration (interface) for class Foo. Foo.cpp -> contains definition (implementation) for class Foo. Main.cpp -> contains main method, program entry point. This code instantiates a Foo and uses it. Both Foo.cpp and Main.cpp need to include Foo.h . Foo.cpp needs it because it is defining the code that backs the class interface, so it needs to know what that interface is. Main.cpp needs it because it is creating a Foo and invoking its behavior, so it has to know what that behavior is, the size of a Foo in memory and how to find its functions, etc. but it does not need the actual implementation just yet. The compiler will generate Foo.o from Foo.cpp which contains all of the Foo class code in compiled form. It also generates Main.o which includes the main method and unresolved references to class Foo. Now comes the linker, which combines the two object files Foo.o and Main.o into an executable file. It sees the unresolved Foo references in Main.o but sees that Foo.o contains the necessary symbols, so it "connects the dots" so to speak. A function call in Main.o is now connected to the actual location of the compiled code so at runtime, the program can jump to the correct location. If you had included the Foo.cpp file in Main.cpp , there would be two definitions of class Foo. The linker would see this and say "I don't know which one to pick, so this is an error." The compiling step would succeed, but linking would not. (Unless you just do not compile Foo.cpp but then why is it in a separate .cpp file?) Finally, the idea of different file types is irrelevant to a C/C++ compiler. It compiles "text files" which hopefully contain valid code for the desired language. Sometimes it may be able to tell the language based on the file extension. For example, compile a .c file with no compiler options and it will assume C, while a .cc or .cpp extension would tell it to assume C++. However, I can easily tell a compiler to compile a .h or even .docx file as C++, and it will emit an object ( .o ) file if it contains valid C++ code in plain text format. These extensions are more for the benefit of the programmer. If I see Foo.h and Foo.cpp , I immediately assume that the first contains the declaration of the class and the second contains the definition.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254233", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/147072/" ] }
254,239
One of the reasons often given to write unit tests which mock out all dependencies and are thus completely isolated is to ensure that when a bug exists, only the unit tests for that bug will fail. (Obviously, an integration test may fail as well). That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains a bug. What is the use in ensuring that a test only fails due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254239", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1343/" ] }
254,304
I have seen that in imperative paradigms f(x)+f(x) might not be the same as 2*f(x) , but in a functional paradigm it should be the same. I have tried to implement both cases in Python and Scheme , but to me they look pretty much the same. What would be an example that could point out the difference with the given function?
Referential transparency, referred to a function, indicates that you can determine the result of applying that function only by looking at the values of its arguments. You can write referentially transparent functions in any programming language, e.g. Python, Scheme, Pascal, C. On the other hand, in most languages you can also write non referentially transparent functions. For example, this Python function: counter = 0 def foo(x): global counter counter += 1 return x + counter is not referentially transparent, in fact calling foo(x) + foo(x) and 2 * foo(x) will produce different values, for any argument x . The reason for this is that the function uses and modifies a global variable, therefore the result of each invocation depends on this changing state, and not only on the function's argument. Haskell, a purely functional language, strictly separates expression evaluation in which pure functions are applied and which is always referentially transparent, from action execution (processing of special values), which is not referentially transparent, i.e. executing the same action can have each time a different result. So, for any Haskell function f :: Int -> Int and any integer x , it is always true that 2 * (f x) == (f x) + (f x) An example of an action is the result of the library function getLine : getLine :: IO String As a result of expression evaluation, this function (actually a constant) first of all produces a pure value of type IO String . Values of this type are values like any other: you can pass them around, put them in data structures, compose them using special functions, and so on. For example you can make a list of actions like so: [getLine, getLine] :: [IO String] Actions are special in that you can tell the Haskell runtime to execute them by writing: main = <some action> In this case, when your Haskell program is started, the runtime walks through the action bound to main and executes it, possibly producing side-effects. Therefore, action execution is not referentially transparent because executing the same action two times can produce different results depending on what the runtime gets as input. Thanks to Haskell's type system, an action can never be used in a context where another type is expected, and vice versa. So, if you want to find the length of a string you can use the length function: length "Hello" will return 5. But if you want to find the length of a string read from the terminal, you cannot write length (getLine) because you get a type error: length expects an input of type list (and a String is, indeed, a list) but getLine is a value of type IO String (an action). In this way, the type system ensures that an action value like getLine (whose execution is carried out outside the core language and which may be non-referentially transparent) cannot be hidden inside a non-action value of type Int . EDIT To answer exizt question, here is a small Haskell program that reads a line from the console and prints its length. main :: IO () -- The main program is an action of type IO () main = do line <- getLine putStrLn (show (length line)) The main action consists of two subactions that are executed sequentially: getline of type IO String , the second is constructed by evaluating the function putStrLn of type String -> IO () on its argument. 
More precisely, the second action is built by binding line to the value read by the first action, evaluating the pure functions length (compute length as an integer) and then show (turn the integer into a string), building the action by applying function putStrLn to the result of show . At this point, the second action can be executed. If you have typed "Hello", it will print "5". Note that if you get a value out of an action using the <- notation, you can only use that value inside another action, e.g. you cannot write: main = do line <- getLine show (length line) -- Error: -- Expected type: IO () -- Actual type: String because show (length line) has type String whereas the do notation requires that an action ( getLine of type IO String ) be followed by another action (e.g. putStrLn (show (length line)) of type IO () ). EDIT 2 Jörg W Mittag's definition of referential transparency is more general than mine (I have upvoted his answer). I have used a restricted definition because the example in the question focuses on the return value of functions and I wanted to illustrate this aspect. However, RT in general refers to the meaning of the whole program, including changes to global state and interactions with the environment (IO) caused by evaluating an expression. So, for a correct, general definition, you should refer to that answer.
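To connect this back to the Python example at the start of this answer: a referentially transparent variant of foo has to take the counter as an argument and return the new counter, instead of mutating a global. The sketch below restates the impure version for comparison; it is not from the original question, only an illustration of the contrast.

    counter = 0
    def foo(x):                      # NOT referentially transparent
        global counter
        counter += 1
        return x + counter

    def foo_pure(x, counter):        # referentially transparent
        """Same computation, but the state is an explicit argument and the
        new state is part of the return value."""
        counter += 1                 # local rebinding only; no hidden effect
        return x + counter, counter

    # The pure version satisfies f(x) + f(x) == 2 * f(x) for a fixed counter:
    r1, _ = foo_pure(10, 0)
    r2, _ = foo_pure(10, 0)
    assert r1 + r2 == 2 * foo_pure(10, 0)[0]

    # The impure one does not:
    print(foo(10) + foo(10), 2 * foo(10))   # prints 23 26 -- not equal

Threading the state through explicitly is exactly what Haskell's IO type does for you behind the scenes, which is why the equation holds for every pure Haskell function.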
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254304", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/78538/" ] }
254,438
I'm working on an application that plays music. During playback, often things need to happen on separate threads because they need to happen simultaneously. For example, the notes of a chord need to be heard together, so each one is assigned its own thread to be played in. (Edit to clarify: calling note.play() freezes the thread until the note is done playing, and this is why I need three separate threads to have three notes heard at the same time.) This kind of behavior creates many threads during playback of a piece of music. For example, consider a piece of music with a short melody and short accompanying chord progression. The entire melody can be played on a single thread, but the progression needs three threads to play, since each of its chords contains three notes. So the pseudo-code for playing a progression looks like this: void playProgression(Progression prog){ for(Chord chord : prog) for(Note note : chord) runOnNewThread( func(){ note.play(); } ); } So assuming the progression has 4 chords, and we play it twice, then we are opening 3 notes * 4 chords * 2 times = 24 threads. And this is just for playing it once. Actually, it works fine in practice. I don't notice any latency, or bugs resulting from this. But I wanted to ask if this is correct practice, or if I'm doing something fundamentally wrong. Is it reasonable to create so many threads each time the user pushes a button? If not, how can I do it differently?
One assumption you are making might not be valid: you require (among other things) that your threads execute simultaneously. That might work for 3, but at some point the system is going to need to prioritize which threads to run first, and which ones must wait. Your implementation will ultimately depend on your API, but most modern APIs will let you tell in advance what you want to play and take care of the timing and queuing themselves. If you were to code such an API yourself, ignoring any existing system API (why would you?!), an event queue mixing your notes and playing them from a single thread looks like a better approach than a thread-per-note model.
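To sketch what "an event queue played from a single thread" can look like, here is a minimal Python version. play_note is a stand-in for whatever the real audio API provides; it assumes the call starts the sound and returns immediately. If the only call available blocks until the note finishes (as the question's note.play() does), you need the API's own mixing or queuing facilities instead, which is the main point above.

    import heapq
    import time

    def play_note(note):
        # Stand-in for the real audio call; assumed to be fire-and-forget.
        print("playing", note)

    def play_events(events):
        """events: (start_time_in_seconds, note) pairs, in any order."""
        heapq.heapify(events)                        # earliest event first
        start = time.monotonic()
        while events:
            when, note = heapq.heappop(events)
            delay = when - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            play_note(note)

    # A chord is just three events with the same start time; the melody note
    # half a second later is a fourth event in the same single-threaded queue.
    play_events([(0.0, "C4"), (0.0, "E4"), (0.0, "G4"), (0.5, "A4")])

One thread walks the whole piece in time order, so the number of notes never translates into a number of threads.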
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254438", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
254,475
This is a rather conceptual question, but I was hoping I could get some good advice on this. A lot of the programming I do is with ( NumPy ) arrays; I often have to match items in two or more arrays that are of different sizes and the first thing I go to is a for-loop or even worse, a nested for-loop. I want to avoid for-loops as much as possible, because they are slow (at least in Python). I know that for a lot of things with NumPy there are pre-defined commands that I just need to research, but do you (as more experienced programmers) have a general thought process that comes to mind when you have to iterate something? So I often have something like this, which is awful and I want to avoid it: small_array = np.array(["one", "two"]) big_array = np.array(["one", "two", "three", "one"]) for i in range(len(small_array)): for p in range(len(big_array)): if small_array[i] == big_array[p]: print "This item is matched: ", small_array[i] I know there are multiple different ways to achieve this in particular, but I am interested in a general method of thinking, if it exists.
This is a common conceptual difficulty when learning to use NumPy effectively. Normally, data processing in Python is best expressed in terms of iterators , to keep memory usage low, to maximize opportunities for parallelism with the I/O system, and to provide for reuse and combination of parts of algorithms. But NumPy turns all that inside out: the best approach is to express the algorithm as a sequence of whole-array operations , to minimize the amount of time spent in the slow Python interpreter and maximize the amount of time spent in fast compiled NumPy routines. Here's the general approach I take: Keep the original version of the function (which you are confident is correct) so that you can test it against your improved versions both for correctness and speed. Work from the inside out: that is, start with the innermost loop and see if can be vectorized; then when you've done that, move out one level and continue. Spend lots of time reading the NumPy documentation . There are a lot of functions and operations in there and they are not always brilliantly named, so it's worth getting to know them. In particular, if you find yourself thinking, "if only there were a function that did such-and-such," then it's well worth spending ten minutes looking for it. It's usually in there somewhere. There's no substitute for practice, so I'm going to give you some example problems. The goal for each problem is to rewrite the function so that it is fully vectorized : that is, so that it consists of a sequence of NumPy operations on whole arrays, with no native Python loops (no for or while statements, no iterators or comprehensions). Problem 1 def sumproducts(x, y): """Return the sum of x[i] * y[j] for all pairs of indices i, j. >>> sumproducts(np.arange(3000), np.arange(3000)) 20236502250000 """ result = 0 for i in range(len(x)): for j in range(len(y)): result += x[i] * y[j] return result Problem 2 def countlower(x, y): """Return the number of pairs i, j such that x[i] < y[j]. >>> countlower(np.arange(0, 200, 2), np.arange(40, 140)) 4500 """ result = 0 for i in range(len(x)): for j in range(len(y)): if x[i] < y[j]: result += 1 return result Problem 3 def cleanup(x, missing=-1, value=0): """Return an array that's the same as x, except that where x == missing, it has value instead. >>> cleanup(np.arange(-3, 3), value=10) ... # doctest: +NORMALIZE_WHITESPACE array([-3, -2, 10, 0, 1, 2]) """ result = [] for i in range(len(x)): if x[i] == missing: result.append(value) else: result.append(x[i]) return np.array(result) Spoilers below. You'll get much the best results if you have a go yourself before looking at my solutions! Answer 1 np.sum(x) * np.sum(y) Answer 2 np.sum(np.searchsorted(np.sort(x), y)) Answer 3 np.where(x == missing, value, x)
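Applied to the matching example from the question, the same "whole-array operations" idea might look like the following (np.isin is the modern spelling; older NumPy releases call it np.in1d):

    import numpy as np

    small_array = np.array(["one", "two"])
    big_array = np.array(["one", "two", "three", "one"])

    # Boolean mask: which entries of big_array appear anywhere in small_array?
    mask = np.isin(big_array, small_array)
    print(mask)                    # [ True  True False  True]
    print(big_array[mask])         # ['one' 'two' 'one']

    # Or, if you only need the distinct matching values:
    print(np.intersect1d(small_array, big_array))   # ['one' 'two']

Both calls replace the nested Python loops with a single pass through compiled NumPy code, which is exactly the kind of transformation the exercises above are meant to train.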
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254475", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/147328/" ] }
254,480
Is there any issue with creating a new controller within the ascx view pages? I don't want to create a model; are there any issues with doing it this way? Normally, the controllers control variables within the model, and the model is the middle point between them, but I am creating a new instance of a controller class within the ascx page itself. Is this ok? EDIT: I want to know the implications of putting an instance of a controller class directly into the asp.net ascx code on a view page. Thanks
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254480", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/147335/" ] }
254,558
In almost all circumstances, primary keys are not a part of your business domain. Sure, you may have some important user-facing objects with unique indices ( UserName for users or OrderNumber for orders) but in most cases, there is no business need to overtly identify domain objects by a single value or set of values, to anyone but perhaps an administrative user. Even in those exceptional cases, especially if you are using global unique identifiers (GUID) , you will like or want to employ an alternate key rather than expose the primary key itself. So, if my understanding of domain-driven design is accurate, primary keys need not and thus should not be exposed, and good riddance. They're ugly and cramp my style. But if we choose not to include primary keys in the domain model, there are consequences: Naively, data transfer objects (DTO) that derive exclusively from combinations of domain models will not have primary keys Incoming DTO's will not have a primary key So, is it safe to say that if you are really going to stay pure and eliminate primary keys in your domain model, you should be prepared to be able to handle every request in terms of unique indices on that primary key? Put in another way, which of the following solutions is the correct approach to dealing with identifying particular objects after removing PK in domain models? Being able to identify the objects you need to deal with by other attributes Getting the primary key back in the DTO; ie, eliminating the PK when mapping from persistence to domain, then recombining the PK when mapping from domain to DTO? EDIT: Let's make this concrete. Say my domain model is VoIPProvider which includes fields like Name , Description , URL , as well as references like ProviderType , PhysicalAddress , and Transactions . Now let's say I want to build a web service that will allow privileged users to manage VoIPProvider s. Perhaps a user-friendly ID is useless in this case; after all, VoIP providers are companies whose names tend to be distinct in the computer sense and even distinct enough in the human sense for business reasons. So it may be enough to say that a unique VoIPProvider is completely determined by (Name, URL) . So now let's say I need a method PUT api/providers/voip so that privileged users can update VoIP providers. They send up a VoIPProviderDTO , which includes many but not all of the fields from the VoIPProvider , including some flattening potentially. However, I can't read their minds, and they still need to tell me which provider we are talking about. It seems I have 2 (maybe 3) options: Include a primary key or alternate key in my domain model and send it to the DTO, and vice versa Identify the provider we care about via the unique index, like (Name, Url) Introduce some sort of intermediate object that can always map between persistence layer, domain, and DTO in a way that does not expose implementation details about the persistence layer - say by introducing an in-memory temporary identifier when going from domain to DTO and back,
This is how we have solved this for more than 15 years (since before the term "domain driven design" was even invented): when mapping the domain model to a database implementation or a class model in a specific programming language, you have a simple, consistent rule like "for each domain object mapped to a relational table, the primary key is "TablenameID"". This primary key is completely artificial; it always has the same type, and no business meaning - just a surrogate key. The "graphical version" of your domain model (the one you use to talk to your domain experts) does not contain primary keys. You don't expose them directly to the experts (but you expose them to anyone who is actually implementing code for the system). So whenever you need a primary key for technical purposes (like mapping relations to a database), you have one available, but as long as you don't want to "see it", change your level of abstraction to the "domain experts model". And you don't have to maintain "two models" (one with PKs and one without); instead, maintain only a model without PKs and use a code generator to create the DDL for your DB, which adds the PK automatically according to the mapping rules. Note that this does not forbid adding any "business keys" like an additional "OrderNumber", besides the surrogate OrderID . Technically these business keys become alternate keys when mapping to your database. Just avoid using these for creating references to other tables; always prefer the surrogate keys if possible, as this will make things a hell of a lot easier. To your comment: using a surrogate key for identifying records is not a business-related operation, it is a purely technical operation. To make this clear, look at your example: as long as you don't define additional unique constraints, it would be possible to have two VoIPProvider objects with the same combination of (name,url), but different VoIPProviderIDs.
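A very small Python sketch of the "surrogate key lives only in the mapping layer" idea, using the names from the question. The persistence side is faked with a dict and the mapper class is invented for illustration; the point is only that VoIPProviderID never appears on the domain object itself.

    from dataclasses import dataclass

    @dataclass
    class VoIPProvider:                      # domain model: no key field at all
        name: str
        url: str
        description: str = ""

    class VoIPProviderMapper:
        """Owns the surrogate key (VoIPProviderID, per the naming rule above)."""
        def __init__(self):
            self._rows = {}                  # VoIPProviderID -> row dict
            self._next_id = 1

        def insert(self, provider):
            row = {"VoIPProviderID": self._next_id,
                   "Name": provider.name, "Url": provider.url,
                   "Description": provider.description}
            self._rows[self._next_id] = row
            self._next_id += 1
            return row["VoIPProviderID"]     # only the mapping layer ever sees this

        def find_by_business_key(self, name, url):
            for row in self._rows.values():
                if (row["Name"], row["Url"]) == (name, url):
                    return VoIPProvider(row["Name"], row["Url"], row["Description"])
            return None

    mapper = VoIPProviderMapper()
    mapper.insert(VoIPProvider("Acme VoIP", "https://acme.example"))
    print(mapper.find_by_business_key("Acme VoIP", "https://acme.example"))

Callers work in terms of the business key (Name, Url), while the surrogate key stays a private implementation detail of the mapper, which is the separation the answer recommends.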
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254558", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51007/" ] }
254,576
Consider the following class: class Person: def __init__(self, name, age): self.name = name self.age = age My coworkers tend to define it like this: class Person: name = None age = None def __init__(self, name, age): self.name = name self.age = age The main reason for this is that their IDE of choice shows the properties for autocompletion. Personally, I dislike the latter one, because it makes no sense that a class has those properties set to None . Which one would be better practice and for what reasons?
I call the latter bad practice under the "this does not do what you think it does" rule. Your coworker's position can be rewritten as: "I am going to create a bunch of class-static quasi-global variables which are never accessed, but which do take up space in the various class's namespace tables ( __dict__ ), just to make my IDE do something."
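A short demonstration of what those class-level assignments actually create: class attributes that sit in Person.__dict__ and are merely shadowed by the instance attributes set in __init__. The Broken subclass is a contrived addition to show one way the pattern can mask bugs.

    class Person:
        name = None           # class attribute, shared by all instances
        age = None

        def __init__(self, name, age):
            self.name = name  # instance attribute, shadows the class one
            self.age = age

    p = Person("Alice", 30)
    print(p.name, Person.name)          # Alice None  -- two different attributes
    print("name" in Person.__dict__)    # True: lives in the class namespace
    print("name" in p.__dict__)         # True: and again on every instance

    # The class-level value is what you silently get when __init__ forgets
    # (or mistypes) an assignment:
    class Broken(Person):
        def __init__(self):
            pass

    print(Broken().age)                 # None, instead of a loud AttributeError

Without the class-level None, the last line would raise AttributeError and point straight at the bug, which is one concrete reading of "this does not do what you think it does".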
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254576", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/146166/" ] }
254,688
Sometimes compilers inline function calls. That means that they move the code of the called function into the calling function. This makes things slightly faster because there's no need to push and pop stuff on and off the call stack. So my question is, why don't compilers inline everything? I assume it would make the executable notably faster. The only reason I can think of is a significantly larger executable, but does it really matter these days with hundreds of GB of memory? Isn't the improved performance worth it? Is there any other reason why compilers don't just inline all function calls?
First note that one major effect of inlining is that it allows further optimizations to be made at the call site. For your question: there are things which are difficult or even impossible to inline: dynamically linked libraries; dynamically determined functions (dynamic dispatch, called through function pointers); recursive functions (tail recursion can be); functions for which you don't have the code (but link-time optimization allows this for some of them). Then inlining has not only beneficial effects: a bigger executable means more disk space and longer load time; a bigger executable means an increase in cache pressure (note that inlining small enough functions such as simple getters may decrease the executable size and the cache pressure). And finally, for functions which take a non-trivial time to execute, the gain just isn't worth the pain.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254688", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
254,799
In 1989 Felix Lee, John Hayes and Angela Thomas wrote a Hacker's test taking the form of a quiz with many insider jokes, as “ Do you eat slime-molds? ” I am considering the following series: 0015 Ever change the value of 4? 0016 ... Unintentionally? 0017 ... In a language other than Fortran? Is there a particular anecdote making the number “4” particular in the series? Did some Fortran implementation allow to modify the value of constants? Was this possible in other languages in common use at that time?
In the old days (1970s and before) some computers did not have any MMU (and this is true today for very cheap microcontrollers). On such systems, there is no memory protection so no read-only segment in the address space , and a buggy program could overwrite a constant (either in data memory, or even inside the machine code). The Fortran compilers at that time passed formal arguments by reference . So if you did CALL FUN(4) and the SUBROUTINE FUN(I) changes I in its body (e.g. with a statement I = I + 1 ), you could have a disaster, changing 4 into 5 in the caller (or worse). This was also true on the first microcomputers like the original IBM PC AT from 1984, with MS-DOS. FWIW, I'm old enough to have used, as a teenager in the early 1970s, such computers: IBM1620 and CAB500 (in a museum: these are 1960s-era computers!). The IBM1620 was quite fun: it used in-memory tables for additions and multiplications (and if you overwrote these tables, chaos ensued). So not only could you overwrite a 4, you could even overwrite every future 2+2 addition or 7*8 multiplication (but I have really forgotten these dirty details, so I could be wrong). Today, you might overwrite the BIOS code in flash memory, if you are persevering enough. Sadly, I don't find that fun any more, so I never tried. (I'm even afraid of installing some LinuxBios on my motherboard). On current computers and operating systems passing a constant by reference and changing it inside the callee will just provoke a segmentation violation , which sounds familiar to many C or C++ developers. BTW: to be nitpicking: overwriting 4 is not a matter of language, but of implementation.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254799", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/98803/" ] }
254,978
A lot of people claim that "comments should explain 'why', but not 'how'". Others say that "code should be self-documenting" and comments should be scarce. Robert C. Martin claims that (rephrased in my own words) often "comments are apologies for badly written code". My question is the following: What's wrong with explaining a complex algorithm or a long and convoluted piece of code with a descriptive comment? This way, instead of other developers (including yourself) having to read the entire algorithm line by line to figure out what it does, they can just read the friendly descriptive comment you wrote in plain English. English is 'designed' to be easily understood by humans. Java, Ruby or Perl, however, have been designed to balance human-readability and computer-readability, thus compromising the human-readability of the text. A human can understand a piece of English much faster than he/she can understand a piece of code with the same meaning (as long as the operation isn't trivial). So after writing a complex piece of code in a partly human-readable programming language, why not add a descriptive and concise comment explaining the operation of the code in friendly and understandable English? Some will say "code shouldn't be hard to understand", "make functions small", "use descriptive names", "don't write spaghetti code". But we all know that's not enough. These are mere guidelines - important and useful ones - but they do not change the fact that some algorithms are complex, and therefore hard to understand when reading them line by line. Is it really that bad to explain a complex algorithm with a few lines of comments about its general operation? What's wrong with explaining complicated code with a comment?
In layman's terms: There's nothing wrong with comments per se. What's wrong is writing code that needs those kind of comments, or assuming that it's OK to write convoluted code as long as you explain it friendly in plain English. Comments don't update themselves automatically when you change the code. That's why often times comments are not in sync with code. Comments don't make code easier to test. Apologizing is not bad. What you did that requires apologizing for (writing code that isn't easily understandable) is bad. A programmer that is capable of writing simple code to solve a complex problem is better than one that writes complex code and then writes a long comment explaining what his code does. Bottom line: Explaining yourself is good, not needing to do so is better.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254978", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
254,984
C and C++ compiles adjacent string literals as a single string literal. For example this: "Some text..." "and more text" is equivalent to: "Some text...and more text" In other C-family languages like C# or Java, this is a syntax error (which is perfectly fine BTW). What is the rationale/historical reason for C and C++ to do this?
The original C language was designed in 1969-1972 when computing was still dominated by the 80 column punched card. Its designers used 80 column devices such as the ASR-33 Teletype. These devices did not automatically wrap text, so there was a real incentive to keep source code within 80 columns. Fortran and Cobol had explicit continuation mechanisms to do so, before they finally moved to free format. It was a stroke of brilliance for Dennis Ritchie (I assume) to realise that there was no ambiguity in the grammar and that long ASCII strings could be made to fit into 80 columns by the simple expedient of getting the compiler to concatenate adjacent literal strings. Countless C programmers were grateful for that small feature. Once the feature is in, why would it ever be removed? It causes no grief and is frequently handy. I for one wish more languages had it. The modern trend is to have extended strings with triple quotes or other symbols, but the simplicity of this feature in C has never been outdone.
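For what it's worth, the feature did survive into at least one later language: Python performs the same compile-time concatenation of adjacent string literals, and it is still commonly used to keep long strings within 80 columns. A small illustration (the messages themselves are made up):

    message = ("Some text..."
               "and more text")
    assert message == "Some text...and more text"

    # Handy for long help or log messages without runtime '+' concatenation:
    help_text = ("usage: prog [-h] [-v]\n"
                 "  -h  show this help\n"
                 "  -v  be verbose\n")
    print(help_text)

The concatenation happens when the source is compiled, so there is no runtime cost, just as in C.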
{ "source": [ "https://softwareengineering.stackexchange.com/questions/254984", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139479/" ] }
255,067
Occasionally while typing something up that relates to a case-sensitive programming language I end up starting a sentence with a function name. Now the rules of English state that the first word in a sentence needs to be capitalized; the function name is lowercase, though. If you are wondering what I could be saying that would result in the first word being a function name, take this example: Your fread implementation is broken. fread needs to return how many bytes were read. I understand that I could change the second instance of fread to It but I want to know the best way of handling this other than just rewriting the sentence. Should I capitalize the function name? The only way I would like to hear "rewrite the sentence" as an answer is if starting the sentence with a function name violates some English rule that I am not aware of. Edit: I really thank everyone for these answers. They have changed and improved my insight into the issue. I have learned quite a bit from this. I am very surprised that I did not think of these simple but good solutions. I do think my stance on altering the sentence was too tough, and now I realize, thanks to these good answers, that overall altering the sentence appears to be the best option for dealing with these cases, be it adding parentheses after the function name, saying "the function" before the function name, or, if available, using formatting for the function name.
In typography this is generally handled by using a different rendering, whether or not it's the start of a sentence, to indicate that what's hitting the eye is not just a word in the sentence but a special entity. Your fread implementation is broken. fread needs to return how many bytes were read. Depending on how formal a document is, it can adopt the same approach. In any case doing so eliminates the issue you identify, so you may wish to use it for that reason alone. In the plain text world (as noted by several in comments and other answers), appending parentheses to function names helps a bit, but since we also need to refer to entities that don't take parentheses, this has only limited value. In general, short of adopting a convention of surrounding the text with special characters like brackets and asterisks, in the plain text world there's little option but to sidestep the issue by restructuring the sentence.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255067", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139719/" ] }
255,165
I've recently been looking at how me and my team uses Git and how our workflows work. We currently use a feature-branch workflow which seems to work well. I've also seen some individuals on our team use workflow based on git stash . The workflow goes something like this: Work on a main branch (like master ) Make commits as you go If you need to get changes or switch branches, push your uncommitted changes onto the stash Once your updating is done, pop the changes off the stash. I should mention that this workflow is used instead of a feature branch workflow. Instead of taking a branch and working on it, developers here only ever work on a single branch and push/pop off the stack as they see fit. I actually don't think this is a great workflow, and branching would be more appropriate than using git stash in this way. I can see the value of git stash as an emergency operation, but not for using it in a daily, regular workflow. Would using git stash regularly be considered an anti-pattern? If so, what are some specific problems that could arise? If not, what are the benefits?
From the Git SCM Book : Often, when you’ve been working on part of your project, things are in a messy state and you want to switch branches for a bit to work on something else. The problem is, you don’t want to do a commit of half-done work just so you can get back to this point later. The answer to this issue is the git stash command. Stashing takes the dirty state of your working directory — that is, your modified tracked files and staged changes — and saves it on a stack of unfinished changes that you can reapply at any time. Given this description, I would say this is an Anti Pattern. An overly simplified explanation of Git Stash would be that it is the "Cut and Paste" of source control. You take a bunch of changed files, "stash" them away in a holding pen outside of Git's normal branching workflow, and then reapply those changes to a different branch at a later date. Going back a little further, committing to master is the anti pattern here. Use branches. That's what they were designed for. It really boils down to this: You can hammer a screw into the wall and it will hold up a picture, but using a screwdriver is what you should do. Don't use a hammer when the screwdriver is sitting right beside you. About Committing "Broken" Code While the following is opinion, I have come to this opinion from experience. Commit early, and commit often. Commit as much broken code as you want. View your local commit history as "save points" while you hack away at something. Once you've done a logical piece of work, make a commit. Sure it might break everything, but that doesn't matter as long as you don't push those commits. Before pushing, rebase and squash your commits. Create new branch Hack hack hack Commit broken code Polish the code and make it work Commit working code Rebase and Squash Test Push when tests are passing For the OP, this Linux kernal message thread might be of interest, because it kind of sounds like some members of the OP's team is using Git in a similar manner. @RibaldEddie said in a comment below: First of all, a stash is not outside of a "branching workflow" since under the hood a stash is just another branch. (at the risk of incurring the wrath of many people) Linus said: With "git stash", you can have multiple different stashed things too, but they don't queue up on each other - they are just random independent patches that you've stashed away because they were inconvenient at some point. What I think @RibaldEddie is trying to say is that you can use git stash in a feature branch workflow -- and this is true. It's not the use of git stash that is the problem. It is the combination of committing to master and using git stash . This is an anti pattern. Clarifying git rebase From @RibaldEddie's comment: Rebasing is much more like copy-pasting and even worse modifies committed history. (Emphasis mine) Modifying commit history is not a bad thing, as long as it is local commit history . If you rebase commits that you've already pushed, you'll essentially orphan anyone else using your branch. This is bad. Now, say you've made several commits during the course of a day. Some commits were good. Some... not so good. The git rebase command in conjunction with squashing your commits is a good way to clean up your local commit history. It's nice to merge in one commit to public branches because it keeps the commit history of your team's shared branches clean. After rebasing, you'll want to test again, but if tests pass then you can push one clean commit instead of several dirty ones. 
There is another interesting Linux kernel thread on clean commit history. Again, from Linus:

    I want clean history, but that really means (a) clean and (b) history.

    People can (and probably should) rebase their private trees (their own work). That's a cleanup. But never other peoples code. That's a "destroy history".

    So the history part is fairly easy. There's only one major rule, and one minor clarification:

    - You must never EVER destroy other peoples history. You must not rebase commits other people did. Basically, if it doesn't have your sign-off on it, it's off limits: you can't rebase it, because it's not yours.

    Notice that this really is about other peoples history, not about other peoples code. If they sent stuff to you as an emailed patch, and you applied it with "git am -s", then it's their code, but it's your history. So you can go wild on the "git rebase" thing on it, even though you didn't write the code, as long as the commit itself is your private one.

    - Minor clarification to the rule: once you've published your history in some public site, other people may be using it, and so now it's clearly not your private history any more.

    So the minor clarification really is that it's not just about "your commit", it's also about it being private to your tree, and you haven't pushed it out and announced it yet.

    ...

    Now the "clean" part is a bit more subtle, although the first rules are pretty obvious and easy:

    - Keep your own history readable. Some people do this by just working things out in their head first, and not making mistakes. But that's very rare, and for the rest of us, we use "git rebase" etc while we work on our problems. So "git rebase" is not wrong. But it's right only if it's YOUR VERY OWN PRIVATE git tree.

    - Don't expose your crap. This means: if you're still in the "git rebase" phase, you don't push it out. If it's not ready, you send patches around, or use private git trees (just as a "patch series replacement") that you don't tell the public at large about.

(emphasis mine)

Conclusion

In the end, the OP has some developers doing this:

    git checkout master
    (edit files)
    git commit -am "..."
    (edit files)
    git stash
    git pull
    git stash (pop|apply)

There are two problems here:

- Developers are committing to master. Lock this down immediately. Really, this is the biggest problem.
- Developers are constantly using git stash and git pull on master when they should be using feature branches.

There is nothing wrong with using git stash -- especially before a pull -- but using git stash in this manner is an anti-pattern when there are better workflows in Git. Their use of git stash is a red herring. It is not the problem. Committing to master is the problem.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255165", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36853/" ] }
255,180
I saw Martin Odersky's "The Trouble with Types" presentation. He divided programming languages along two dimensions in the "Type Systems Landscape" chart: a "Static/Dynamic" dimension and a "Strong/Weak" one. I searched for a definition of "strong/weak type systems" and its difference from "static/dynamic" type systems that could explain why Python/Ruby have a "strong type system". I found something on Wikipedia, but it hasn't satisfied me. Now I want to know: what does "strong/weak" mean, and how does Python have a strong type system (like Scala)? Is it about inheritance / composable / primitive types in a language or not? If not, what is the concept and measurement unit for these features (inheritance / compose / primitive / container / ...) in type systems?
A strong type system is a type system that has a compile-time restriction or run-time feature that you find attractive. A weak type system is a type system which lacks that restriction or feature. Seriously, that's it. You read the Wikipedia page, so you know that there are at least eleven different mutually incompatible meanings of "strongly typed". The term is useless unless clarified; no two people discussing a type system need have the same definition in their heads of "strongly typed". The sensible thing to do is therefore to avoid using "strong" and "weak" entirely; when you talk about a feature of a type system, simply describe the feature rather than characterizing it as "strong" or "weak". If you mean "I like languages where the compiler assigns static types to expressions and searches for type errors" then say that. If you mean "I like languages where every object has one or more associated data types and can describe those types reflectively at runtime" then say that . Don't say "strong" because someone will think you mean "statically typed" and someone will think you mean "has run-time reflection", and those two things have pretty much nothing to do with each other. Keep in mind also that many people use "strong" to mean that a particular type system restriction is impossible to ignore, and some use "strong" to mean the opposite: that a restriction is encouraged but can be avoided. For example, someone who believes that a type system is "strong" if type errors are impossible at runtime would characterize C# as "weakly" typed, because C# allows the developer to insert type conversions that will fail at runtime. (example: short x = (short)(object)(123.ToString()); ) Someone who believes that a type system is "strong" if it encourages writing programs where the compiler finds many but not all type errors at compile time would characterize C# as strongly typed for the same reason . If the same feature of the type system can be reasonably characterized as both "strong" and "weak" then we know that the terms are useless and should be avoided.
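To make the C# example above concrete for Java readers (Java being the language used in several of the other questions here), the following is a minimal sketch of the same phenomenon: the compiler accepts a conversion that can only fail at run time. The class and variable names are illustrative, not taken from any real codebase.

    public class CastDemo {
        public static void main(String[] args) {
            Object o = "123";            // compiles: a String is an Object
            Integer n = (Integer) o;     // compiles: the compiler trusts the downcast
            System.out.println(n);       // never reached: ClassCastException at run time
        }
    }

Whether you call Java "strongly" typed because the failure is detected and reported at run time, or "weakly" typed because the compiler let the bad cast through, is exactly the kind of ambiguity described above.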
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255180", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/148030/" ] }
255,227
Suppose I use this ActionScript API released under MIT license to build a software: http://www.cove.org/ape/docs/api/ Can I sell that software? Do I need to give the source code of my software away? Is anyone receiving my software permitted to resell the software?
You can sell the software. No, you are not compelled to provide source code. Anyone who receives source code may do as the license permits. This does not extend to binary distributions. Read the MIT license. Read the whole thing and understand it. It was meant to be read by ordinary people, unlike other licenses that are very complex: Copyright (c) year copyright holders Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255227", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/168044/" ] }
255,275
At one of my employers, we worked on a REST (but it also applies to SOAP) API. The client, which is the application UI, would make calls over the web (LAN in typical production deployments) to the API. The API would make calls to the database. One theme that recurs in our discussions is performance: some people on the team believe that you should not have multiple database calls (usually reads) from a single API call because of performance; you should optimize them so that each API call has only (exactly) one database call. But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (eg. SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can). TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why? To be clear, I'm talking about order of magnitude -- I know that it depends on specifics (machine hardware, choice of API and DB, etc.) If I have a call that takes O(milliseconds), does optimizing for DB calls that take an order of magnitude less, actually matter? Or is there more to the problem than this? Edit: for posterity, I think it's quite ridiculous to make claims that we need to improve performance by combining database calls under these circumstances -- especially with a lack of profiling. However, it's not my decision whether we do this or not; I want to know what the rationale is behind thinking this is a correct way of optimizing web API calls.
But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (e.g. SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can).

The Logic

In theory, you are correct. However, there are a few flaws with this rationale:

- From what you stated, it's unclear if you actually tested / profiled your app. In other words, do you actually know that the network transfers from the app to the API are the slowest component? Because that is intuitive, it is easy to assume that it is. However, when discussing performance, you should never assume. At my employer, I am the performance lead. When I first joined, people kept talking about CDNs, replication, etc. based on intuition about what the bottlenecks must be. Turns out, our biggest performance problems were poorly performing database queries.

- You are saying that because databases are good at retrieving data, the database is necessarily running at peak performance, is being used optimally, and there is nothing that can be done to improve it. In other words, databases are designed to be fast, so I should never have to worry about it. Another dangerous line of thinking. That's like saying a car is meant to move quickly, so I don't need to change the oil.

- This way of thinking assumes a single process at a time, or put another way, no concurrency. It assumes that one request cannot influence another request's performance. Resources are shared, such as disk I/O, network bandwidth, connection pools, memory, CPU cycles, etc. Therefore, reducing one database call's use of a shared resource can prevent it from causing other requests to slow down. When I first joined my current employer, management believed that tuning a 3 second database query was a waste of time. 3 seconds is so little, why waste time on it? Wouldn't we be better off with a CDN or compression or something else? But if I can make a 3 second query run in 1 second, say by adding an index, that is 2/3 less blocking, 2/3 less time spent occupying a thread, and more importantly, less data read from disk, which means less data flushed out of the in-RAM cache.

The Theory

There is a common conception that software performance is simply about speed. From a purely speed perspective, you are right. A system is only as fast as its slowest component. If you have profiled your code and found that the Internet is the slowest component, then everything else is obviously not the slowest part. However, given the above, I hope you can see how resource contention, lack of indexing, poorly written code, etc. can create surprising differences in performance.

The Assumptions

One last thing. You mentioned that a database call should be cheap compared to a network call from the app to the API. But you also mentioned that the app and the API servers are in the same LAN. Therefore, aren't both of them comparable as network calls? In other words, why are you assuming that the API transfer is orders of magnitude slower than the database transfer when they both have the same available bandwidth? Of course the protocols and data structures are different, I get that, but I dispute the assumption that they are orders of magnitude different.

Where it gets murky

This whole question is about "multiple" versus "single" database calls. But it's unclear how many are multiple.
Because of what I said above, as a general rule of thumb, I recommend making as few database calls as necessary. But that is only a rule of thumb. Here is why:

- Databases are great at reading data. They are storage engines. However, your business logic lives in your application. If you make a rule that every API call results in exactly one database call, then your business logic may end up in the database. Maybe that is ok. A lot of systems do that. But some don't. It's about flexibility.

- Sometimes to achieve good decoupling, you want to have 2 database calls separated. For example, perhaps every HTTP request is routed through a generic security filter which validates from the DB that the user has the right access rights. If they do, proceed to execute the appropriate function for that URL. That function may interact with the database.

- Calling the database in a loop. This is why I asked how many is multiple. In the example above, you would have 2 database calls. 2 is fine. 3 may be fine. N is not fine. If you call the database in a loop, you have now made performance linear, which means it will take longer the more that is in the loop's input. So categorically saying that the API network time is the slowest completely overlooks anomalies like 1% of your traffic taking a long time due to a not-yet-discovered loop that calls the database 10,000 times. (A concrete sketch of this case follows below.)

- Sometimes there are things your app is better at, like some complex calculations. You may need to read some data from the database, do some calculations, then based on the results, pass a parameter to a second database call (maybe to write some results). If you combine those into a single call (like a stored procedure) just for the sake of only calling the database once, you have forced yourself to use the database for something which the app server might be better at.

- Load balancing: You have 1 database (presumably) and multiple load balanced application servers. Therefore, the more work the app does and the less the database does, the easier it is to scale, because it's generally easier to add an app server than set up database replication. Based on the previous bullet point, it may make sense to run a SQL query, then do all the calculations in the application, which is distributed across multiple servers, and then write the results when finished. This could give better throughput (even if the overall transaction time is the same).

TL;DR

TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?

Yes, but only to a certain extent. You should try to minimize the number of database calls when practical, but don't combine calls which have nothing to do with each other just for the sake of combining them. Also, avoid calling the database in a loop at all costs.
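To make the loop problem mentioned above concrete, here is a hedged JDBC sketch contrasting an N+1 pattern with a single grouped query. The table and column names (orders, customer_id) are invented for illustration and not taken from the question.

    import java.sql.*;
    import java.util.*;

    class OrderCounts {
        // Anti-pattern: one query per customer, so cost grows linearly with the list size.
        static Map<Integer, Integer> perCustomerLoop(Connection conn, List<Integer> ids) throws SQLException {
            Map<Integer, Integer> counts = new HashMap<>();
            for (int id : ids) {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        counts.put(id, rs.getInt(1));
                    }
                }
            }
            return counts;
        }

        // Better: one round trip that returns all counts at once.
        static Map<Integer, Integer> perCustomerGrouped(Connection conn) throws SQLException {
            Map<Integer, Integer> counts = new HashMap<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                        "SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id")) {
                while (rs.next()) {
                    counts.put(rs.getInt(1), rs.getInt(2));
                }
            }
            return counts;
        }
    }

The point is not that JDBC is special; it is that N round trips inside a loop multiply every per-call cost (latency, connection-pool contention, locking) by N, which is exactly the anomaly that a single "average request time" measurement can hide.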
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255275", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/19993/" ] }
255,366
This question came up in one of my college classes. The professor only gave the answer that it was more descriptive, but it seems as though <b> and <i> are rather explicit in their meaning and are easier to type than <strong> and <em>. What were the official arguments for the deprecation of these tags?
Last summer, I read the complete HTML5 specification, and every previous HTML specification (even the abandoned ones), and all CSS specs I could find, and a lot of XML specs. Since I love semantically rich hypertext documents, let me give you the idea behind the relevant HTML semantics in HTML5.

Before HTML5

Before HTML5, i and b were indeed out of fashion. The reason was that they essentially worked like em and strong, respectively, but with a focus on presentation and not on semantics (which is bad). Indeed, i meant that the text should be in italics (it said something about how the text should be rendered on-screen). On the other hand, em meant that the text was to be emphasised (it said something about the semantics of the text). There is an important theoretical difference here. If you use em, the user agent (=browser) knows the text should be emphasised, so it can render it in italics if the document is displayed on-screen (or all-caps if formatting is not possible, or maybe even in boldface if the user prefers that), it can pronounce it differently if the document is spoken to the user, etc. Notice that emphasis really is about semantics. For instance, the phrases

The *cat* is mine. (=not the dog!)
The cat is *mine*. (=not yours!)

do not have the same meaning. The same difference applies to b (boldface font) and strong (strong emphasis). A general principle of digital writing in general, and of hypertext authoring in particular, is that you should separate content and style. In hypertext authoring, this means that the content should be in the HTML file, and the style should be in a CSS file (or a number of CSS files). A different but related principle is that the document should be rich in semantics (like marking up headers, footers, lists, emphases, addresses, navigational areas, etc.). This has a number of advantages:

- It is much easier for computer programs to interpret the document. These programs include browsers, text-to-speech applications, search engines, and digital assistants. (For example, the browser can let you save an address to your address book, if only it can find and interpret it. Also, you might know that Microsoft Word can create and automatically update a TOC for you if you mark up your headings correctly.)

- It is much easier to change the style later on. (If you want to change the colour of all your third-level headings in your 860-page document, you can change a single line in the stylesheet. If you had mixed content and presentation, you would have to go through the entire document manually. And you would probably miss a heading or two, making the document look unprofessional.)

- You can use different stylesheets depending on the circumstance (is the document being displayed on-screen or printed on paper?). You can even let the end user choose the style herself. (My website offers a number of alternate stylesheets. In IE and FF, you change these using the View menu.)

So, in short, i and b were deprecated because they were HTML tags concerned with presentation, which is totally wrong.

In HTML5

In HTML5, i and b are no longer deprecated. Instead, they are given semantic meaning. So they are now actually about semantics, and not about presentation. As before, you use em to mark up emphasis: "The cat is *mine*." But you use i for almost all other cases where you would use italics in a printed work. For instance, you use i to mark up taxonomic designations: "I like R. norvegicus."
You use i to mark up a phrase in a different language compared to the surrounding text: À la carte You use i to mark up a word when you talk about the word itself: " drink is both a noun and a verb" It is also a good idea to use the class attribute to specify the precise usage (also Google "microformat" and "microdata"). And, of course, in the second case, you should really use the lang attribute to specify the correct language. (Otherwise, for instance , a text-to-speech agent might mispronounce the text.) A year ago or so, the HTML5 specification also said that cite should be used to mark up names of books, films, operas, paintings, etc.: What do you think of Nymphomaniac ? Finally, since long ago, dfn is used to mark up the defining instance of a phrase in a text (like a mathematical definition, or the definition of a term): A group is a set X equipped with a single binary operation * such that... So the italics in your printed book, which can mean a lot of different things, is represented by four different HTML5 tags, which is really great, because semantics is good, as I tried to convince you about earlier. (For instance, you can ask your browser to make a list of all definitions in the text, so you can make sure you know them all before the exam.) Turning to strong and b , the HTML5 specification says that strong should be used to mark up an important part of the text, like a warning or some very important-to-catch word in a sentence. On the other hand, b should be used to mark up things that need to be easy to find in the text, like keywords. I also use b as headings in list items (LIs).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255366", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125618/" ] }
255,404
I understand most of the basic Git/Github concepts by now, however I still have trouble understanding the bigger picture. These are some things that I have managed to get working so far: Push commits Work with branches Integrate Github with Travis CI, a continuous integration system Via Travis CI, automatically build on every commit to master and put the release as a ZIP on Github under releases. However I have only worked on alpha/beta versions of projects so far, so I've never seen versioned releases in practice yet. Thus I want to learn more about versioning, maintaining separate versions, hotfixing the versions, etc. How would I ensure that the following things happen: Have different versions of my project, for example version 1.1.0 and 2.0.0 Have the ability to push hotfixes on the versions, sort of bumping the version to 1.1.1 or 2.0.1, etc. Make a continuous integration system build that version automatically on commit and if succeeds, then publish a release for that specific version. I am doubting between the following options: Do I need to use tags for every version? If so, how can a continuous integration system build releases automatically? Should I create branches for every version? If so, would that not create a whole ton of branches (like an 1.1 and a 2.0 branch, hotfixes go onto that branch of course) How would I specify the version number? Is it okay to have a configuration file that specifies the version number, or are there smarter ways around it? In this case it would be a Java project, if that matters.
You should look at git-flow . It's an excellent (and popular) branching model. Git Flow Summary Branching The main trunks that stay around forever are develop and master . master holds your latest release and develop holds your latest "stable" development copy. Contributors create feature branches (prefixed with feature/ by convention) off of develop : $ git checkout -b feature/my-feature develop and hotfix branches (prefixed with hotfix/ by convention) off of master : # hotfix the latest version of master $ git checkout -b hotfix/hotfix-version-number master # or hotfix from a specific version $ git checkout -b hotfix/hotfix-version-number <starting-tag-name> These branches are "disposable", meaning they have a short lifespan before they are merged back to the main trunks. They are meant to encapsulate small pieces of functionality. Finishing Branches When a contributor is done with a feature branch, they merge it back into develop : $ git checkout develop $ git merge --no-ff feature/my-feature $ git branch -d feature/my-feature When they're done with a hotfix branch, they merge it back into both master and develop so the hotfix carries forward: $ git checkout master $ git merge --no-ff hotfix/hotfix-version-number $ git checkout develop $ git merge --no-ff hotfix/hotfix-version-number $ git branch -d hotfix/hotfix-version-number This is the continuous integration aspect. Releases When you're ready to start packaging up a release, you create a release branch from your "stable" develop branch (same as creating feature branches). You then bump the version number in a tag (described below). Using separate release branches allows you to continue developing new features on develop while you fix bugs and add finishing touches to the release branch. When you're ready to finish the release, you merge the release branch into both master and develop (just like a hotfix ) so that all your changes carry forward. Tagging When you create a release branch or a hotfix branch, you bump the version number appropriately in a tag. With vanilla git, that looks like this: $ git tag -a <tag-name> -m <tag-description> You'll then also have to push the tags (separately) to your remote repository: $ git push --tags It's usually best to use semantic versioning in which your versions take the form major.minor.hotfix . Major bumps are backwards incompatible, whereas minor and hotfix bumps are not backwards incompatible (unless you're in beta, 0.x.x ). Merging As you saw above, git-flow encourages you to merge branches with the following command: $ git merge --no-ff <branch-name> The --no-ff option allows you to maintain all of your branch history without leaving a bunch of branches lying around in the current commit of the repository (so no worries, you won't have a branch for every version). You're also encouraged to pull with $ git pull --rebase So you don't add lots of useless merge commits. You can configure git to do both of these things by default in your .gitconfig . I'll let you look that one up though ;) Browsing versions When someone is looking for a specific version of your codebase, they can checkout the tag by name: # checkout in detached HEAD to browse $ git checkout <tag-name> # OR checkout and create a new local branch (as you might for a hotfix) $ git checkout -b <new-branch-name> <tag-name> Or, if someone is browsing on github, there is also a "tags" tab in the "branches" dropdown. Using the git-flow extension (recommended) My favorite way to use this model is with the git flow extension for git. 
( Edit: Louis has recommended the AVH fork which works better with git describe and might be more active now. Thanks Louis. ) The extension automates all the messy parts (like using merge --no-ff and deleting branches after merging) so that you can get on with your life. For example, with the extension, you can create a feature branch like so: $ git flow feature start my-feature-name and finish it like so $ git flow feature finish my-feature-name The commands for hotfixes and releases are similar, though they use the version number in place of a branch name, like so: # Create hotfix number 14 for this minor version. $ git flow hotfix start 2.4.14 # Create the next release $ git flow release start 2.5.0 Git flow then creates the version tag for you and kindly reminds you to bump the version in any configuration or manifest files (which you could do with a task manager like grunt). Hope that helps :) I'm not sure exactly how you'd integrate it all with your Travis CI setup, but I'm guessing githooks will get you there.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255404", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/115134/" ] }
255,429
According to one page on code.google.com, "left recursion" is defined as follows: Left recursion just refers to any recursive nonterminal that, when it produces a sentential form containing itself, that new copy of itself appears on the left of the production rule. Wikipedia offers two different definitions: In terms of context-free grammar, a non-terminal r is left-recursive if the left-most symbol in any of r’s productions (‘alternatives’) either immediately (direct/immediate left-recursive) or through some other non-terminal definitions (indirect/hidden left-recursive) rewrites to r again. "A grammar is left-recursive if we can find some non-terminal A which will eventually derive a sentential form with itself as the left-symbol." I'm just barely starting out with language creation here, and I'm doing it in my spare time. However when it comes down to selecting a language parser, whether left recursion is supported by this parser or that parser is an issue that immediately comes up front and center. Looking up terms like "sentential form" only leads to further lists of jargon, but the distinction of "left" recursion almost has to be something very simple. Translation please?
A rule R is left-recursive if, in order to find out whether R matches, you first have to find out whether R matches. This happens when R appears, directly or indirectly, as the first term in some production of itself. Imagine a toy version of the grammar for mathematical expressions, with only addition and multiplication to avoid distraction: Expression ::= Multiplication '+' Expression || Multiplication Multiplication ::= Term '*' Term || Term Term ::= Number | Variable As written, there's no left-recursion here — we could pass this grammar to a recursive-descent parser. But suppose you tried to write it this way: Expression ::= Expression '*' Expression || Expression '+' Expression || Term Term ::= Number | Variable This is a grammar, and some parsers can cope with it, but recursive descent parsers and LL parsers can't — because the rule for Expression begins with Expression itself. It should be obvious why in a recursive-descent parser this leads to unbounded recursion without actually consuming any input. It doesn't matter whether the rule refers to itself directly or indirectly; if A has an alternative that starts with B , and B has an alternative that starts with A , then A and B are both indirectly left-recursive, and in a recursive-descent parser their matching functions would lead to endless mutual recursion.
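As an illustration of why this matters in practice, here is a minimal recursive-descent sketch in Java (the class and method names are hypothetical; no parser library is assumed). The first method follows the left-recursive rule literally and never terminates; the second parses the same language by consuming a Term first and then looping.

    class ExprParser {
        private final String[] tokens;
        private int pos = 0;

        ExprParser(String[] tokens) { this.tokens = tokens; }

        // Literal translation of:  Expression ::= Expression '+' Term | Term
        // It calls itself before consuming any input, so it recurses forever (StackOverflowError).
        int parseExpressionLeftRecursive() {
            int left = parseExpressionLeftRecursive(); // infinite recursion
            return left;                               // never reached
        }

        // Equivalent grammar with the recursion turned into a loop:
        // Expression ::= Term ('+' Term)*
        int parseExpression() {
            int value = parseTerm();
            while (pos < tokens.length && tokens[pos].equals("+")) {
                pos++;                 // consume '+'
                value += parseTerm();  // consume the next Term
            }
            return value;
        }

        int parseTerm() {
            return Integer.parseInt(tokens[pos++]); // numbers only, for brevity
        }
    }

For example, new ExprParser(new String[]{"1", "+", "2", "+", "3"}).parseExpression() evaluates to 6, while the left-recursive variant overflows the stack without consuming a single token, which is precisely why recursive-descent and LL parsers reject left-recursive rules.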
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255429", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100669/" ] }
255,696
Just ran across this term here: http://www.codemesh.io/codemesh2014/viktor-klang "We'll demonstrate the Flow API—a lifted representation—as well as a pluggable way of transforming the lifted representation into the execution representation—Flow Materialization." Googling did not help much.
I am not familiar with the Flow API. The term “lifting” comes from category theory. In programming languages such as Haskell or Scala, a lift function takes a function A => B , and somehow performs magic so that the lifted function F[A] => F[B] can be applied to a functor or monad F[A] . A concrete example using Scala's Seq container: Assume we have a function def double(x: Int): Int = 2 * x , and a sequence val xs = Seq(1, 2, 3) . We cannot double(xs) due to incompatible types. But if we obtain a val doubleSeq = liftToSeq(double) , we can do doubleSeq(xs) , which evaluates to Seq(2, 4, 6) . Here, liftToSeq can be implemented as def liftToSeq[A, B](f: A => B): (Seq[A] => Seq[B]) = (seq: Seq[A]) => seq.map(f) The Seq(…) constructor can also be seen as a lifting operation, which lifts the values 1, 2, 3 into a Seq instance, thus allowing us to use list abstractions for these values. Monads allow us to encapsulate the inner workings of some type by offering a watertight but composable interface. Using a lifted representation can make it easier to reason about a computation. Using such abstractions also means that we lose knowledge of the abstracted-away specifics, but those are needed for providing an efficient implementation under the hood (finding a suitable execution representation).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255696", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13154/" ] }
255,708
Automatic documentation generation can be done with a variety of tools, GhostDoc being one of the more prominent. However, by definition, everything it generates is redundant. It takes a look at names of methods, classes, etc. and outputs English that might explain them more verbosely. In the best case, it does what the reader could already do in their head (examples taken from here ): /// <summary> /// Initializes a new instance of the <see cref="Person"/> class. /// </summary> public Person() ... In the worst, it can actually end up generating bizarre documentation that is actually misleading in its attempt to heuristically figure out the meaning of names: /// <summary> /// Riches the text selection changed. /// </summary> /// <param name="richTextBox">The rich text box.</param> private void RichTextSelection_Changed(System.Windows.Controls.RichTextBox richTextBox) ... It seems that the attitude with GhostDoc is, "it's intrinsically better to have some kind of formal XML documentation", but when that documentation is 100% redundant, why? Isn't it just wasting a ton of space at best? At my workplace, we have to document everything, and almost always with GhostDoc's auto-generated docs. Do you do this, and are there any rational reasons not to simply leave code undocumented if you aren't going to actually write the documentation yourself?
In a statically typed language, Javadoc-style documentation is not for the authors, it's for the consumers. Autogeneration simply makes it easier for the authors to maintain the documentation for other people to consume. If you're using a statically typed language and are not writing a library for third party consumption, autogeneration doesn't buy you much, and in my experience is rarely used. If you're using a dynamically typed language, javadoc-style documentation is often used to document the types, even for internal use only, but autogeneration doesn't know the types, so all it saves you is avoiding manual copying of the boilerplate. Either way, don't think of autogeneration as producing a finished product. Think of it as producing the boilerplate for you, so any changes you make manually are significant.
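As a hedged illustration of "producing the boilerplate for you to finish", compare what a GhostDoc-style stub says with the same comment after a human adds the part a consumer actually needs. The class and method below are invented for this example, not taken from any real API.

    import java.math.BigDecimal;

    class PricingService {
        // An autogenerated stub would read roughly:  /** Gets the discount for. */
        // After filling in what a caller actually needs to know:

        /**
         * Returns the discount to apply to the customer's next order,
         * as a fraction between 0 and 1 (never null).
         *
         * @param orderTotal the pre-discount total; must be non-negative
         * @throws IllegalArgumentException if orderTotal is negative
         */
        BigDecimal discountFor(BigDecimal orderTotal) {
            if (orderTotal.signum() < 0) {
                throw new IllegalArgumentException("orderTotal must be non-negative");
            }
            // Flat 10% discount; a stand-in for real pricing rules.
            return new BigDecimal("0.10");
        }
    }

The generated text is only valuable once it says something the method signature does not already say: units, ranges, null behaviour, and failure modes.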
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255708", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125671/" ] }
255,757
I'm struggling with how to keep track of what I and the people on my team actually do each day. I get a good broad picture by going over completed cards each week, and stand-ups help a bit, but I feel like I don't have a good handle on the day-to-day workings of my team. Cards will stay in progress for days on end without an update at the daily stand-up, and some engineers on my team aren't the most communicative. I've thought about implementing some sort of daily record that everyone fills out (via a mailing list or a shared Google doc), but this seems fairly cumbersome and manual. Monitoring GitHub activity does an OK job but can be a little overwhelming with how many emails it sends out every day. I've thought about trying to build a digest system for it, but I don't have the time to spare. What strategies have you implemented to stay on top of what your team is doing every day so that you can measure work on "in progress" tasks?
I talk to them. Technology cannot solve social problems. You have short morning standups. What did you do yesterday? What will you do today? Any impediments? If something sounds fishy (or I'm curious), I stop and ask questions: "You were working on XYZ yesterday, how'd that turn out?". This forces people to pay attention, and to actually know what's going on. It also keeps you the team lead in the loop (and paying attention, and knowing actually what's going on). This needs to be on time, and short (10 minutes max ). Anything else and people won't "shelve" work. They'll stop and wait for the standup and then take time to get started again. Some will do that anyways, but it's largely unavoidable. Then I stop by everyone's desk in the afternoon. Not every afternoon (though it might be more than every afternoon for new people), not at the same time, but around the same time (so it's both informal, and regular). "Any problems? Any impediments?" You'll be surprised how often you'll encounter problems when people are one on one. If people have no problems, great; get back to work. If they don't have problems all week ? Problem. You're not challenging them enough, or they're not opening up. Ask how XYZ (that they mentioned in standup) is going. Make them explain things. This isn't micromanagement. You're not telling them how to do their jobs. You're not babysitting them. You're there to remove impediments from their day to day life. You need information to do that. As long as you keep your team out of meetings, and project managers out of their cubes, then one person stopping by to help once a day isn't going to cause them grief. But all these interactions need to come from the "I am here to help you" vein. Another thing I will do is review changesets (by myself, informally). I can then see how frequently people check in, how large their changesets are, how that matches what they reported, how often they re-do things, how many bugfixes they have, and so on. A work item changing status to "done" is nearly meaningless. Look at the code. Does it seem done? note: One extremely serious side point: how big is your team? Is it more than 7 people? Of course you won't be able to keep track of everything going on if your team is too big.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255757", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/148629/" ] }
255,817
I don't know if there are any accepted names for these patterns (or anti-patterns), but I like to call them what I call them here. Actually, that would be Question 1: What are accepted names for these patterns, if any? Suppose there is a method that accepts a bunch of parameters, and you need to check for invalid input before executing actual method code: public static void myMethod (String param1, String param2, String param3) Hurdle Style I call it so because it's like hurdles a track runner has to jump over to get to the finish line. You can also think of them as conditional barriers. { if (param1 == null || param1.equals("")) { // some logging if necessary return; // or throw some Exception or change to a default value } if (param2 == null || param2.equals("")) { // I'll leave the comments out return; } if (param3 == null || param3.equals("")) { return; } // actual method code goes here. } When the checks are for a certain small section in a larger method (and the section cannot be moved to a smaller private method), labelled blocks with break statements can be used: { // method code before block myLabel: { if (param1 ... // I'll leave out the rest for brevity break myLabel; if (param2 ... break myLabel; ... // code working on valid input goes here } // 'break myLabel' will exit here // method code after block } Fence Style This surrounds the code with a fence that has a conditional gate that must be opened before the code can be accessed. Nested fences would mean more gates to reach the code (like a Russian doll). { if (param1 != null && !param1.equals("")) { if (param2 != null && !param2.equals("")) { if (param3 != null && !param3.equals("")) { // actual method code goes here. } else { // some logging here } } else { // some logging here } } else { // some logging here } } It could be re-written as follows too. The logging statements are right beside the checks, rather than being after the actual method code. { if (param1 == null || param1.equals("")) { // some logging here } else if (param2 == null || param2.equals("")) { // some logging here } else if (param3 == null || param3.equals("")) { // some logging here } else { // actual method code goes here. } } Question 2: Which style is better, and why? Question 3: Are there any other styles? I personally prefer hurdle style because it looks easier on the eyes and does not keep indenting the code to the right every time there's a new parameter. It allows intermittent code between checks, and it's neat, but it's also a little difficult to maintain (several exit points). The first version of fence style quickly gets really ugly when adding parameters, but I suppose it's also easier to understand. While the second version is better, it can be broken accidentally by a future coder, and does not allow intermittent code between conditional checks.
Your first “hurdle” style is superior in every respect.

- It saves you many levels of indentation. This alone makes it much more straightforward to understand.
- It's diff friendly: adding another constraint doesn't require you to re-indent all your code.
- Validation is located in one place – at the top of the function. With the other style, it is before and after the main code. Putting all validation code into one place reduces cognitive load.
- Multiple exit points don't matter very much in a language with garbage collection. Striving for a single return is a relic from olden times when everything had to be deallocated manually.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255817", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/103038/" ] }
255,873
I'm taking intermediate data structures course as a prereq for entry into the CS MS program at a University everyone in America has heard of. One line of code that was written in class caught my eye: if (a > 33 | b++ < 54) {...} This would not pass a code review at my workplace. If you wrote code like this in an interview, this would be a significant strike against you. (In addition to being a conditional with side effects, it's being clever at the expense of clarity.) In fact, I've never seen a conditional with side effects, and Googling doesn't turn up much, either. Another student stayed behind after class to ask about it, too, so I'm not the only one who thought this was weird. But the professor was pretty adamant that this was acceptable code, and that he would write something like that at work. (His FT job is as a Principal SWE at a company you've all heard of.) I cannot imagine a world in which this line of code would ever be acceptable, let alone desirable. Am I wrong? Is this OK? What about the more general case: conditionals with side effects? Are those ever OK?
There is one semi-conditional side effect I can think of that is okay: while(iter.MoveNext()) That said, I think this falls mostly into the " never is a really big qualifier" category. I can think of a few rare cases where I've seen it be acceptable, but in general this is vile and to be avoided. I also cannot think of a scenario where that particular line would be acceptable, but I also cannot think of a scenario where that particular line would be useful , so it is hard to imagine the context it is in.
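A comparable, widely accepted example in Java is the assignment-inside-the-condition idiom for draining a stream; it is a conditional with a side effect, yet most reviewers wave it through because the idiom is so well established:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.StringReader;

    class ReadLines {
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(new StringReader("a\nb\nc"));
            String line;
            // The condition both advances the reader (side effect) and tests the result.
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }

Like MoveNext(), the side effect here is the whole point of the condition and is instantly recognizable, which is very different from a b++ buried inside an unrelated boolean expression.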
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255873", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62858/" ] }
255,915
I am building a .NET 4.5 C# Web API RESTful solution and I would like someone to tell me if my project solution is correct and/or wise enough for a solution designed using Domain Driven Design, please. The solution has been split into 6 projects:

/Base (Not referenced by anything)
The web project; forms the interface between the solution and the outside world. Contains the Web API controllers. Contains almost no logic beyond gathering values from request objects and asking the Biz.Api layer for work.

/Biz.Api (Referenced by Base)
Provides the domain services and allows the /Base interface project to have access to the domain business logic objects in the /Biz.Domain project.

/Biz.Domain (Referenced by Biz.Api)
Provides the domain classes for the Biz.Api layer. These provide methods to manipulate the data of the business in memory.

/Dal.Db (Referenced by Biz.Api)
The database repository layer. Accesses the databases and maps returned data into internal DTOs defined in the /Interfaces layer.

/Dal.Services (Referenced by Biz.Api)
Provides a proxy layer to external dependencies like web services and maps their returned data to internal DTOs defined in the /Interfaces project.

/Interfaces (Referenced by most projects above)
Contains the DTO classes for passing data around the solution and the C# interfaces to define contracts for things like IoC.
This folder structure is inspired by the famous Implementing Domain-Driven Design book by Vaughn Vernon. Solution:

├ WebService (REST services reside here)
├ WebServiceTests
├ Application (application services reside here)
├ ApplicationTests
├ Domain (entities, value objects, domain services, domain factories, specifications, domain events, repository interfaces, infrastructure service interfaces)
├ DomainTests
├ Infrastructure (repositories, infrastructure service implementations, adapters to external services)
└ InfrastructureTests

I start with a solution, then create four projects for each layer in my application, then another four projects for each layer's tests. Don't create an "Interfaces" or "Services" folder in your domain layer; instead, related classes should be grouped by functionality into modules.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255915", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/95114/" ] }
255,944
I am in the process of trying to sell my organisation on the value of code reviews. I have worked at several places where they were employed. I have seen them used to nitpick styling choices, and functional decisions, and I have seen them used as nothing more than a gut check to make sure nothing dangerous is being implemented. My gut feeling is that the most effective purpose is somewhere between the two options. So, what is the purpose of a Code Review?
There are multiple reasons why you would want to conduct a code review:

- Education of other developers. Ensure that everyone sees the modification associated with a defect fix or enhancement so that they can understand the rest of the software. This is especially useful when people are working on components that need to be integrated, or on complex systems where one person may go for long periods without looking at certain modules.

- Finding defects or opportunities for improvement. Both the deliverable code as well as the test code and data can be examined to find weaknesses. This ensures that the test code is robust and valid and that the design and implementation are consistent across the application. If additional changes need to be made, it catches the opportunity closer to the point of entry.

There are several business cases for conducting reviews:

- Finding defects or issues that would need to be reworked closer to their injection. This is cheaper.
- Shared understanding of the system and cross-training. Less time for a developer to come up to speed to make changes.
- Identification of possible enhancements to the system.
- Opening up the implementation to ensure that testers are providing adequate coverage. Turning a black box into a grey box or white box from a testing perspective.

If you're looking for a comprehensive discussion of the benefits of and implementation strategies for peer reviews, I'd recommend checking out Peer Reviews in Software: A Practical Guide by Karl Wiegers.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/255944", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26304/" ] }
256,090
I just found this function in the project I'm working on:

    -- Just returns the text unchanged.
    -- Note: <text> may be nil, function must return nil in that case!
    function Widget:wtr(text)
      return text
    end

Sadly, the coder no longer works at the company. Why would one write a function that does nothing but return the parameter it's called with? Is there any use for such a function, not specific to this example, but in general? Since

    function aFunction(parameter)
      return parameter
    end

implies that aFunction(parameter) == parameter, why would I write something like aFunction(parameter) == whatIWantToCheck instead of parameter == whatIWantToCheck?
Your question is sort of like asking what's the good of the number zero if whenever you add it to something you get the same value back. An identity function is like the zero for functions. Kind of useless by itself, but occasionally useful as part of an expression using higher-order functions, where you can take a function as a parameter or return it as a result. That's why most functional programming languages have an id or identity in their standard library. Put another way, it makes a handy default function. Just like you might want to set offset = 0 as a default integer, even though that makes an offset do nothing, it might come in handy to be able to set filterFunction = identity as a default function, even though that makes the filter do nothing.
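For a concrete (if contrived) sketch in Java: java.util.function.Function.identity() plays exactly this "zero of functions" role when a transformation parameter needs a do-nothing default. The method names below are invented for illustration.

    import java.util.List;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    class Labels {
        // Callers may pass any transform; the overload without one defaults to identity.
        static List<String> render(List<String> items, Function<String, String> transform) {
            return items.stream().map(transform).collect(Collectors.toList());
        }

        static List<String> render(List<String> items) {
            return render(items, Function.identity()); // the "do nothing" default
        }

        public static void main(String[] args) {
            System.out.println(render(List.of("a", "b")));                      // [a, b]
            System.out.println(render(List.of("a", "b"), String::toUpperCase)); // [A, B]
        }
    }

The identity function lets the general, higher-order code path handle the "no transformation" case without a special branch, which is the whole point of having a zero element.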
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256090", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/148569/" ] }
256,265
Programming languages like Scheme (R5RS) and Python ( see this Question ) round towards the nearest even integer when value is exactly between the surrounding integers. What is the reasoning behind this? Is there a mathematical idea that makes following calculations easier to reason about? (R5RS references the IEEE floating point standard as source of this behaviour.)
It's called banker's rounding. The idea is to minimize the cumulative error from many rounding operations. Let's say you always rounded .5 down. Think of all those little interest payments, the bank pocketing half a cent each time... Let's say you always rounded .5 up. Accounting is going to scream because you're paying out more interest than you should.
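A small, self-contained Java sketch of the cumulative effect (the values are arbitrary, chosen only because each sits exactly halfway between integers):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    class RoundingDrift {
        public static void main(String[] args) {
            String[] halves = {"0.5", "1.5", "2.5", "3.5", "4.5", "5.5", "6.5", "7.5"};
            BigDecimal exact = BigDecimal.ZERO, halfUp = BigDecimal.ZERO, halfEven = BigDecimal.ZERO;
            for (String s : halves) {
                BigDecimal v = new BigDecimal(s);
                exact = exact.add(v);
                halfUp = halfUp.add(v.setScale(0, RoundingMode.HALF_UP));
                halfEven = halfEven.add(v.setScale(0, RoundingMode.HALF_EVEN));
            }
            System.out.println("exact sum:       " + exact);    // 32.0
            System.out.println("round half up:   " + halfUp);   // 36 (drifts upward on every tie)
            System.out.println("round half even: " + halfEven); // 32 (up- and down-roundings cancel)
        }
    }

Rounding half up inflates the total by half a unit on every tie, while rounding to even lets the up- and down-roundings largely cancel; spread over millions of interest calculations, that drift is exactly the bias banker's rounding is designed to avoid.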
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256265", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73649/" ] }
256,272
I use exceptions to catch problems early. For example: public int getAverageAge(Person p1, Person p2){ if(p1 == null || p2 == null) throw new IllegalArgumentException("One or more of input persons is null"). return (p1.getAge() + p2.getAge()) / 2; } My program should never pass null in this function. I never intend it to. However as we all know, unintended stuff happens in programming. Throwing an exception if this problem occurs, allows me to spot and fix it, before it causes more problems in other places in the program. The exception stops the program and tells me "bad stuff happened here, fix it". Instead of this null moving around the program causing problems elsewhere. Now, you're right, in this case the null would simply cause a NullPointerException right away, so it might not be the best example. But consider a method such as this one for example: public void registerPerson(Person person){ persons.add(person); notifyRegisterObservers(person); // sends the person object to all kinds of objects. } In this case, a null as the parameter would be passed around the program, and might cause errors much later, which will be hard to trace back to their origin. Changing the function like so: public void registerPerson(Person person){ if(person == null) throw new IllegalArgumentException("Input person is null."); persons.add(person); notifyRegisterObservers(person); // sends the person object to all kinds of objects. } Allows me to spot the problem much before it causes weird errors in other places. Also, a null reference as a parameter is only an example. It could be many kinds of problems, from invalid arguments to anything else. It's always better to spot them early. So my question is simply: is this good practice? Is my use of exceptions as problem-preventing tools good? Is this a legitimate application of exceptions or is it problematic?
Yes, "fail early" is a very good principle, and this is simply one possible way of implementing it. And in methods that have to return a specific value, there isn't really very much else you can do to fail deliberately - it's either throwing exceptions, or triggering assertions. Exceptions are supposed to signal 'exceptional' conditions, and detecting a programming error certainly is exceptional.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256272", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/121368/" ] }
256,352
When unit testing the functions of a class that has private functions requiring online functionality, how would one go about testing it? For example:

    public class Foo {
        public int methodA() {
            int val = goOnlineToGetVal();
            return val;
        }

        private int goOnlineToGetVal() {
            CloudService c = new CloudService();
            int oval = c.getValueFromService();
            return oval;
        }
    }

If I were to test 'methodA()', it would attempt to use 'goOnlineToGetVal()', which would in turn try to go online; however, suppose the test has to run without that online functionality. How would I achieve 100% class coverage without going online?
new CloudService() And there's your problem. Modern OO design recommends that this sort of dependency be passed in rather than constructed directly. This can be passed into the function itself, or to the class at construction time. It could also be grabbed or aggregated by an Inversion of Control container if that sort of complexity is warranted. At that point, it becomes fairly trivial to pass in a mock/fake service to provide you with your "online" data during testing. Better yet, it allows your software to be sufficiently flexible, so that you can quickly adapt should some (governmental?) client comes along and doesn't want to use the cloud for their values. Or you want to dump one cloud provider for another. Or...
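Here is a hedged sketch of what that refactoring might look like for the Foo class in the question. The ValueSource interface and the canned value 42 are inventions for illustration; the real CloudService from the question is assumed to sit behind this new seam.

    // A seam introduced so the real service can be swapped out in tests.
    interface ValueSource {
        int getValueFromService();
    }

    class Foo {
        private final ValueSource source;

        Foo(ValueSource source) {          // dependency passed in, not constructed inside
            this.source = source;
        }

        int methodA() {
            return source.getValueFromService();
        }
    }

    class FooTest {
        public static void main(String[] args) {
            // A fake that never touches the network.
            ValueSource offline = () -> 42;
            Foo foo = new Foo(offline);
            if (foo.methodA() != 42) {
                throw new AssertionError("methodA should return whatever the source provides");
            }
            System.out.println("ok");
        }
    }

In production code, a thin adapter that implements ValueSource by delegating to the real CloudService completes the picture, and an IoC container can wire it in if the project already uses one.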
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256352", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/149309/" ] }
256,384
I tried to explain to a coworker the gravity of having duplicate code in a project, using this piece of code:

+ (void)createIapInParse:(SKPaymentTransaction *)transaction {
    Reachability *reach = [Reachability reachabilityWithHostname:@"www.facebook.com"];
    if ([Social getFBUser]) {
        NSString *iapId = [Util getBundleNameFromIdentifier:transaction.payment.productIdentifier];
        PFObject *iap = [PFObject objectWithClassName:@"Iap"];
        iap[@"iapId"] = iapId == nil ? [NSNull null] : iapId;
        iap[@"userId"] = [Social getFBUser].objectID == nil ? [NSNull null] : [Social getFBUser].objectID;
        iap[@"email"] = [Social getFBUser][@"email"] == nil ? [NSNull null] : [Social getFBUser][@"email"];
        iap[@"country"] = [Util getDeviceCountry];
        iap[@"installationId"] = [Util getInstallationId];
        iap[@"score"] = [NSNumber numberWithLong:[GameScore getTotalScore]];
        NSTimeInterval interval = [[NSUserDefaults standardUserDefaults] doubleForKey:@"timeSpentInApp"];
        iap[@"timeSpentInApp"] = [NSNumber numberWithDouble:interval];
        iap[@"retries"] = @0;
        iap[@"transactionIdentifier"] = transaction.transactionIdentifier;
        iap[@"transactionDate"] = transaction.transactionDate;
        iap[@"transactionSource3G"] = [NSNumber numberWithBool:[reach isReachableViaWWAN]];
        [iap saveInBackgroundWithBlock:^(BOOL succeded, NSError *error) {
            NSString *query = [NSString stringWithFormat:@"...", iap.objectId, transaction.payment.productIdentifier];
            [ZeeSQLiteHelper executeQuery:query];
        }];
        NSLog(@"Save in parse: %@", iap);
    } else {
        NSString *iapId = [Util getBundleNameFromIdentifier:transaction.payment.productIdentifier];
        PFObject *iap = [PFObject objectWithClassName:@"Iap"];
        iap[@"iapId"] = iapId == nil ? [NSNull null] : iapId;
        iap[@"userId"] = @"-";
        iap[@"email"] = @"-";
        iap[@"country"] = [Util getDeviceCountry];
        iap[@"installationId"] = [Util getInstallationId];
        iap[@"score"] = [NSNumber numberWithLong:[GameScore getTotalScore]];
        NSTimeInterval interval = [[NSUserDefaults standardUserDefaults] doubleForKey:@"timeSpentInApp"];
        iap[@"timeSpentInApp"] = [NSNumber numberWithDouble:interval];
        iap[@"retries"] = @0;
        iap[@"transactionIdentifier"] = transaction.transactionIdentifier;
        iap[@"transactionDate"] = transaction.transactionDate;
        iap[@"transactionSource3G"] = [NSNumber numberWithBool:[reach isReachableViaWWAN]];
        [iap saveInBackgroundWithBlock:^(BOOL succeded, NSError *error) {
            NSString *query = [NSString stringWithFormat:@"...", iap.objectId, transaction.payment.productIdentifier];
            [ZeeSQLiteHelper executeQuery:query];
        }];
        NSLog(@"Save in parse: %@", iap);
    }
}

I refactored it to:

+ (void)createIapInParse:(SKPaymentTransaction *)transaction {
    Reachability *reach = [Reachability reachabilityWithHostname:@"www.facebook.com"];
    NSString *iapId = [Util getBundleNameFromIdentifier:transaction.payment.productIdentifier];
    PFObject *iap = [PFObject objectWithClassName:@"Iap"];
    NSTimeInterval interval = [[NSUserDefaults standardUserDefaults] doubleForKey:@"timeSpentInApp"];
    iap[@"iapId"] = iapId == nil ? [NSNull null] : iapId;
    iap[@"country"] = [Util getDeviceCountry];
    iap[@"installationId"] = [Util getInstallationId];
    iap[@"score"] = [NSNumber numberWithLong:[GameScore getTotalScore]];
    iap[@"timeSpentInApp"] = [NSNumber numberWithDouble:interval];
    iap[@"retries"] = @0;
    iap[@"transactionIdentifier"] = transaction.transactionIdentifier;
    iap[@"transactionDate"] = transaction.transactionDate;
    iap[@"transactionSource3G"] = [NSNumber numberWithBool:[reach isReachableViaWWAN]];
    if ([Social getFBUser]) {
        iap[@"userId"] = [Social getFBUser].objectID == nil ? [NSNull null] : [Social getFBUser].objectID;
        iap[@"email"] = [Social getFBUser][@"email"] == nil ? [NSNull null] : [Social getFBUser][@"email"];
    } else {
        iap[@"userId"] = @"-";
        iap[@"email"] = @"-";
    }
    [iap saveInBackgroundWithBlock:^(BOOL succeded, NSError *error) {
        NSString *query = [NSString stringWithFormat:@"...", iap.objectId, transaction.payment.productIdentifier];
        [ZeeSQLiteHelper executeQuery:query];
    }];
    NSLog(@"Saved in parse: %@", iap);
}

He kept arguing with me that it's the same thing. His primary argument was that "he's a good programmer and he can understand and read even the first version fast enough", so he doesn't care whether it's written one way or the other. The question is: am I really wrong? Is this not so important for everyone? Is it a purely subjective matter, or does he simply not understand that the two versions are not the same thing?
You're not wrong. It's just psychologically very difficult to convince people of their own limitations.

The reason we have invented maxims, guidelines, etc. that restrict what we should do is that we have found, over time, that behaving in a particular way leads to more success. Importantly, it will lead to more success even if it doesn't seem so to us at the moment. If you think about it for a while, you'll realize that this is so for most if not all maxims. Things that are obviously correct are easy to get right; there are no maxims about not using 500-letter variable names, because virtually everybody can see immediately that they would be really hard to use. But duplicated code doesn't seem to have any negative effect at first; it's easy to paste the thing, and it saves time over factoring it away. What's not to like? The cost only shows up later, when the duplicated logic has to change and one of the copies is inevitably overlooked.

Once you've encountered this kind of situation, it becomes pretty obvious that refactoring will save time overall, because you've been down this road and know what will happen eventually. Someone who hasn't is not so easily convinced. It is extremely easy to fool yourself into thinking "This is so obvious, I'm not going to forget to attend to it, ever". (In fact, attributing perfect ability to itself may be the main function of the mind, according to some theories.)

Therefore the only way of convincing someone else to observe good practices is either to earn their trust or to accompany them down the same roads that convinced you. The former is a matter of interpersonal skill, and is unfortunately too complex to answer in this format. The latter involves stepping through code where exactly this problem happens; in my experience this is a lot more convincing when done with existing code where cut-and-paste actually did lead to errors (the more serious the better) than with purely hypothetical future scenarios.

Above all, avoid attacking someone's sense of self-worth directly. Almost every young coder thinks they can do what no one else can do, and that they don't need the safety devices that others need. Telling them "yes, you do!" rarely works; it is marginally better to show them proof of how many people have screwed up for precisely this reason, and to ask them to show regard for whoever will be required to maintain their code in the future. (You may be surprised that human-factor issues play such a big role in productive coding, but I've come to realize more and more how decisive they are, and that this is one reason why building complex systems is so deceptively hard.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256384", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/146874/" ] }
256,450
As we all know:

"Git gets easier once you understand branches are homeomorphic endofunctors mapping submanifolds of a Hilbert space"

which seems like jargon, but on the other hand,

"All told, a monad in X is just a monoid in the category of endofunctors of X, with product × replaced by composition of endofunctors and unit set by the identity endofunctor."

is funny because it is true. Can I avoid merging mistakes by reading this simple text?
It's a joke that is based on the monad joke, but without actually getting the monad joke. The monad joke is funny on three levels:

1. it tries to explain abstract mathematical jargon with even more mathematical jargon, which is even more abstract
2. however, the explanation is actually correct
3. and once you dive deeper into category theory, you will actually start to see monads as "just a monoid in the category of endofunctors"

The Git thing, however, is just random gibberish. It is meant to resemble the monad joke, and might also be a jab at the darcs patch theory, but fundamentally, the person making the joke didn't understand the monad joke.

Sources:

This is the original tweet containing the quote:

Wil Shipley (@wilshipley): Sweet god I hate git.

Isaac Wolkerstorfer (@agnoster): @wilshipley git gets easier once you get the basic idea that branches are homeomorphic endofunctors mapping submanifolds of a Hilbert space.

And this is a comment on Quora by the original author of the tweet:

"To confirm what Leo said, it was intended as a joke. […] It was intended as firmly tongue-in-cheek. I actually love git, and I think its complexity is greatly overblown. At the same time, I'm sympathetic to the fact that advice from git gurus to novices can end up sounding like inscrutable gibberish. It's not intended to have any deeper meaning. […]"

The Leo he is referring to is another answerer in the same thread, a mathematician, who basically explains why that is nonsense (Hilbert spaces are continuous, patches and branches are discrete). He also explains that he was inspired by this blog post (A Guide to GIT using spatial analogies), which actually does make sense.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256450", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9865/" ] }
256,674
What is the purpose of IOC containers? The combined reasons for using them can be simplified to the following: when using OOP/SOLID development principles, dependency injection gets messy. Either you have the top-level entry points managing dependencies for multiple levels below themselves and passing dependencies recursively through construction, or you have somewhat duplicated code in factory/builder patterns and interfaces that build dependencies as you need them. There is no OOP/SOLID way to do this AND have super pretty code.

If that previous statement is true, then how do IOC containers do it? As far as I know, they aren't employing some unknown technique that can't be done with manual D.I. So the only explanation is that IOC containers break OOP/SOLID principles by using static objects and private accessors.

Do IOC containers break the following principles behind the scenes? This is the real question, since I have a good understanding but have a feeling somebody else has a better understanding:

1. Scope control. Scope is the reason for nearly every decision I make on my code design: block, local, module, static/global. Scope should be very explicit, with as much as possible at block level and as little as possible static. You should see declarations, instantiations, and lifecycle endings. I trust the language and the GC to manage scope as long as I'm explicit with it. In my research I've found that IOC containers set up most or all dependencies as static and control them through some AOP implementation behind the scenes. So nothing is transparent.

2. Encapsulation. What is the purpose of encapsulation? Why should we keep members private? For practical reasons, it is so implementors of our API can't break the implementation by changing state (which should be managed by the owner class). But also, for security reasons, it's so injections can't occur that overtake our member state and bypass validation and class control. So anything (mocking frameworks or IOC frameworks) that somehow injects code before compile time to allow external access to private members is pretty huge.

3. Single Responsibility Principle. On the surface, IOC containers seem to make things cleaner. But imagine how you would accomplish the same thing without the helper frameworks: you would have constructors with a dozen or so dependencies being passed in. That doesn't mean you should cover it up with IOC containers; it is a good thing! It's a sign to refactor your code and follow SRP.

4. Open/Closed. Just as SRP isn't class-only (I apply SRP down to single-responsibility lines, let alone methods), Open/Closed is not just a high-level theory about not altering the code of a class. It's a practice of understanding the configuration of your program and having control over what gets altered and what gets extended. IOC containers can change the way your classes work altogether, partly because:
   a. the main code isn't making the determination about switching out dependencies; the framework configuration is.
   b. the scope could be altered at a time that isn't controlled by the calling members; it is instead determined externally by a static framework.
So the configuration of the class isn't really closed, is it? It alters itself based on the configuration of a third-party tool.

The reason this is a question is that I am not necessarily a master of all IOC containers, and while the idea of an IOC container is nice, they appear to just be a façade that covers up poor implementation. But in order to accomplish external scope control, private member access, and lazy loading, a lot of non-trivial, questionable things have to go on. AOP is great, but the way it is accomplished through IOC containers is also questionable. I can trust C# and the .NET GC to do what I expect them to, but I can't put that same trust in a third-party tool that is altering my compiled code to perform workarounds for things I wouldn't be able to do manually.

E.g., Entity Framework and other ORMs create strongly typed objects and map them to database entities, as well as provide boilerplate basic functionality to perform CRUD. Anybody could build their own ORM and continue to follow OOP/SOLID principles manually, but those frameworks help us so we don't have to reinvent the wheel every time. Whereas IOC containers, it seems, help us purposely work around OOP/SOLID principles and cover it up.
I'll go through your points numerically, but first, there's something you should be very careful of: don't conflate how a consumer uses a library with how the library is implemented. Good examples of this are Entity Framework (which you yourself cite as a good library) and ASP.NET MVC. Both of these do an awful lot under the hood with, for example, reflection, which would absolutely not be considered good design if you spread it through day-to-day code. These libraries are absolutely not "transparent" in how they work, or in what they're doing behind the scenes. But that's not a detriment, because they support good programming principles in their consumers.

So whenever you talk about a library like this, remember that as a consumer of a library, it's not your job to worry about its implementation or maintenance. You should only worry about how it helps or hinders the code you write which uses the library. Don't conflate these concepts! So, to go through it point by point:

1. Immediately we reach what I can only assume is an example of the above. You say that IoC containers set up most dependencies as static. Well, perhaps some implementation detail of how they work includes static storage (though given that they tend to have an instance of an object like Ninject's IKernel as their core storage, I doubt even that). But this is not your concern! Actually, an IoC container is just as explicit about scope as poor man's dependency injection. (I'm going to keep comparing IoC containers to poor man's DI because comparing them to no DI at all would be unfair and confusing. And, to be clear, I'm not using "poor man's DI" as a pejorative.)

In poor man's DI, you'd instantiate a dependency manually, then inject it into the class that needs it. At the point where you construct the dependency, you'd choose what to do with it: store it in a locally scoped variable, a class variable, a static variable, or don't store it at all. You could pass the same instance into lots of classes, or you could create a new one for each class. Whatever. The point is, to see which is happening, you look at the point - probably near the application root - where the dependency is created.

Now what about an IoC container? Well, you do exactly the same! Again, going by Ninject terminology, you look at where the binding is set up, and find whether it says something like InTransientScope, InSingletonScope, or whatever. If anything this is potentially clearer, because you have code right there declaring its scope, rather than having to look through a method to track what happens to some object (an object may be scoped to a block, but is it used multiple times in that block or just once, for example). So maybe you're repelled by the idea of having to use a feature of the IoC container rather than a raw language feature to dictate scope, but as long as you trust your IoC library - which you should! - there's no real disadvantage to it.

2. I still don't really know what you're talking about here. Do IoC containers look at private properties as part of their internal implementation? I don't know why they would, but again, if they do, it's not your concern how a library you're using is implemented. Or, maybe, they expose a capability like injecting into private setters? Honestly, I've never encountered this, and I'm dubious about whether it really is a common feature. But even if it's there, it's a simple case of a tool that can be misused.
Remember, even without an IoC container, it's but a few lines of reflection code to access and modify a private property. It's something you should almost never do, but that doesn't mean .NET is bad for exposing the capability. If somebody so obviously and wildly misuses a tool, it's the person's fault, not the tool's.

3. The ultimate point here is similar to 2. The vast majority of the time, IoC containers DO use the constructor! Setter injection is offered for very specific circumstances where, for particular reasons, constructor injection can't be used. Anybody who uses setter injection all the time to cover up how many dependencies are being passed in is massively abusing the tool. That's NOT the tool's fault, it's theirs. Now, if this was a really easy mistake to innocently make, and one that IoC containers encourage, then okay, maybe you'd have a point. It'd be like making every member of my class public and then blaming other people when they modify things they shouldn't, right? But anybody who uses setter injection to cover up violations of SRP is either willfully ignoring or completely ignorant of basic design principles. It's unreasonable to lay the blame for this at the IoC container. That's especially true because it's also something you can do just as easily with poor man's DI:

var myObject = new MyTerriblyLargeObject
{
    DependencyA = new Thing(),
    DependencyB = new Widget(),
    DependencyC = new Repository(),
    ...
};

So really, this worry seems completely orthogonal to whether or not an IoC container is used.

4. Changing how your classes work together is not a violation of OCP. If it was, then all dependency inversion would be encouraging a violation of OCP. If that was the case, they wouldn't both be in the same SOLID acronym! Furthermore, neither point a) nor point b) comes close to having anything to do with OCP. I don't even know how to answer these with respect to OCP. The only thing I can guess is that you think OCP is something to do with behaviour not being altered at runtime, or about where in the code the lifecycle of dependencies is controlled from. It's not. OCP is about not having to modify existing code when requirements are added or change. It's all about writing code, not about how code you've already written is glued together (though, of course, loose coupling is an important part of achieving OCP).

And one final thing you say:

"But I can't put that same trust in a third-party tool that is altering my compiled code to perform workarounds for things I wouldn't be able to do manually."

Yes, you can. There is absolutely no reason for you to think that these tools - relied on by vast numbers of projects - are any more buggy or prone to unexpected behaviour than any other third-party library.

Addendum

I just noticed your intro paragraphs could use some addressing too. You sardonically say that IoC containers "aren't employing some secret technique we've never heard of" to avoid messy, duplication-prone code when building a dependency graph. And you're quite right: what they're doing is addressing these things with the same basic techniques we programmers always use.

Let me talk you through a hypothetical scenario. You, as a programmer, put together a large application, and at the entry point, where you're constructing your object graph, you notice you have quite messy code. There are quite a few classes that are used again and again, and every time you build one of those you have to build the whole chain of dependencies under them again.
Plus, you find you don't have any expressive way of declaring or controlling the lifecycle of dependencies, except with custom code for each one. Your code is unstructured and full of repetition. This is the messiness you talk about in your intro paragraph.

So first, you start to refactor a bit: where some repeated code is structured enough, you pull it out into helper methods, and so on. But then you start to think: is this a problem you could perhaps tackle in a general sense, one that isn't specific to this particular project but could help you in all your future projects?

So you sit down, think about it, and decide that there should be a class that can resolve dependencies. You sketch out the public methods it would need:

void Bind(Type interfaceType, Type concreteType, bool singleton);
T Resolve<T>();

Bind says "where you see a constructor argument of type interfaceType, pass in an instance of concreteType". The additional singleton parameter says whether to use the same instance of concreteType each time, or always make a new one. Resolve will simply try to construct T with any constructor it can find whose arguments are all of types which have previously been bound. It can also call itself recursively to resolve the dependencies all the way down. If it can't resolve an instance because not everything has been bound, it throws an exception.

You can try implementing this yourself, and you'll find you need a bit of reflection, and some caching for the bindings where singleton is true, but certainly nothing drastic or horrifying. And once you're done - voila - you have the core of your very own IoC container! Is it really that scary? The only real difference between this and Ninject or StructureMap or Castle Windsor or whichever one you prefer is that those have a lot more functionality, to cover the (many!) use cases where this basic version wouldn't be sufficient. But at its heart, what you have there is the essence of an IoC container.
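To make that last paragraph concrete, here is one minimal sketch in C# of the Bind/Resolve class described above. The class name, field names, and the naive "pick the public constructor with the most parameters" strategy are illustrative choices of this sketch, not something prescribed by the answer or by any real container, and production containers handle far more (multiple constructors, open generics, disposal, richer error reporting). It is, roughly, the "bit of reflection and some caching" mentioned above:

using System;
using System.Collections.Generic;
using System.Linq;

// A deliberately tiny resolver: just enough to show the core idea.
public class TinyContainer
{
    // Maps an interface (or base) type to the concrete type to construct.
    private readonly Dictionary<Type, Type> bindings = new Dictionary<Type, Type>();
    // Which bound types should be treated as singletons.
    private readonly HashSet<Type> singletonTypes = new HashSet<Type>();
    // Cache of singleton instances that have already been created.
    private readonly Dictionary<Type, object> singletonInstances = new Dictionary<Type, object>();

    public void Bind(Type interfaceType, Type concreteType, bool singleton)
    {
        bindings[interfaceType] = concreteType;
        if (singleton)
            singletonTypes.Add(interfaceType);
    }

    public T Resolve<T>()
    {
        return (T)ResolveType(typeof(T));
    }

    private object ResolveType(Type type)
    {
        Type concrete;
        if (!bindings.TryGetValue(type, out concrete))
            throw new InvalidOperationException("No binding registered for " + type + ".");

        object cached;
        if (singletonTypes.Contains(type) && singletonInstances.TryGetValue(type, out cached))
            return cached;

        // Naively pick the public constructor with the most parameters and
        // resolve each of its parameters recursively.
        var constructor = concrete.GetConstructors()
                                  .OrderByDescending(c => c.GetParameters().Length)
                                  .First();
        var arguments = constructor.GetParameters()
                                   .Select(p => ResolveType(p.ParameterType))
                                   .ToArray();
        var instance = constructor.Invoke(arguments);

        if (singletonTypes.Contains(type))
            singletonInstances[type] = instance;

        return instance;
    }
}

A call such as container.Bind(typeof(IEmailSender), typeof(SmtpEmailSender), true) followed by container.Resolve<IEmailSender>() would then hand back the same instance every time (the interface and class names here are invented purely for illustration). Passing true or false for singleton plays the same role as the InSingletonScope/InTransientScope bindings discussed under point 1: the scope decision sits right there at the binding site.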
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256674", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/91341/" ] }
256,833
I'm sure it's not about laziness or anything like that, but I fail to understand why developers of even mainly consumer-facing apps don't make any sort of installation wizard where you go next-next-finish. The same apps usually have installers for Windows and Mac OS, so why not for Linux? Is there any technical reason for this trend, or is it just convention?

EDIT (23-09-2014): This question was not asked to start a Windows vs Linux flame war. I have used all 3 major operating systems, and apart from Linux, the other two (Windows and Mac OS) both have installers. I have not installed Oracle yet, but for whatever I have needed to install, I have never seen a GUI installer for Linux. Yes, I know that Linux has package managers, so developers don't "need" to make installers. But there is still a huge amount of software that is either outdated in the default package managers or simply not available. Plus, since Linux is sold as an alternative to Windows for casual users (Ubuntu is trying hard in this domain), it would make much more sense to just give the users what they are familiar with.

Take, for example, setting up a LAMP stack. Those are all open-source packages in the default repositories, but can you set up everything in one go without a script? Now look at the WAMP server on Windows. You just run an installer, and it installs multiple pieces of software in such a way that they work well with each other. Then it sets up good defaults and so on. Installers can do that; package managers don't. Yes, you can find a script for that online, but where? And which one? Installers aren't some obsolete technology from the past. They are still useful, and 95% of users are already comfortable with them.
Developers just need to provide a package for a distribution. Each distribution then has a way to install that package, either from a terminal (apt-get, for example) or via a graphical interface such as the Ubuntu Software Center. The beauty is that developers only have to care about building a proper package; the distribution makers take care of the rest, and every package installation follows the same process.
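As a short, concrete example (the package name here is just an illustration): on a Debian-based distribution, installing an application typically comes down to a single command such as "sudo apt-get install vlc", or one click in the Software Center. The package manager resolves and installs all dependencies the same way for every packaged application, which is why upstream developers rarely need to ship a next-next-finish wizard of their own.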
{ "source": [ "https://softwareengineering.stackexchange.com/questions/256833", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/129370/" ] }