source_id | question | response | metadata
---|---|---|---|
256,864 | This is the inverse to " Why don't developers make installation wizards on linux? ", which is interesting, but made me think "Automatic installation is the natural way. Why do they use wizards?". So here is the inverse question: I'm sure it's not about laziness, or anything like that, but I fail to understand why developers, of even mainly consumer facing apps, don't make a fully automatic sort of installation where you are not bothered at all. The same apps usually have automatic installation on Linux, so why not Windows and Mac OS? Is there any technical reason for this trend, or is it just convention? | Informed Consent Users should be able to decide, first of all, whether they even want the program to be installed on their computer or not. It may seem self-evident to you that people are obviously choosing to install a program, but the prime characteristic of a malicious program is that it can be installed without the computer user knowing about it. Informed consent is made even more explicit through UAC . License Agreement Most modern software follows a "click-through" model for licensing; that is, the user agrees to the terms of the license during the installation process as a condition of installing the program. That users seldom read these agreements doesn't mean they're not bound by them, especially if they have clicked the checkbox labeled "I agree to these terms." Configuring Options Many software packages have options that allow you to change the way the software is installed in certain ways. The most trivial of these lets you decide whether or not you want an icon on the desktop, but in larger applications you can decide which features you want installed. Installation Progress While programs in the Windows ecosystem are getting better at being less intrusive during the installation process (e.g. registry-free installation), installation is still often a non-trivial operation. Progress bars and other visual aids give an indication that something is actually happening. The final page in the wizard tells you whether or not the installation succeeded. Getting Started Finally, the best software packages tell you what to do next. What are the first steps, how to get started, how to get help. Most software, when installed, leaves you with a startup icon, and that's it. Never overestimate the level of expertise of your users; as incredible as it may seem to you, there are still folks that don't know how to find and start software programs they just installed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/256864",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123366/"
]
} |
257,177 | I have been on numerous teams that try to practice Agile methodologies, and often these teams are test centric. Is testing a necessary part of practicing the Agile methodology, or is it just an XP practice that has been latched on over the years? | Testing is absolutely essential to agile, primarily because agile is based around incremental improvements: the difficulty is that it can sometimes be hard to see how the current changes will affect your old code. The best way to be confident that you haven't broken something is to test it, and to know HOW to test it. That way you find the bug immediately, not down the road when you have forgotten exactly what you did when you were writing the code that broke some old feature. The reason this is different from more traditional, top-down design type programming is that in that environment it's a) very difficult to test until you have the finished product, and b) in theory you are considering all the design criteria at the same time, and so you are less likely to make a design decision that breaks previous design decisions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257177",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13364/"
]
} |
257,286 | As I painfully try to find a good natural sorting algorithm written in JavaScript, I then stumble upon a bunch of different implementations, interesting blog posts, and answers on StackOverflow. Each implementation provides its technical tricks. However, the more I looked into it the more a question became very clear: "Is there actually any language-agnostic specification regarding natural sorting order of strings?" I mean, if not, then how could one expect to write a piece of code that is actually "correct for everyone" or "agreed on by the community"? I would have expected a specification stating the result of the compromises/decisions made, at least for English, as it is simple (no accents/diacritics)... Note that I wrote "language-agnostic" as I would expect this specification to then be used to implement solutions in different languages, not only in JavaScript , C# , or Java . Resources: Sorting for Humans: Natural Sort Order Alphanum: JavaScript Natural Sorting Algorithm www.davekoelle.com/alphanum.html How to sort strings in JavaScript Natural sort of alphanumerical strings in JavaScript | The algorithms for determining which string comes first when comparing two strings are called collation algorithms and the sort order they produce is called the collation order . Unfortunately, there is no agreed upon global collation order. To make matters worse, the correct sorting order is not only language dependent, but can even differ between different contexts. One example of language difference is that in German the accented characters are ordered immediately after their unaccented counterparts (ö comes immediately after o), but in Swedish the accented characters come right at the end of the alphabet (ö comes after z). And as for usage differences, phone books and dictionaries can have different sort orders. Although there is no global collation order, there are collation orders that generally give a reasonable order independent of the natural language that the words are written in and there are collation algorithms that can be tailored to either give a reasonable sort order or to give the absolute correct order for a given culture and usage. One such algorithm is the "Unicode Collation Algorithm", which can be found at http://www.unicode.org/reports/tr10/ . This algorithm can be tailored for a wide range of collation orders and comes with a default configuration that gives a reasonable ordering for all Unicode codepoints. The algorithm does not depend on any particular programming language. The introduction section of the standard gives a nice overview of the difficulties in correctly collating text. Another algorithm is described in ISO standard 14651 . Besides the various national collation orders, there is also a standardized collation order for the European languages, called the European Ordering Rules (EOR) . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257286",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66688/"
]
} |
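As a rough illustration of the locale dependence described above, here is a minimal Python sketch using the standard locale module; the locale names are assumptions that must be installed on the machine, and the exact ordering depends on the platform's collation data.

```python
import locale

words = ["ost", "zebra", "öl"]

# These locale names are assumptions; setlocale() raises locale.Error
# when a locale is not installed on the system.
for name in ("de_DE.UTF-8", "sv_SE.UTF-8"):
    locale.setlocale(locale.LC_COLLATE, name)
    # strxfrm() transforms each string so that plain comparison follows
    # the active locale's collation rules.
    print(name, sorted(words, key=locale.strxfrm))
```

With typical glibc locale data, the German run keeps ö next to o while the Swedish run places it after z, matching the example given in the answer.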
257,299 | I had a little debate going on with a coworker. Simply put, is there a good reason to hide/encapsulate functions that are pure? By "pure" I mean the wikipedia definition : Always returns the same results from the same input. (For the sake of this discussion Foo Create(){ return new Foo(); } is considered impure if Foo does not have value semantics.) Does not use mutable state (except local variables) or I/O. Does not produce side effects. | A pure function could still be an implementation detail. Although the function may cause no harm (from the point of view of not breaking important invariants/contracts), by exposing it both the author and the users of that class/module/package lose. The author loses because now he can't remove it even if the implementation changes and the function is no longer useful to him. The users lose because they have to sift through and ignore extra functions that aren't relevant to using the API in order to understand it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257299",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51654/"
]
} |
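A minimal Python sketch of the same point: the helper below is pure, yet it stays private (by the leading-underscore convention) because it is an implementation detail of the public function rather than part of the API. Both function names are purely illustrative.

```python
def _cents_to_dollars(cents):
    # Pure helper: same input, same output, no side effects.
    return cents / 100

def format_price(cents):
    # The published API; callers never need the helper directly,
    # so the author stays free to change or remove it later.
    return f"${_cents_to_dollars(cents):.2f}"

print(format_price(1999))  # $19.99
```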
257,309 | Basically, can I do this, and what actually happens? public class foo
{
public int id;
public void bar()
{
//do stuff
this = null; // ?
//do stuff
foo[] all = otherclass.FindAllFoos();
foreach(foo f in all)
{
if(f.id == 42)
{
this = f; // ?
break;
}
}
//do stuff
}
}; Some specific background that shouldn't matter: The real project has a class that represents a USB device that I'm currently talking to, and just before this point in the program, I tell the device to reboot. When the device comes back, it appears different enough to invalidate this instance, so I have to find it again among all the connected devices and continue talking to it. | A pure function could still be an implementation detail. Although the function may cause no harm (from the point of view of not breaking important invariants/contracts), by exposing it both the author and the users of that class/module/package lose. The author loses because now he can't remove it even if the implementation changes and the function is no longer useful to him. The users lose because they have to sift through and ignore extra functions that aren't relevant to using the API in order to understand it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257309",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147596/"
]
} |
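For the device scenario in the question: `this` cannot be reassigned inside a class method in C# or Java, and rebinding `self` in Python has no effect outside the method, so the usual workaround is for the object to re-read its state from the freshly enumerated device. A rough Python sketch follows; `UsbDevice`, `find_all_devices`, and `reboot` are hypothetical names standing in for the question's `foo`, `otherclass.FindAllFoos()`, and the reboot call.

```python
class UsbDevice:
    """Hypothetical wrapper around one connected device."""

    def __init__(self, device_id, handle):
        self.device_id = device_id
        self._handle = handle

    def reboot_and_reattach(self, find_all_devices):
        self._handle.reboot()  # assumed method on the underlying handle
        # Rebinding `self` (the equivalent of `this = f`) is not possible,
        # so copy the state of the re-enumerated device into this instance;
        # callers keep talking to the same object.
        for device in find_all_devices():
            if device.device_id == self.device_id:
                self._handle = device._handle
                return
        raise RuntimeError("device did not reappear after reboot")
```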
257,325 | Most programming languages (both dynamically and statically typed languages) have special keyword and/or syntax that looks much different than declaring variables for declaring functions. I see functions as just as declaring another named entity: For example in Python: x = 2
y = addOne(x)
def addOne(number):
return number + 1 Why not: x = 2
y = addOne(x)
addOne = (number) =>
return number + 1 Similarly, in a language like Java: int x = 2;
int y = addOne(x);
int addOne(int x) {
return x + 1;
} Why not: int x = 2;
int y = addOne(x);
(int => int) addOne = (x) => {
return x + 1;
} This syntax seems a more natural way of declaring something (be it a function or a variable) and needs one less keyword like def or function in some languages. And, IMO, it is more consistent (I look in the same place to understand the type of a variable or function) and probably makes the parser/grammar a little bit simpler to write. I know very few languages use this idea (CoffeeScript, Haskell) but most common languages have special syntax for functions (Java, C++, Python, JavaScript, C#, PHP, Ruby). Even in Scala, which supports both ways (and has type inference), it is more common to write: def addOne(x: Int) = x + 1 Rather than: val addOne = (x: Int) => x + 1 IMO, at least in Scala, this is probably the most easily understandable version but this idiom is seldom followed: val x: Int = 1
val y: Int = addOne(x)
val addOne: (Int => Int) = x => x + 1 I am working on my own toy language and I am wondering if there are any pitfalls if I design my language in such a way and if there are any historical or technical reasons this pattern is not widely followed? | I think the reason is that most popular languages either come from or were influenced by the C family of languages as opposed to functional languages and their root, the lambda calculus. And in these languages, functions are not just another value: In C++, C# and Java, you can overload functions: you can have two functions with the same name, but different signature. In C, C++, C# and Java, you can have values that represent functions, but function pointers, functors, delegates and functional interfaces all are distinct from functions themselves. Part of the reason is that most of those are not actually just functions, they are a function together with some (mutable) state. Variables are mutable by default (you have to use const , readonly or final to forbid mutation), but functions can't be reassigned. From a more technical perspective, code (which is composed of functions) and data are separate. They typically occupy different parts of memory, and they are accessed differently: code is loaded once and then only executed (but not read or written), whereas data is often constantly allocated and deallocated and is being written and read, but never executed. And since C was meant to be "close to the metal", it makes sense to mirror this distinction in the syntax of the language too. The "function is just a value" approach that forms the basis of functional programming has gained traction in the common languages only relatively recently, as evidenced by the late introduction of lambdas in C++, C# and Java (2011, 2007, 2014). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257325",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10374/"
]
} |
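In Python both spellings from the question already exist side by side, which makes the comparison concrete; a small sketch using the question's `addOne` example:

```python
def add_one(number):                 # the dedicated `def` declaration syntax
    return number + 1

add_one_again = lambda number: number + 1   # a function value bound to a
                                            # name, like any other variable

x = 2
print(add_one(x), add_one_again(x))  # 3 3
```

The second form is exactly the "function is just a value" style the answer attributes to the functional tradition.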
257,344 | There are countless war stories about how long a compile can take. Even xkcd made a mention of it. Now, I haven't been programming for a long time and have mostly just been exposed to Java and Python (and Python is an interpreted language, not a compiled one). I realize it's possible that I just haven't come across projects that take very long to compile, but even for decent sized apps, it has either been instantaneous for me (usually handled in the background by an IDE) or taking no more than 30 seconds or so for an extremely large project. Even in a business environment (where the comic takes place), I've never had code take that long to compile. Have I just not been exposed to projects with long compile times? Is this a relic of the past that is no longer something that happens in the modern day? Why would a compile take such a long time? | Compilation can take a while, especially for large projects written in languages like C, C++, or Scala. Compiling parts in the background can reduce the compilation time, but occasionally you have to do a fresh compile. Factors that can lead to long compilation times include: Large code size, obviously. Large projects will have hundreds of thousands of lines of code. C's #include preprocessor directive, which effectively causes the same code to be compiled hundreds of times. The macro system has similar issues, as it works on a text-level. The preprocessor really bloats up the code size that's actually passed to the compiler. Looking at a file after preprocessing (e.g. via gcc -E ) should open your eyes. C++'s templates are Turing complete, which means that in theory you can perform arbitrary computations at compile time. Nobody really wants to do that, but even lots of simple cases do add up to quite some time spent specializing the templates. Scala is a fairly young language, and the compiler is horrendously under-optimized. Currently, the compiler uses a very large number of compilation passes (C is designed to require only two compilation passes). Typechecking is one of these passes, and can take some time due to the complicated type system featured by the language. Compilation isn't the only thing that takes time. After the project has been compiled, a test suite should be run. The time spent on this can range from a few seconds to a couple of hours (if the tests are written badly). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257344",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81973/"
]
} |
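To see the preprocessor bloat mentioned above, you can compare a C file's line count before and after gcc -E. This is only a sketch: it assumes gcc is on the PATH, and hello.c is a placeholder name.

```python
import subprocess

src = "hello.c"  # placeholder; point this at any real C source file

with open(src) as handle:
    raw_lines = sum(1 for _ in handle)

# gcc -E runs only the preprocessor, as the answer suggests.
result = subprocess.run(["gcc", "-E", src],
                        capture_output=True, text=True, check=True)
preprocessed_lines = result.stdout.count("\n")

print(f"{src}: {raw_lines} lines before preprocessing, "
      f"{preprocessed_lines} after")
```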
257,450 | I read code way more often than I write code, and I'm assuming that most of the programmers working on industrial software do this. The advantage of type inference I assume is less verbosity and less written code. But on the other hand if you read code more often, you'll probably want readable code. The compiler infers the type; there are old algorithms for this. But the real question is why would I, the programmer, want to infer the type of my variables when I read the code? Isn't it more faster for anyone just to read the type than to think what type is there? Edit: As a conclusion I understand why it is useful. But in the category of language features I see it in a bucket with operator overloading - useful in some cases but affecting readability if abused. | Let's take a look at Java. Java 8 can't have variables with inferred types. This means I frequently have to spell out the type, even if it is perfectly obvious to a human reader what the type is: int x = 42; // yes I see it's an int, because it's a bloody integer literal!
// Why the hell do I have to spell the name twice?
SomeObjectFactory<OtherObject> obj = new SomeObjectFactory<>(); And sometimes it's just plain annoying to spell out the whole type. // this code walks through all entries in an "(int, int) -> SomeObject" table
// represented as two nested maps
// Why are there more types than actual code?
for (Map.Entry<Integer, Map<Integer, SomeObject<SomeObject, T>>> row : table.entrySet()) {
    Integer rowKey = row.getKey();
    Map<Integer, SomeObject<SomeObject, T>> rowValue = row.getValue();
for (Map.Entry<Integer, SomeObject<SomeObject, T>> col : rowValue.entrySet()) {
Integer colKey = col.getKey();
SomeObject<SomeObject, T> colValue = col.getValue();
doSomethingWith<SomeObject<SomeObject, T>>(rowKey, colKey, colValue);
}
} This verbose static typing gets in the way of me, the programmer. Most type annotations are repetitive line-filler, content-free regurgiations of what we already know. However, I do like static typing, as it can really help with discovering bugs, so using dynamic typing isn't always a good answer. Type inference is the best of both worlds: I can omit the irrelevant types, but still be sure that my program (type-)checks out. While type inference is really useful for local variables, it should not be used for public APIs which have to be unambiguously documented. And sometimes the types really are critical for understanding what's going on in the code. In such cases, it would be foolish to rely on type inference alone. There are many languages that support type inference. For example: C++. The auto keyword triggers type inference. Without it, spelling out the types for lambdas or for entries in containers would be hell. C#. You can declare variables with var , which triggers a limited form of type inference. It still manages most cases where you want type inference. In certain places you can leave out the type completely (e.g. in lambdas). Haskell, and any language in the ML family. While the specific flavour of type inference used here is quite powerful, you still often see type annotations for functions, and for two reasons: The first is documentation, and the second is a check that type inference actually found the types you expected. If there is a discrepancy, there's likely some kind of bug. And since this answer was originally written, type inference has become more popular. E.g. Java 10 has finally added C#-style inference. We're also seeing more type systems on top of dynamic languages, e.g. TypeScript for JavaScript, or mypy for Python, which make heavy use of type inference in order to keep the overhead of type annotations manageable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257450",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57792/"
]
} |
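Since the answer mentions mypy, here is a small Python sketch of the same nested-map walk: only the function boundary is annotated, and the checker infers the types of every local (assumes Python 3.9+ for the built-in generic syntax; the table contents are made up).

```python
def build_table() -> dict[int, dict[int, str]]:
    # Toy stand-in for the "(int, int) -> SomeObject" table in the answer.
    return {1: {2: "x"}, 3: {4: "y"}}

table = build_table()  # no annotation needed; the type is inferred
for row_key, row_value in table.items():
    for col_key, col_value in row_value.items():
        # mypy knows col_value is a str, so .upper() checks out,
        # while something like col_value + 1 would be flagged.
        print(row_key, col_key, col_value.upper())
```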
257,507 | I'm building my first MVC application in Visual Studio 2013 (MVC 5) and I'm a bit unclear on the best way to setup my model. I've generated an entity framework model using code-first from an existing database. My first instinct was to create some intermediary classes that would be the model used by the views and have those classes work with the entity framework classes. As I was writing the intermediary classes I realized that I was mostly just re-implementing a lot of the things that the EF classes already did just with the occasional private setter or cast from one datatype to another. So that seemed like a waste. Is the general rule to directly use the entity framework classes as the Model for an MVC application? Or is there some benefit I'm missing for building these intermediary classes? | In my applications I have always separated things out, with different models for the database (Entity Framework) and MVC. I have separated these out into different projects too: Example.Entities - contains my entities for EF and the DB context for accessing them. Example.Models - contains MVC models. Example.Web - web application. Depends on both Example.Domain and Example.Models. Instead of holding references to other objects like the domain entities do, the MVC models hold IDs as integers. When a GET request for a page comes in, the MVC controller performs the database query, which returns an entity. I have written "Converter" methods that take a domain entity and convert it to an MVC model. There are other methods that do the opposite (from an MVC model to a domain entity). The model then gets passed to the view, and thus to the client. When a POST request comes in, the MVC controller gets an MVC model. A converter method converts this to a domain entity. This method also performs any validations that can't be expressed as attributes, and makes sure that if the domain entity already exists that we are updating it rather than getting a new one. The methods usually look something like this: public class PersonConverter
{
public MyDatabaseContext _db;
public PersonEntity Convert(PersonModel source)
{
PersonEntity destination = _db.People.Find(source.ID);
if(destination == null)
destination = new PersonEntity();
destination.Name = source.Name;
destination.Organisation = _db.Organisations.Find(source.OrganisationID);
//etc
return destination;
}
public PersonModel Convert(PersonEntity source)
{
PersonModel destination = new PersonModel()
{
Name = source.Name,
OrganisationID = source.Organisation.ID,
//etc
};
return destination;
}
} By using these methods I take the duplication out that would otherwise occur in each controller. The use of generics can deduplicate things even further. Doing things this way provides multiple benefits: You can customise a model to a specific view or action. Say you have a signup form for a person that when submitted, creates many different entities (person, organisation, address). Without seperate MVC models this will be very difficult. If I need to pass more information to the view than would otherwise be available just in the entity, or combine two entities into a single model, then my precious database models are never touched. If you ever serialise an MVC model as JSON or XML, you only get the immediate model being serialised, not every other entity linked to this one. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257507",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90285/"
]
} |
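The converter idea is not specific to C#; a stripped-down Python analogue using dataclasses is sketched below. `PersonEntity`, `PersonModel`, and the `find_by_id` callable are illustrative stand-ins for the EF entity, the MVC model, and `_db.People.Find`.

```python
from dataclasses import dataclass

@dataclass
class PersonEntity:          # persistence-layer object
    id: int
    name: str
    organisation_id: int

@dataclass
class PersonModel:           # view-layer object, holds plain IDs only
    id: int
    name: str
    organisation_id: int

def to_model(entity: PersonEntity) -> PersonModel:
    return PersonModel(entity.id, entity.name, entity.organisation_id)

def to_entity(model: PersonModel, find_by_id) -> PersonEntity:
    # find_by_id plays the role of _db.People.Find(source.ID): reuse the
    # stored entity when it exists, otherwise start a fresh one.
    entity = find_by_id(model.id) or PersonEntity(model.id, "", 0)
    entity.name = model.name
    entity.organisation_id = model.organisation_id
    return entity
```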
257,562 | Consider the following design public class Person
{
public virtual string Name { get; }
public Person (string name)
{
this.Name = name;
}
}
public class Karl : Person
{
public override string Name
{
get
{
return "Karl";
}
}
}
public class John : Person
{
public override string Name
{
get
{
return "John";
}
}
} Do you think there is something wrong here? To me Karl and John classes should be just instances instead of classes as they are exactly the same as : Person karl = new Person("Karl");
Person john = new Person("John"); Why would I create new classes when instances are enough? The classes does not add anything to the instance. | There is no need to have specific subclasses for every person. You're right, those should be instances instead. Goal of subclasses: to extend parent classes Subclasses are used to extend the functionality provided by the parent class. For example, you may have: A parent class Battery which can Power() something and can have a Voltage property, And a subclass RechargeableBattery , which can inherits Power() and Voltage , but can also be Recharge() d. Notice that you can pass an instance of RechargeableBattery class as a parameter to any method which accepts a Battery as an argument. This is called Liskov substitution principle , one of the five SOLID principles. Similarly, in real life, if my MP3 player accepts two AA batteries, I can substitute them by two rechargeable AA batteries. Note that it is sometimes difficult to determine whether you need to use a field or a subclass to represent a difference between something. For example, if you have to handle AA, AAA and 9-Volt batteries, would you create three subclasses or use an enum? “Replace Subclass with Fields” in Refactoring by Martin Fowler, page 232, may give you some ideas and how to move from one to another. In your example, Karl and John don't extend anything, nor do they provide any additional value: you can have the exactly same functionality using Person class directly. Having more lines of code with no additional value is never good. An example of a business case What could possibly be a business case where it would actually make sense to create a subclass for a specific person? Let's say we build an application which manages persons working in a company. The application manages permissions too, so Helen, the accountant, cannot access SVN repository, but Thomas and Mary, two programmers, cannot access accounting-related documents. Jimmy, the big boss (founder and CEO of the company) have very specific privileges no one has. He can, for example, shut down the entire system, or fire a person. You get the idea. The poorest model for such application is to have classes such as: because code duplication will arise very quickly. Even in the very basic example of four employees, you will duplicate code between Thomas and Mary classes. This will push you to create a common parent class Programmer . Since you may have multiple accountants as well, you will probably create Accountant class as well. Now, you notice that having the class Helen is not very useful, as well as keeping Thomas and Mary : most of your code works at the upper level anyway—at the level of accountants, programmers and Jimmy. The SVN server doesn't care if it's Thomas or Mary who needs to access the log—it only needs to know whether it's a programmer or an accountant. if (person is Programmer)
{
this.AccessGranted = true;
} So you end up removing the classes you don't use: “But I can keep Jimmy as-is, since there would always be only one CEO, one big boss—Jimmy”, you think. Moreover, Jimmy is used a lot in your code, which actually looks more like this, and not as in the previous example: if (person is Jimmy)
{
this.GiveUnrestrictedAccess(); // Because Jimmy should do whatever he wants.
}
else if (person is Programmer)
{
this.AccessGranted = true;
} The problem with that approach is that Jimmy can still be hit by a bus, and there would be a new CEO. Or the board of directors may decide that Mary is so great that she should be a new CEO, and Jimmy would be demoted to a position of a salesperson, so now, you need to walk through all your code and change everything. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257562",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35086/"
]
} |
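A compact Python sketch of the answer's conclusion: keep the role as data on Person instead of as subclasses, so promoting Mary to CEO is a data change rather than a code change. The role names and the permission rule are illustrative only.

```python
class Person:
    def __init__(self, name, role):
        self.name = name
        self.role = role          # "programmer", "accountant", "ceo", ...

def may_access_svn(person):
    # The SVN server only cares about the role, never about which
    # concrete person (or which class) is asking.
    return person.role in {"programmer", "ceo"}

mary = Person("Mary", "programmer")
helen = Person("Helen", "accountant")
print(may_access_svn(mary), may_access_svn(helen))   # True False

mary.role = "ceo"    # the promotion is just a field update
print(may_access_svn(mary))                          # True
```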
257,705 | I know that this is a debated practice, but let's suppose that this is the best option for me. I am wondering about what is the actual technique to do this. The approach that I see is this: 1) Make a friend class that of the class who's method I want to test. 2) In the friend class, create a public method(s) that call the private method(s) of the tested class. 3) Test the public methods of the friend class. Here is a simple example to illustrate the above steps: #include <iostream>
class MyClass
{
friend class MyFriend; // Step 1
private:
int plus_two(int a)
{
return a + 2;
}
};
class MyFriend
{
public:
MyFriend(MyClass *mc_ptr_1)
{
        mc_ptr = mc_ptr_1; // assign the member, not a new local
}
int plus_two(int a) // Step 2
{
return mc_ptr->plus_two(a);
}
private:
MyClass *mc_ptr;
};
int main()
{
MyClass mc;
MyFriend mf(&mc);
if (mf.plus_two(3) == 5) // Step 3
{
std::cout << "Passed" << std::endl;
}
else
{
std::cout << "Failed " << std::endl;
}
return 0;
} Edit: I see that in the discussion following one of the answers people are wondering about my code base. My class has methods that are called by other methods; none of these methods should be called outside the class, so they should be private. Of course they could be put into one method, but logically they are much better separate. These methods are complicated enough to warrant unit testing, and due to performance issues I will very likely have to re-factor these methods, hence it would be nice to have a test to make sure that my re-factoring didn't break anything. I am not the only one working on the team, though I am the only one who is working on this project including the tests. Having said the above, my question was not about whether it is a good practice to write unit tests for private methods, though I appreciate the feedback. | An alternative to friend (well, in a sense) that I use frequently is a pattern that I've come to know as access_by. It's pretty simple: class A {
void priv_method(){};
public:
template <class T> struct access_by;
template <class T> friend struct access_by;
}; Now, suppose class B is involved in testing A. You can write this: template <> struct A::access_by<B> {
    static void call_priv_method(A & a) {a.priv_method();}
}; You can then use this specialization of access_by to call private methods of A. Basically, what this does is put the onus of declaring friendship into the header file of the class that wants to call A's private methods. It also lets you add friends to A without changing A's source. Idiomatically, it also indicates to whoever reads the source of A that A does not consider B a true friend in the sense of extending its interface. Rather, the interface of A is complete as given and B needs special access into A (testing being a good example; I've also used this pattern when implementing boost python bindings, where sometimes a function that needs to be private in C++ is handy to expose into the python layer for the implementation). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257705",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/102106/"
]
} |
257,817 | A tutorial (for Javascript) I'm doing suggests we write a function like this one: function sayHello() {
//Some comments explaining the next line
window.alert("Hello");
} Other than obfuscation are there benefits to writing something like this in real life? If so, what are the benefits? | Please pardon my memory if I have this incorrect... Javascript isn't my preferred implementation language. There are several reasons why one would want to have a no arg function wrap another function call. While the simple call to window.alert("Hello"); is something that you could imagine just instead calling directly instead of sayHello() . But what if there is more to it? You've got a dozen places where you want to call sayHello() and have written window.alert("Hello"); instead. Now you want it to do a window.alert("Hello, it is now " + new Date()) . If you wrapped all those calls as sayHello() you change it one place. If you didn't, you change it in a dozen places. This touches on Don't Repeat Yourself . You do it because you don't want to have to do it a dozen times in the future. I worked with an i18n / l10n library in the past that used functions to do client side localization of the text. Consider the sayHello() function. You could have it print out hola when the user is localized to a Spanish language. This could look something like: function sayHello() {
var language = window.navigator.userLanguage || window.navigator.language;
if(language === 'es') { window.alert('Hola'); }
else { window.alert("Hello"); }
} Though, this isn't how the library worked. Instead, it had a set of files that looked like: # English file
greeting = hello # Spanish file
greeting = hola And then the library would detect the browser language setting and then create dynamic functions with the appropriate localization as the return value for a no-arguemnt function call based on the appropriate localization file. I'm not enough of a Javascript coder to say if that is good or bad... just that it was and can be seen as a possible approach. The point being, wrapping the call to another function in a function of its own is often quite useful and helps with the modularization of the application and may also result in easier to read code. All that bit aside, you are working from a tutorial. It is necessary to introduce things as simply as possible at the start. Introducing varargs style function calls from the start can result in some very confusing code for a person who is unfamiliar with coding in general. It is much easier to go from no argument, to arguments, to varargs style - with each building on the previous examples and understanding. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257817",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/59570/"
]
} |
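A rough Python sketch of the lookup-file approach the answer describes: the CATALOGUE dict stands in for the "greeting = hello / greeting = hola" files, and a no-argument function is built from whichever language was detected. All names here are illustrative.

```python
CATALOGUE = {
    "en": {"greeting": "Hello"},
    "es": {"greeting": "Hola"},
}

def make_say_hello(language):
    # Fall back to English when the detected language has no catalogue.
    messages = CATALOGUE.get(language, CATALOGUE["en"])

    def say_hello():             # the no-argument wrapper, as in the tutorial
        print(messages["greeting"])

    return say_hello

say_hello = make_say_hello("es")
say_hello()   # Hola
```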
257,868 | When designing an own programming language, when does it make sense to write a converter that takes the source code and converts it to C or C++ code so that I can use an existing compiler like gcc to end up with machine code? Are there projects that use this approach? | Tranlating to C code is a very well established habit. The original C with classes (and the early C++ implementations, then called Cfront ) did that successfully. Several implementations of Lisp or Scheme are doing that, e.g. Chicken Scheme , Scheme48 , Bigloo . Some people translated Prolog to C . And so did some versions of Mozart (and there have been attempts to compile Ocaml bytecode to C ). J.Pitrat's artificial intelligence CAIA system is also bootstrapped and generates all its C code. Vala also translates to C, for GTK related code. Queinnec's book Lisp In Small Pieces have some chapter about translation to C. One of the issues when translating to C is tail-recursive calls . The C standard does not guarantee that a C compiler is translating them properly (to a "jump with arguments", i.e. without eating call stack), even if in some cases, recent versions of GCC (or of Clang/LLVM) do that optimization. Another issue is garbage collection . Several implementations just use the Boehm conservative garbage collector (which is C friendly ...). If you wanted to garbage collect code (like several Lisp implementations do, e.g. SBCL) that might be a nightmare (you would like to dlclose on Posix). Yet another issue is dealing with first-class continuations and call/cc . But clever tricks are possible (look inside Chicken Scheme). Accessing the call-stack could require a lot of tricks (but see GNU backtrace , and libbacktrace etc....). Orthogonal persistence of continuations (i.e. of stacks or threads) would be difficult in C. Exception handling is often a matter to emit clever calls to longjmp etc... You may want to generate (in your emitted C code) appropriate #line directives. This is boring and takes a lot of work (you'll want that to e.g. produce more easily gdb -debuggable code). My obsolete GCC MELT lispy domain specific language (to customize or extend GCC ) is translated to C (actually to poor C++ now). It has its own generational copying garbage collector. (You might be interested by Qish or Ravenbrook MPS ). Actually, generational GC is easier in machine generated C code than in hand-written C code (because you'll tailor your C code generator for your write-barrier and GC machinery). The Bismon static source code analyzer (described in this DRAFT report) also generates C code. I don't know any language implementation translating to genuine C++ code, i.e. using some "compile-time garbage collection" technique to emit C++ code using a lot of STL templates and respecting the RAII idiom. (please tell if you know one). But RefPerSys aims to generate C++ code (or C code, perhaps also machine code). What is funny today is that (on current Linux desktops) C compilers may be fast enough to implement an interactive top level read-eval-print-loop translated to C: you'll emit C code (a few hundred lines) at every user interaction, you'll fork a compilation of it into a shared object, which you would then dlopen . (MELT is doing that all ready, and it is usually fast enough). All this might take a few tenths of a second and be acceptable by end-users. When possible, I would recommend translating to C, not to C++, in particular because C++ compilation is slow. 
However, C++ has today powerful standard containers , exceptions , λ-expressions , etc .... and is used or required by interesting C++ libraries or frameworks such as Qt , POCO , Tensorflow , and all these features is what motivates the choice of generating C++ code in a pet project of mine called RefPerSys . If generating C++ dynamically, either accept to wait more than a second for compiling every generated C++ file (e.g. into a temporary plugin , see for Linux the C++ dlopen mini howto ) or use clever tricks (e.g. ccache and/or GCC pre-compiled headers , etc....) while minimizing if possible the total amount of #include -d material) to decrease the C++ compilation time. If you are implementing your language, you might also consider (instead of emitting C code) some JIT libraries like libjit , GNU lightning , asmjit , or even LLVM or GCCJIT . If you want to translate to C, you might sometimes use tinycc : it compiles very quickly the generated C code (even in memory) to slow machine code. But in general you want to take advantage of the optimizations done by a real C compiler like GCC If you translate to C your language, be sure to build the entire AST of the generated C code in memory first (this also makes easier to generate first all the declarations, then all the definitions and function code). You would be able to do some optimizations/normalizations this way. Also, you could be interested in several GCC extensions (e.g. computed gotos). You'll probably want to avoid generating huge C functions - e.g. of a hundred thousands line of generated C - (you'll better split them into smaller pieces) since optimizing C compilers are very unhappy with very big C functions (in practice, and experimentally, gcc -O compilation time of large functions is proportional to the square of the function code size). So limit the size of your generated C functions to a few thousand lines each. Notice that both Clang (thru LLVM ) and GCC (thru libgccjit ) C & C++ compilers offer some way to emit some internal representations suited for these compilers, but doing so might (or not) be harder than emitting C (or C++) code, and is specific to each compiler.
Also, recent GCC are extensible thru dlopen -ed plugins . If designing a language to be translated to C, you probably want to have several tricks (or constructs) to generate a mixture of C with your language. My DSL2011 paper MELT: a Translated Domain Specific Language
Embedded in the GCC Compiler should give you useful hints. Generating C code might be done with GNU m4 (like GNU autoconf does). But you could also consider using GNU guile , Bigloo , SBCL , Ocaml , Python , GNU bison , GPP , etc... when coding your own C code generator. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257868",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94978/"
]
} |
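A toy Python sketch of emitting C, including the #line directives recommended above so a debugger points back at the original source. The tiny source "language" (a list of name/expression pairs) and the file names are made up for illustration.

```python
# Each entry pretends to be one definition in the source language.
functions = [("add_one", "x + 1"), ("times_two", "x * 2")]

with open("generated.c", "w") as out:
    for line_no, (name, expr) in enumerate(functions, start=1):
        # Map the emitted C back to the original line, as suggested above.
        out.write(f'#line {line_no} "example.mylang"\n')
        out.write(f"int {name}(int x) {{ return {expr}; }}\n")
```

The generated file can then be handed to gcc like any hand-written C translation unit.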
257,881 | I'm writing some test code for a feature which processes PDF files. The basic idea behind the tests is that I point them towards some PDFs I've selected specially, they process them and I check that the output is what I expect. My question is: where should I be storing these large-ish PDFs? Should I check them into version control along with the code? Or put them somewhere else? Obviously, the test code is useless without the PDFs (or even with different PDFs) but still, putting them into our repository feels wrong. | Your version control system should contain everything it needs to build, compile, test , and package an application for distribution (e.g. MSI, RPM). I would also argue build configurations and other scripts should also be in version control. I should be able to check out a project and have a complete compile, build, and test environment. There are two approaches to checking in test data. First, you can check in the test data itself (PDFs in this case). Second, you can check in source data that can be used to generate test data (if applicable). This could be a SQL script loaded into a blank database containing test data, or maybe a text-based file that can be compiled into a PDF or other file. Others may disagree with checking everything into version control, but I have found in my professional experience it is critical to ensuring a complete environment is able to be rebuilt from scratch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257881",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/150936/"
]
} |
257,885 | Is there a conceivable design pattern for any object-oriented program? I ask this because recently I saw an implementation of a Door class with a Lock . It was part of a test and the answer said that the code is following the Null Object pattern: class Lock
{
public:
virtual void close() = 0;
virtual void open() = 0;
virtual bool is_open() const = 0;
virtual ~Lock() { }
};
class DummyLock
: public Lock
{
private:
DummyLock();
DummyLock(const DummyLock&) = delete;
DummyLock& operator=(const DummyLock&) = delete;
private:
void close() { }
void open() { }
bool is_open() const { return true; }
public:
static DummyLock m_instance;
};
class Door
{
public:
Door() : m_lock(DummyLock::m_instance) { }
Door(Lock &lock) : m_lock(lock) { }
public:
Lock& get_lock() const { return m_lock; }
private:
Lock &m_lock;
}; This made me think: This code follows a good design pattern even though the description is so simple (this class is designing a door class with a lock), so if I am writing more complex code, should there always be some design pattern that I am following? | should there always be some design pattern that I am following? Dear God NO! I mean, you can go ahead and say that any random code is following some random XYZ pattern, but that's no more useful than me claiming to be king of my computer chair. Nobody else really knows what that means and even those that do won't exactly respect my claim. Design patterns are a communication tool so that programmers can tell other programmers what has been done or what should be done without spending a bunch of time repeating themselves. And since they're things that come up a bunch of times, they're useful concepts for programmers to learn "hey, making XYZ always seems to come up because it's good/useful". They do not replace the need for you to think for yourself, to tailor the patterns for the unique problem in front of you, or to handle all of the inevitable things that don't fit into nice buckets. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257885",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/150939/"
]
} |
257,935 | I'm working in a software development team as software developer. I've been working on the same project for three years now. The software is a 32-bit desktop based C# application in .NET 4. Our target platform in Windows 7 (we had to support Windows XP till last year). The software communicates with various custom hardware for which custom drivers are written. The hardware manufacturing and driver software is written by our client. There is different driver for 32-bit and 64-bit Windows of course. During our system testing phase, we execute all/most test cases in both 32-bit and 64-bit Windows 7. I can't recall if we got any bug in our software that exist in only one flavor of Windows. Having this experience I've started to wonder, do we really need to test 32-bit software on 64-bit Windows? What's the industry standard? | Most of the bugs we encountered with running 32-bit software on 64-bit windows had to do with the location of the software ( Program Files (x86) instead of Program Files ), locations of registry keys (some were found in Wow6432Node). We had these problems mostly because we needed to communicate with other software (also 32-bit), and so we needed to test the software on both 32-bit and 64-bit... When you didn't have these problems, I believe it is quite safe not testing on both platforms when you explicitly compile in 32-bit mode. When compiled in 32-bit, the .NET runtime will run everything in 32-bit mode, and it should work the same as the 32-bit mode on 32-bit platforms. According to 64-bit Applications ( MSDN ), 32-bit applications are run in Wow64 mode and Running 32-bit Applications (MSDN) explains this mode in more detail. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/257935",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5176/"
]
} |
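A quick Python check related to the Wow64 point above: a 32-bit process on 64-bit Windows runs under WOW64, which Windows signals through the PROCESSOR_ARCHITEW6432 environment variable. The script is a harmless sketch on other platforms; it simply reports process and machine bitness.

```python
import os
import platform
import struct

process_bits = struct.calcsize("P") * 8      # pointer size of this process
wow64 = process_bits == 32 and "PROCESSOR_ARCHITEW6432" in os.environ

print(f"{process_bits}-bit process on {platform.machine()}; "
      f"running under WOW64: {wow64}")
```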
258,012 | Many modern languages provide rich exception handling features, but Apple's Swift programming language does not provide an exception handling mechanism . Steeped in exceptions as I am , I'm having trouble wrapping my mind around what this means. Swift has assertions, and of course return values; but I'm having trouble picturing how my exception-based way of thinking maps to a world without exceptions (and, for that matter, why such a world is desirable ). Are there things I can't do in a language like Swift that I could do with exceptions? Do I gain something by losing exceptions? How for example might I best express something like try:
operation_that_can_throw_ioerror()
except IOError:
handle_the_exception_somehow()
else:
# we don't want to catch the IOError if it's raised
another_operation_that_can_throw_ioerror()
finally:
something_we_always_need_to_do() in a language (Swift, for example) that lacks exception handling? | In embedded programming, exceptions were traditionally not allowed, because the overhead of the stack unwinding you have to do was deemed an unacceptable variability when trying to maintain real-time performance. While smartphones could technically be considered real time platforms, they are powerful enough now where the old limitations of embedded systems don't really apply anymore. I just bring it up for the sake of thoroughness. Exceptions are often supported in functional programming languages, but so rarely used that they may as well not be. One reason is lazy evaluation, which is done occasionally even in languages that are not lazy by default. Having a function that executes with a different stack than the place it was queued to execute makes it difficult to determine where to put your exception handler. The other reason is first class functions allow for constructs like options and futures that give you the syntactic benefits of exceptions with more flexibility. In other words, the rest of the language is expressive enough that exceptions don't buy you anything. I'm not familiar with Swift, but the little I've read about its error handling suggests they intended for error handling to follow more functional-style patterns. I've seen code examples with success and failure blocks that look very much like futures. Here's an example using a Future from this Scala tutorial : val f: Future[List[String]] = future {
session.getRecentPosts
}
f onFailure {
case t => println("An error has occured: " + t.getMessage)
}
f onSuccess {
case posts => for (post <- posts) println(post)
} You can see it has roughly the same structure as your example using exceptions. The future block is like a try . The onFailure block is like an exception handler. In Scala, as in most functional languages, Future is implemented completely using the language itself. It doesn't require any special syntax like exceptions do. That means you can define your own similar constructs. Maybe add a timeout block, for example, or logging functionality. Additionally, you can pass the future around, return it from the function, store it in a data structure, or whatever. It's a first-class value. You're not limited like exceptions which must be propagated straight up the stack. Options solve the error handling problem in a slightly different way, which works better for some use cases. You're not stuck with just the one method. Those are the sorts of things you "gain by losing exceptions." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/258012",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28601/"
]
} |
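The Scala example translates fairly directly to Python's concurrent.futures, which shows the "future as a first-class value" point without a try/except at the call site; get_recent_posts is a stand-in for session.getRecentPosts.

```python
from concurrent.futures import ThreadPoolExecutor

def get_recent_posts():
    # Stand-in for session.getRecentPosts in the Scala example above.
    return ["first post", "second post"]

def on_done(future):
    error = future.exception()       # plays the role of onFailure
    if error is not None:
        print("An error has occurred:", error)
    else:                            # plays the role of onSuccess
        for post in future.result():
            print(post)

with ThreadPoolExecutor() as pool:
    future = pool.submit(get_recent_posts)
    # The future is an ordinary value: it can be stored, passed around,
    # or given callbacks, exactly as the answer describes.
    future.add_done_callback(on_done)
```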
258,189 | If someone forks your repository and commits some changes, what is the accepted way to proceed if you'd like to ask them whether it's alright to pull those changes in? Can you issue a pull request on the forker's behalf and count on GitHub to alert them somehow? If not -- I notice GitHub doesn't support sending users messages; should you somehow contact the user outside of the site? edit - By the way, both the question and answer are obviously different from the Q&A linked as "duplicate" . | If you want to message them via GitHub, why not use Mention Notifications ? Open an issue on your own repository and mention the forker in that issue. The issue should be relevant to the stuff you want to pull, so you can discuss the pull request they need to send. Something like "@JohnSmith has already implemented this feature - can you please make a pull request?". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/258189",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29448/"
]
} |
258,209 | I'm not a Fortran developer myself, but I'm about to use it a little and found myself wondering why, if it is much older than C but equally as performant as C, was it never used to develop any operating system before C and UNIX came along? A substitute answer, if the above is invalid, might be which operating systems were developed in Fortran. But still, it didn't seem to catch on at all. | I'd say that Fortran, even of pre-C times, abstracts the programmer from hardware details too much. No pointer support. If you want to pass large amounts of data between subroutines, you use a COMMON block, and you don't control its allocation. Pointer arithmetic and structure allocation control are hard to non-existent. Data types are numeric-oriented. Referring to a particular byte is a bit hard, let alone bits. I/O is provided by language statements, not by subroutines. You depend on the compiler's runtime for it, and cannot roll your own. This is off the top of my head; last time I wrote Fortran-IV code was ~25 years ago. Possibly you could alter a Fortran compiler to introduce the missing capabilities. But building a special-purpose 'portable assembly' language like C proved to be easier and more effective. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/258209",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/132708/"
]
} |
258,318 | I'm speaking specifically about a C# .NET 4 program running on Windows XP or higher, but general answers are also acceptable. Assume an already optimized and efficient program. The problem here is entirely down to effects of high CPU usage on hardware, and whether a high-usage program should be throttled to reduce wear, not on whether my implementation is efficient. A colleague today suggested that I should not aim for 100% CPU utilization on my data load processes because "modern CPUs are cheap and will degrade quickly at 100% CPU". Is this true? And if so, why? I was previously under the impression that 100% CPU usage was preferable for an intensive or long operation, and I couldn't find any respectable sources on the subject either way. | If cooling is insufficient, the CPU might overheat. But they all (well, at least all modern PC CPUs) feature various thermal protection mechanisms which will throttle the clock speed or, as a final resort, shut down. So yes, on a dusty laptop, 100 % CPU load could cause temporary problems, but nothing will break or "degrade" (whatever that means). For CPU bound problems, 100 % CPU load is the right way to go. As for application (UI) responsiveness, that's a separate concept from CPU utilization. It's entirely possible to have an unresponsive application that uses 1 % CPU, or responsive application that uses 100 % CPU. UI responsiveness boils down to amount of work done in the UI thread, and the priority of UI thread vs. other threads. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/258318",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91018/"
]
} |
258,431 | What exactly makes reading from the process memory a pure operation? Suppose I created an array of 100 integers in the global memory and then took the 42nd element of this array. It is not a side effect, right? So why is reading the same array of 100 integers from a file a side-effect? | If the memory you access can change, then it is indeed a side effect. For example, in Haskell, the function to access a mutable array ( IOArray ) has type Ix i => IOArray i e -> i -> IO e (slightly simplified for our purposes). While accessing an immutable array has type Ix i => Array i e -> i -> e . The first version returns something of type IO e which means it has I/O side effects. The second version simply returns an element of type e without any side effects. In case of accessing a file, you simply cannot know at compile time whether the file will ever change during a run of the program. Therefore, you have to always treat it as an operation with potential side effects. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/258431",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/98954/"
]
} |
258,479 | I am looking for a methodology for choosing a language. I am not asking for opinions about languages. I have been tasked with the process of comparing our shop's current language with others that are available. We are a web development shop btw. Our CEO would like a full white paper about all web based languages that are available, What parent language they are derivatives of (e.g jsp is from java which is from c/c++). I need to create a matrix with all the key factors of a particular language as well and the short comings of that given language. Is the language limited by platform, is it designed for functional programming, procedural or OO or can it be used with any programming paradigm? I also need to have information less technical, like the size of the talent pool for a given language and the median salary in that pool. How will the marketplace view our choice? We started looking for a consultant to help us understand all of these things but what we have found it that most consultants are coming from a development background and often it seems that the answer is " xxx is the best language because it is the one that I have used the most over the last n years and it has never let me down. You could Supplement it with yyy for front end and use zzz library" I am feeling overwhelmed with this task and I feel like the best course of action, given what our CEO is looking for, is to look in the world of academia and hire a professor with no actual development experience to come in and "teach" us about all possible languages. Has anyone else had to go through this exercise? If you have can you share the steps and/or methodology you used to go through the process? | @FrustratedWithFormsDesigner hinted at this above, I'll be more blunt: you've been charged with a expensive but useless task. I suspect the CEO is looking for irrefutable, objective evidence which will support his choice of language. The problem is that preference of language is loaded with far too many subjective and extrinsic factors for the white-paper to be meaningful let alone useful. Put another way, if there were an ideal language, everyone would be using it instead of those "objectively" defective languages. It also bespeaks a degree of micromanagement that ought be the province of the engineers who will have to make it work. Erlang might be the "objectively best" but if no one knows it, add a 6 month/engineer start up cost and 6 more month / engineer to gain competency. Since I don't have your job, I'm not worried about losing it, although you may. I'd give the CEO a paper on the Physical Church-Turing Thesis. I'd then get together with the senior engineers and get their non-rigorous, non-objective opinion as to what should be used and tell the CEO that's what you shall use. In exchange you'll promise the engineers will keep out of board meetings, CFOs, Accountancy Methods, selection of VPs and so on. There is a reason we specialize, and he has no more place in engineering preferences than you do in his domain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/258479",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/80640/"
]
} |
258,509 | I have the following algorithm, which finds duplicates and removes them:
public static int numDuplicatesB(int[] arr) {
Sort.mergesort(arr);
int numDups = 0;
for (int i = 1; i < arr.length; i++) {
if (arr[i] == arr[i - 1]) {
numDups++;
        }
    }
return numDups;
}
I am trying to find the worst-case time complexity of this. I know mergesort is n log(n), and in my for loop I am iterating over the entire data set, so that would count as n. I am unsure what to do with these numbers though. Should I just sum them together? If I were to do that, how would I do it? | O(n) + O(n log(n)) = O(n log(n)). For Big O complexity, all you care about is the dominant term. n log(n) dominates n, so that's the only term that you care about. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/258509",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/148476/"
]
} |
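Spelling out the arithmetic behind that one-liner for the code above (the constants c_1 and c_2 are arbitrary and only used for illustration): merge sort contributes about c_1 n \log n steps and the scan contributes about c_2 n steps, so
    T(n) = c_1\, n \log n + c_2\, n \le (c_1 + c_2)\, n \log n \quad \text{for } n \ge 2,
which is O(n \log n). When complexities are added, the dominant term absorbs the slower-growing one.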
259,838 | I am developing a typical Web Application with the following layers UI Layer (MVC) Business Logic Layer (BAL) Data Access Layer (DAL) Each layer has its own DTO object including the BAL and DAL. My questions regarding this are as follows The DTO returned by the DAL is simply converted to the corresponding DTO in the BAL and sent to the UI Layer. Both attributes and the structure of the DTO objects are the same in some cases. In such scenarios is it better to simply return the DTO in the DAL to the UI layer without including an intermediate object. What is the best way to name these DTO objects and other objects in each layer. Should I use some prefix such as DTOName, ServiceName? The reason why I am asking to use a prefix is because if not the classes in my Solution clashes with other classes in the Framework and with a prefix it is easier for me to understand where each class belongs? | Preface Hopefully this is obvious, but... in the suggested namespaces below, you would replace MyCompany and MyProject with the actual names of your company and project. DTOs I would recommend using the same DTO classes across all layers. Fewer points of maintenance that way. I usually put them under a MyCompany.MyProject.Models namespace, in their own VS project of the same name. And I usually name them simply after the real-world entity that they represent. (Ideally, the database tables use the same names too, but sometimes it makes sense to set up the schema a bit differently there.) Examples: Person , Address , Product Dependencies: None (other than standard .NET or helper libraries) DAL My personal preference here is to use a one-for-one set of DAL classes matching the DTO classes, but in a MyCompany.MyProject.DataAccess namespace/project. Class names here end with an Engine suffix in order to avoid conflicts. (If you don't like that term, then a DataAccess suffix would work fine too. Just be consistent with whatever you choose.) Each class provides simple CRUD options hitting the database, using the DTO classes for most input parameters and return types (inside a generic List when there are more than one, e.g., the return from a Find() method). Examples: PersonEngine , AddressEngine , ProductEngine Dependencies: MyCompany.MyProject.Models BAL/BLL Also a one-for-one mapping here, but in a MyCompany.MyProject.Logic namespace/project, and with classes getting a Logic suffix. This should be the only layer that calls the DAL! Classes here are quite often just a simple pass-through to the DAL, but if & when business rules need to be implemented, this is the place for it. Examples: PersonLogic , AddressLogic , ProductLogic Dependencies: MyCompany.MyProject.Models , MyCompany.MyProject.DataAccess API If there's a web services API layer, I use the same one-for-one approach, but in a MyCompany.MyProject.WebApi namespace/project, with Services as the class suffix. (Unless you're using ASP.NET Web API, in which case you would of course use the Controller suffix instead). Examples: PersonServices , AddressServices , ProductServices Dependencies: MyCompany.MyProject.Models , MyCompany.MyProject.Logic (never bypass this by calling the DAL directly!) An Observation on Business Logic It seems to be increasingly more common for people to leave out the BAL/BLL, and instead implement business logic in one or more of the other layers, wherever it makes the most sense. 
If you do this, just be absolutely certain that (1) all application code goes through the layer(s) with the business logic, and (2) it's obvious and/or well-documented where each particular business rule has been implemented. If in doubt, don't try this at home. A Final Note on Enterprise-Level Architecture If you're in a large company, or other situation where the same database tables get shared across multiple applications, then I would recommend leaving the MyProject portion out of the above namespaces/projects. That way those layers can be shared by multiple front-end applications (and also behind-the-scenes utilities like Windows Services). But only do this if you have strong cross-team communication and thorough automated regression testing in place!!! Otherwise, one team's changes to a shared core component are likely to break another team's application. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/259838",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/152866/"
]
} |
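As a rough Java transliteration of the naming and dependency rules in the answer above (the .NET namespaces become packages; all names are placeholders, and each class would live in its own package and file):
    class Person {                        // com.mycompany.myproject.models
        int id;
        String name;
    }

    class PersonEngine {                  // com.mycompany.myproject.dataaccess: simple CRUD against the database
        Person findById(int id) {
            Person p = new Person();      // a real version would run a query or an ORM call here
            p.id = id;
            return p;
        }
    }

    class PersonLogic {                   // com.mycompany.myproject.logic: the only layer that calls the DAL
        private final PersonEngine engine = new PersonEngine();
        Person get(int id) {
            // business rules, if any, wrap the data access call
            return engine.findById(id);
        }
    }

    class PersonServices {                // com.mycompany.myproject.webapi: calls the logic layer, never the DAL
        private final PersonLogic logic = new PersonLogic();
        Person get(int id) {
            return logic.get(id);
        }
    }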
259,909 | The below interface inheritance is illegal in PHP, but I think it would be fairly useful in real life. Is there an actual antipattern or documented problem with the below design, that PHP is protecting me from? <?php
/**
* Marker interface
*/
interface IConfig {}
/**
* An api sdk tool
*/
interface IApi
{
public function __construct(IConfig $cfg);
}
/**
* Api configuration specific to http
*/
interface IHttpConfig extends IConfig
{
public function getSomeNiceHttpSpecificFeature();
}
/**
* Illegal, but would be really nice to have.
* Is this not allowed by design?
*/
interface IHttpApi extends IApi
{
/**
* This constructor must have -exactly- the same
* signature as IApi, even though its first argument
* is a subtype of the parent interface's required
* constructor parameter.
*/
public function __construct(IHttpConfig $cfg);
} | Let's ignore for a second that the method in question is __construct and call it frobnicate. Now suppose you have an object api implementing IHttpApi, and an object config implementing IHttpConfig. Clearly, this code fits the interface: $api->frobnicate($config). But let's suppose we upcast api to IApi, for example by passing it to function frobnicateTwice(IApi $api). Now in that function, frobnicate is called, and since it only deals with IApi, it may perform a call such as $api->frobnicate(new SpecificConfig(...)) where SpecificConfig implements IConfig but not IHttpConfig. At no point did anyone do anything unsavory with types, yet IHttpApi::frobnicate got a SpecificConfig where it expected an IHttpConfig. This is no good. We don't want to prohibit upcasting, we want subtyping, and we clearly want multiple classes implementing an interface. So the only sensible option is to prohibit a subtype method requiring more specific types for parameters. (A similar problem occurs when you want to return a more general type.) Formally, you've walked into a classic trap surrounding polymorphism: variance. Not all occurrences of a type T can be replaced by a subtype U. Conversely, not all occurrences of a type T can be replaced by a supertype S. Careful consideration (or better yet, strict application of type theory) is necessary. Coming back to __construct: since AFAIK you can't exactly instantiate an interface, only a concrete implementer, this may seem like a pointless restriction (it's never going to get called through an interface). But in that case, why include __construct in the interface to begin with? Regardless, it would be of little use to special-case __construct here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/259909",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19033/"
]
} |
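The same restriction exists outside PHP. A hedged Java sketch of one safe way to express "an API that only accepts a more specific config" is to make the config type a parameter of the interface; the compiler then rejects the unsound upcast described in the answer. All names here are invented:
    interface Config {}
    interface HttpConfig extends Config { String niceHttpFeature(); }

    // The config type is part of the API's type instead of being narrowed in a subtype.
    interface Api<C extends Config> {
        void frobnicate(C cfg);
    }

    class HttpApi implements Api<HttpConfig> {
        @Override
        public void frobnicate(HttpConfig cfg) {
            System.out.println(cfg.niceHttpFeature());
        }
    }

    class VarianceSketch {
        // A caller that wants to pass arbitrary configs must say so in its own signature...
        static void frobnicateTwice(Api<Config> api, Config cfg) {
            api.frobnicate(cfg);
            api.frobnicate(cfg);
        }

        public static void main(String[] args) {
            HttpApi api = new HttpApi();
            // ...so the unsound call is rejected at compile time:
            // frobnicateTwice(api, new Config() {});   // does not compile: HttpApi is not an Api<Config>
            api.frobnicate(() -> "keep-alive");         // fine: HttpConfig is a functional interface here
        }
    }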
259,930 | As a junior developer, I'm working in a company that develops software for the airline industry. We have a test team, so I don't have any motivation to learn testing software. My friend is working for a small company as a back-end developer. Their team doesn't have any specific test team, and they do their tests on their own. Should a back-end developer learn about testing software? | Absolutely and unequivocally: yes! It's a core skill which you will be expected to have at a large percentage of companies you'll want to work for in the future. As a developer, the technical aspects of testing are more interesting than the methodological ones: learn to use a unit testing framework, set up automated testing, try doing test-driven development to see how you like it. If you want to specialize in it, performance/stress testing and security/penetration testing are quite sought-after skills. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/259930",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147535/"
]
} |
260,109 | Conventional parsers consume their entire input and produce a single parse tree. I'm looking for one that consumes a continuous stream and produces a parse forest [ edit: see discussion in comments regarding why this use of that term may be unconventional ]. My gut says that I can't be the first person to need (or think I need) such a parser, but I've searched off and on for months to no avail. I recognize that I may be ensnared by the XY problem. My ultimate purpose is to parse a stream of text, ignoring most of it, and produce a stream of parse trees from the sections that are recognized. So my question is conditional: if a class of parsers with these characteristics exists, what is it called? And if not, why not? What is the alternative? Perhaps I'm missing some way I can make conventional parsers do what I want. | A parser that returns a (partial) result before the whole input has been consumed is called an incremental parser . Incremental parsing can be difficult if there are local ambiguities in a grammar that are only decided later in the input. Another difficulty is feigning those parts of the parse tree that haven't been reached yet. A parser that returns a forest of all possible parse trees – that is, returns a parse tree for each possible derivation of an ambiguous grammar – is called … I'm not sure if these things have a name yet. I know that the Marpa parser generator is capable of this, but any Earley or GLR based parser should be able to pull this off. However, you don't seem to want any of that. You have a stream with multiple embedded documents, with garbage in between: garbagegarbage{key:42}garbagegarbage[1,2,3]{id:0}garbage... You seem to want a parser that skips over the garbage, and (lazily) yields a sequence of ASTs for each document. This could be considered to be an incremental parser in its most general sense. But you'd actually implement a loop like this: while stream is not empty:
try:
yield parse_document(stream at current position)
except:
        advance position in stream by 1 character or token
The parse_document function would then be a conventional, non-incremental parser. There is a minor difficulty of ensuring that you have read enough of the input stream for a successful parse. How this can be handled depends on the type of parser you are using. Possibilities include growing a buffer on certain parse errors, or using lazy tokenization. Lazy tokenization is probably the most elegant solution given that your input is a stream. Instead of having a lexer phase produce a fixed list of tokens, the parser would lazily request the next token from a lexer callback [1]. The lexer would then consume as much of the stream as needed. This way, the parser can only fail when the real end of the stream is reached, or when a real parse error occurred (i.e. we started parsing while still in garbage). [1] A callback-driven lexer is a good idea in other contexts as well, because this can avoid some problems with longest-token matching. If you know what kind of documents you are searching for, you can optimize the skipping to stop only at promising locations. E.g. a JSON document always begins with the character { or [. Therefore, garbage is any string that does not contain these characters. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260109",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/145776/"
]
} |
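A rough Java transliteration of the loop in the answer above, specialised to the JSON-ish example. The Document type and the parseDocument method are placeholders, not a real parser:
    import java.util.ArrayList;
    import java.util.List;

    class SkipAndParseSketch {
        record Document(String text) {}
        static class ParseError extends Exception {}

        // Placeholder: a real implementation would run an ordinary, non-incremental parser
        // starting at 'pos' and return the matched document text.
        static Document parseDocument(String input, int pos) throws ParseError {
            throw new ParseError();
        }

        static List<Document> extractAll(String input) {
            List<Document> found = new ArrayList<>();
            int pos = 0;
            while (pos < input.length()) {
                char c = input.charAt(pos);
                if (c == '{' || c == '[') {              // only promising start characters
                    try {
                        Document d = parseDocument(input, pos);
                        found.add(d);
                        pos += d.text().length();        // skip past the parsed document
                        continue;
                    } catch (ParseError e) {
                        // fall through: treat this character as garbage
                    }
                }
                pos++;                                    // advance by one character
            }
            return found;
        }
    }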
260,183 | I am learning TDD using C#. As far as I know, tests should drive the development; that is, first write a failing test, then write the bare minimum code to pass the test, then refactor. But it is also said to "Program to an Interface, not an Implementation", so write an interface first. This is where my confusion starts. If I write the interface first, then I am violating two things: (1) the code that is written for the interface is not driven by a test; (2) it is not the bare minimum, since obviously I can write it with a simple class. Should I start by writing tests for the interface also? Without any implementation, what am I going to test? If this question sounds silly, sorry for that, but I am utterly confused. Maybe I am taking things too literally. | Your first violation ("the code that is written for the interface is not driven by a test") is not valid. Let's use a trivial example. Suppose you're writing a calculator class, and you're writing an addition operation. What test might you write?
import org.junit.Test;
import org.junit.Assert;

public class CalculatorTest {
@Test
public void testAddTwoIntegers() {
Calculator calc = new Calculator();
int result = calc.add(2, 2);
Assert.assertEquals(4, result);
}
}
Your test has just defined the interface. It is the add method, see? add takes two arguments and returns their sum. You might later determine that you need multiple Calculators, and extract an (in this case) Java interface at that time. Your tests shouldn't change then, since you tested the public interface of that class. At a more theoretical level, tests are the executable specification for a system. Interfaces to a system should be driven by the users of that system, and tests are the first method you have to define interactions. I don't think you can separate interface design from test design. Defining interactions and designing tests for them are the same mental operation - when I send this information into an interface, I expect a certain result. When something is wrong with my input, I expect this error. You can do this design work on paper and then write your tests from that, or you can do them at the same time - it doesn't really matter. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260183",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147058/"
]
} |
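For completeness, the bare-minimum production code that would make the test in the answer pass (a sketch; only the add method is implied by the test):
    public class Calculator {
        public int add(int a, int b) {
            return a + b;
        }
    }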
260,260 | It's commonly accepted that Java generics failed in some important ways. The combination of wildcards and bounds led to some seriously unreadable code. However, when I look at other languages, I really can't seem to find a generic type system that programmers are happy with. If we take the following as design goals of a such a type system: Always produces easy-to-read type declarations Easy to learn (no need to brush up on covariance, contravariance, etc.) maximizes the number of compile-time errors Is there any language that got it right? If I google, the only thing I see is complaints about how the type system sucks in language X. Is this kind of complexity inherent in generic typing? Should we just give up on trying to verify type safety 100% at compile time? My main question is which is the language that "got it right" the best with respect to these three goals. I realize that that's subjective, but so far I can't even find one language where not all it's programmers agree that the generic type system is a mess. Addendum: as noted, the combination of subtyping/inheritance and generics is what creates the complexity, so I'm really looking for a language that combines both and avoids the explosion of complexity. | While Generics have been mainstream in the functional programming community for decades, adding generics to object oriented programming languages offers some unique challenges, specifically the interaction of subtyping and generics. However, even if we focus on object oriented programming languages, and Java in particular, a far better generics system could have been designed: Generic types should be admissible wherever other types are. In particular, if T is a type parameter, the following expressions should compile without warnings: object instanceof T;
T t = (T) object;
T[] array = new T[1];
Yes, this requires generics to be reified, just like every other type in the language. Covariance and contravariance of a generic type should be specified in (or inferred from) its declaration, rather than every time the generic type is used, so we can write
Future<Provider<Integer>> s;
Future<Provider<Number>> o = s;
rather than
Future<? extends Provider<Integer>> s;
Future<? extends Provider<? extends Number>> o = s;
As generic types can get rather long, we should not need to specify them redundantly. That is, we should be able to write
Map<String, Map<String, List<LanguageDesigner>>> map;
for (var e : map.values()) {
for (var list : e.values()) {
for (var person : list) {
greet(person);
}
}
}
rather than
Map<String, Map<String, List<LanguageDesigner>>> map;
for (Map<String, List<LanguageDesigner>> e : map.values()) {
for (List<LanguageDesigner> list : e.values()) {
for (LanguageDesigner person : list) {
greet(person);
}
}
}
Any type should be admissible as a type parameter, not just reference types. (If we can have an int[], why can we not have a List<int>?) All of this is possible in C#. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260260",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/99583/"
]
} |
260,272 | Our company has been using newer versions of the Dojo framework, which have progressed to an AMD-based loader format. I'm currently trying to find logical ways to separate layer files, taking a module and all its dependencies, and wrapping them all into single minified files. My goals are generally as follows: Minimize the number of separate requests made per-page When possible, make use of cache so that large modules that have already been used are re-retrieved. Our app consists of many hundreds of pages and dialogs, much of which have their own custom logic and related scripts. I'm starting to run into more and more complex issues in terms of finding the right way to construct layer files for the best performance. Just recently, I ran into the following type of scenario: A depends on B, depends on C, depends on D ...etc... depends on Z. A1 depends on D (which depends on ... Z) A2 depends on L (which depends on ... Z) The scenario is mostly just an example - I don't have 24 dependencies in a straight tree. But under our previous method of simply declaring all objects globally under a namespace, this would be a simple matter of minifying everything to one layer file that's included on relevant pages. As it stands now, I'm not sure what I'd gain by making D its own layer, or A, or L. Just to avoid loading the same code twice, we'd be separating it into about 3 or 4 requests. One troublesome fact is the way that the page's script will always immediately put out new script requests for anything in its "require()" block that it hasn't cached just yet. Moreover, it's become very complicated just to comprehend and discuss. I wanted to know how other people tend to approach this problem, hopefully in a simpler way; the previous version of our product had some significant javascript loading-time issues, so this certainly isn't premature optimization (it could be "past-mature"). We are starting to switch our pages to using asynchronously-loaded script tags now that most of our scripts support it, in case that's any help. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260272",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90889/"
]
} |
260,339 | I am starting my first year (in college) in Computer Science next year and I write mostly in C (if that matters). I have tried searching, but most of what I find assumes knowledge of lambda calculus. Why is lambda calculus considered so much more useful than single-variable calculus in programming? Is there a relationship between lambda expressions and functional programs? Was it Alonzo Church's work on lambda calculus that influenced the development of programming languages? Everyone outside school keeps buzzing about it and I have no clue what they will be talking about, even though I am eager to learn it and see how it directly relates to my programming and understanding of programming languages. | The Lambda Calculus is interesting, elegant, and makes it much easier to understand functional programming languages. However, you won't encounter the LC in a typical CS Bachelor course, so you don't have to learn it right now – I would recommend experimenting with functional languages first before revisiting the Lambda Calculus. I believe OCaml is a good starting point into functional programming for a C programmer, and that Scheme is a good starting point to dive into the Lambda Calculus. The Lambda Calculus is not associated with Calculus (which ought to be called Analysis instead). In general, a calculus is a “formal system”, i.e. a set of rules to do something. While Differential Calculus provides rules regarding the change of values, the rules of the Lambda Calculus describe computation itself. From this set of very basic rules, we can build arbitrary computations, data representations such as booleans, integers, or lists, and even control flow constructs such as conditionals or loops. The LC is equivalent to Turing Machines, but each model has different strengths. Lambda Calculus had an immense impact on programming languages. The second high-level language to be implemented was Lisp, which can be understood as a direct encoding of the LC into a programming language. This “functional programming” has had an immense effect on the evolution of programming languages. Features such as anonymous functions, function pointers, closures (nested functions), garbage collection, variable scope, metaprogramming, advances in type systems, type inference, interpreted languages, dynamically-typed languages, and object-oriented programming are all owed in large part to the functional programming branch of programming languages. There's a joke that any new (non-academic) programming language only adds features that Lisp has already had for decades. Beyond that, the Lambda Calculus and other related calculi are indispensable tools in programming language theory and in certain compiler construction techniques. Any language that has anonymous functions which behave as closures and can be passed around freely immediately contains an encoding of the lambda calculus. Anonymous functions correspond to lambda expressions, except that in the LC functions always have exactly one argument. However, any Turing-complete language is equivalent to the LC, so the LC can always be implemented on top of such languages. This tends to happen in rule-matching systems or overly intelligent configuration formats, giving rise to “Greenspun's tenth rule” (in jest – mostly): “Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260339",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153431/"
]
} |
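As a small illustration of the answer's claim that data can be built from nothing but functions, here are the Church booleans (TRUE = λx.λy.x, FALSE = λx.λy.y) written with Java lambdas; choosing between two values is just function application. This is a toy encoding for illustration only:
    import java.util.function.Function;

    class ChurchSketch {
        // TRUE  = λx.λy.x : returns its first argument
        static <T> Function<T, Function<T, T>> churchTrue()  { return x -> y -> x; }
        // FALSE = λx.λy.y : returns its second argument
        static <T> Function<T, Function<T, T>> churchFalse() { return x -> y -> y; }

        public static void main(String[] args) {
            // "if b then yes else no" is just b.apply("yes").apply("no")
            System.out.println(ChurchSketch.<String>churchTrue().apply("yes").apply("no"));  // yes
            System.out.println(ChurchSketch.<String>churchFalse().apply("yes").apply("no")); // no
        }
    }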
260,343 | I keep hearing the phrase "Favour Composition over Inheritance" from GoF, which is being annoyingly mentioned repeatedly by my friend, who thinks it is a valid blanket statement. But is it not more sensible to consider that the real reason why inheritance exists is not just for behaviour reuse, but for the case where a hierarchy of classes inheriting from the same ancestor need to be passed as arguments to a function? That is: to implicitly implement polymorphism, and these descendent classes have methods which are polymorphisms of a funtion in the ancestral class. Use of inheritance in this manner is the sole reason why herds and fleets of programmers find it pleasurable to write UI with Microsoft's WPF with (Visual) C#. How would composition fit nicely into the picture of UI framework design, without being too verbose? I know compatibility clashes may arise from the usage of inheritance, which is one of the things the statement aims to protect one from, but is it not a question of bad design on the programmer's part (failure to discern when to inherit and when to compose) rather than on the paradigm or language? P.S.: I used the terms ancestor and descendent to avoid implying inheritance depth through the usage of parent and child. | You ask why, as if the whether is decided. Lets consider your supposition: Why is inheritance generally viewed as a bad thing by OOP proponents I disagree with the premise of the question. Inheritance isn't generally viewed as bad, it is viewed as misused and overused. GoF Design Patterns says no such thing about it being bad. Let's see what GoF Design Patterns actually says... p20 - In the discussion of Favor composition over inheritance , that Inheritance and object composition thus work together . p22 - Object composition lets you change the behavior being composed at run-time, but it also requires indirection and can be less efficient. p25 - ... on the other hand, heavy use of object composition can make designs harder to understand. And so on. Now, there are also plenty of downsides to inheritance that I could quote, but the point is, GoF doesn't claim inheritance is bad, and clearly teaches the pros and cons of various techniques. What does favour mean in this context? Don't make inheritance your first choice. Kinda like don't make the crowbar your first choice, try the key first. As far as inheritance, there are two major types, Class inheritance and Interface inheritance (subtyping). Both are appropriate for particular patterns but the latter has come into favor since GoF was published. If you were learning OOP back in the early 90s, you would know about Booch Method and Rumbaugh Method, but nowadays you hear little of those. Yet they were teaching a generation of C++ (and Java) programmers to inherit and override. But state of the art has moved on. We have, through collective experience, found that loose coupling is really found through use of interfaces and dependency injection, and not merely by using encapsulation. The problem with inheritance is too many "OO" programmers think of problems first as potential inheritance solutions. Inheritance is a concrete concept that people understand quickly early on, so many latch onto that and never really grow much more as system modelers. When everything in a system is wedged into an inheritance hierarchy, and code is reused by class inheritance, it makes for some disastrous results and the benefits of OO are outweighed by actually wrestling with the hierarchy itself. 
I've seen many projects, and been responsible for some myself, that spent more time wrestling a huge hierarchy than actually maintaining the sub-system interfaces. This is a sign of inheritance misuse. But that aside, inheritance is invaluable for accomplishing many good things. Polymorphism (specifically sub-type polymorphism) is a very powerful tool and single and double dispatch are key for patterns such as Visitor pattern. Interface inheritance is key to designing to interfaces, not implementations. Abstract classes or interfaces are only useful with inheritance. If your friend thinks that "favour composition over inheritance" is a mantra for avoiding inheritance altogether, he is mistaken and doesn't understand the concept of a complete toolset. Sounds as if he hasn't actually read GoF. If he did claim to read it, then it sounds as if he misunderstood it and should read it again. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260343",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153431/"
]
} |
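A tiny Java sketch of what "favour composition" looks like in practice: behaviour is supplied by a collaborator behind an interface and can be swapped at runtime, instead of being inherited from a base class. All names are invented:
    interface Formatter {
        String format(String text);
    }

    class UpperCaseFormatter implements Formatter {
        public String format(String text) { return text.toUpperCase(); }
    }

    // Composition: Report *has a* Formatter, chosen (or swapped) at runtime,
    // instead of *being a* subclass of some ReportBase just to reuse formatting code.
    class Report {
        private final Formatter formatter;
        Report(Formatter formatter) { this.formatter = formatter; }
        String render(String body) { return formatter.format(body); }
    }

    class CompositionSketch {
        public static void main(String[] args) {
            Report report = new Report(new UpperCaseFormatter());
            System.out.println(report.render("quarterly numbers"));  // QUARTERLY NUMBERS
        }
    }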
260,347 | I translated a GPLv2 C program to Python , but found it was hard to extend as designed and rewrote significant portions of it. The program is now structurally completely different, but there are several verbatim translated functions in use. The Ship of Theseus Paradox (as stated from Wikipedia) "raises the question of whether an object which has had all its components replaced remains fundamentally the same object." If I did manage to write replacements for the verbatim functions, would I be able to relicense to a license I prefer? Related, would I be able to pull the evolved architecture out and reuse it with a different license? I think it would be very useful on its own, but do not like the idea that it is now "tainted" with the GPL license. Followup : I decided to contact the copyright holder and received permission to relicense . Sometimes the best way is to interact socially rather than programmatically! | First, the answer is no (for a translation), you cannot legally relicense it or do anything outside of the original license legalities. You may very well have done 10 times the work of the original author, but it doesn't matter, it is viral. Not just because it is GPL, but because it isn't clean design or rewrite. I struggled briefly with this in 1992 when I had done massive rewrite of an old MUD codebase. We had a successful game, but wanted to do our own thing, and people were willing to pay for it, yet the DikuMUD license strictly forbid us to make money. A competitor, at the time, also had based theirs on the same codebase, and they opted to blatantly ignore the copyright, rip out all traces of it, and basically lie to everyone including themselves. Their logic was "none of the original code exists" and "we have done massive rewrites and improvement" and generally ignoring the fact that they started with 20,000 lines of code. They were charging for items in the game, and making too much money to stop. I was admittedly envious. But I researched copyright law, and consulted my conscience, and decided I could not even use the code I had written because I honestly did not architect the game server from scratch. So I decided to put my money where my mouth was and write from scratch, with a copy of W. Richard Steven's UNIX Network Programming with me at all times, I started. Writing from scratch, my way, taught me so much more than when I had rewritten DikuMUD, and it also taught me that I didn't really understand what it meant to stand on someone else's shoulders. Within six months I had 50,000 lines of operational code that I could call mine. I named it MUD++ and released it under BSD. Badly written in early style C++, it was still the first free, open source C++ MUD that I am aware of. To this day nobody can take it away from me. I had the best TCP server at the time, nobody else could do a "hot reboot" without dropping players, and soon everyone was stealing the feature ( and I've noted many GPL MUDs have snippets of my BSD code -- always interesting how GPL can hijack BSD-ware but not vice-versa ). Eventually, I moved on, so it wasn't like the decision was a make or break for my fortune, but while the other guys made a lot of money for a while, last I looked they had dwindled, in a world of graphical games there isn't much mass demand for text anymore. The story doesn't end... 
a few years later, I was working for IBM and Disney hired us to write a realtime 3D multiplayer game for Epcot center, and I was able to use the TCP core from MUD++ as a base for that game server! Had I not owned my own code, I wouldn't have been allowed to use it, and it honestly saved me weeks of coding time. In the end, I am proud of the choices I made and I have a story to tell my kids. People understate and underestimate the benefit of starting with someone else's framework to build on. If you think you "own" it, test yourself. Start over, with a Python book beside you. See how it feels. Don't cheat and don't look at the old codebase. Look at the output. Force yourself to think through every aspect on your own, doing the honest research. You'll be better for it, and likely have a better product. Before you do that, though, try to contact the original author. Ask them if they would be willing to relicense. If you plan to sell binaries, offer royalties. Many authors who released things GPL in the 90s and 2000s, are now in their 30s, 40s and 50s and understand what it means to make a living at software. I've seen more than one relicense their stuff from GPL to MIT, Apache, Boost or BSD. Lastly, a license doesn't override prior rights to code you may have. Or if you wrote a clean add-on independently, for example, if you wrote a TCP engine as an add-on to a single player Tetris game, and it can cleanly stand alone (especially if you previously released under another license) then you can reuse your code in other projects. You have authorship rights too. My belief is free is FREE. If you gotta attach strings, don't call it free. Someone mailed me years later and said that they had used my game in a commercial engine, mainly the TCP and possibly the bytecode interpreter. They were making money. I didn't mind one bit. I was happy as I still am now, as a proud father. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260347",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153438/"
]
} |
260,409 | I have a project that I'm working on currently using Tomcat, Spring 4, Spring Security, MySQL, and JPA w/ Hibernate. I picked JPA from the standpoint that it's supposed to make swapping out the underlying implementation of ORM providers seamless, or at least less painful. I would say that this mentality of using the spec over the implementation (as with JAX-RS) is the default standpoint of the Java development community. I'm curious if this is really a task worth doing. I'm sure if I used Hibernate directly I would gain some power because I could use features that are not part of the main JPA specification. Part of my concern comes from the idea of YAGNI. I'm essentially programming in a specific style and fashion (using JPA instead of Hibernate) so that at some point in the future I can swap out my ORM implementation. I severely doubt that will ever happen over the life of the product, so I'm essentially putting effort into something that I'll probably never reap the benefits of. What are your thoughts? Is "programming to the interface" worth it when it comes to stuff like JPA? Have you ever actually swapped an entire ORM implementation in a product? Have you ever been able to completely avoid the abstraction from something like JPA leaking anyway? I personally have a single native SQL call already (to clean out the database tables), and there are some things built into the JPA spec I'd like to futz with (get/set prefixes on your methods, and the difference between MEMBER OF/IN) which only binding myself to an underlying implementation gives me any chance of avoiding. | so that at some point in the future I can swap out my ORM
implementation. I severely doubt that will ever happen over the life
of the product, so I'm essentially putting effort into something that
I'll probably never reap the benefits of
In my experience, the typical enterprise project never swaps out the database, the ORM, or the language. I don't know the numbers, but the percentage of 2-tier apps I've seen implement and then change one of those things would probably be less than 10%. If it happens, it is rare, and it usually happens within the first 2-3 months of the project, when people are still in discovery mode. Most systems I know are still running on their original platforms 8-10 years later. More often than swapping out an ORM, do you want to know what I've seen more of? Removing the ORM completely to drop down to SQL due to performance issues. So no matter what, in that case, you're gonna be rewriting stuff. As far as being implementation agnostic, it's a myth. It really amounts to dumbing down. So my approach is to embrace the data layer. Make it a first-class part of your language and design. It makes for happier, more successful projects. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260409",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29912/"
]
} |
260,566 | While investigating the accuracy of floating point numbers, I've seen in a few places a statement similar to " float and double are ( designed for / used often in ) engineering and scientific calculation " From my understanding, the strength of floats and doubles are the amount of memory they use for their (good, but not perfect) precision. I feel like I'm almost getting an understanding from this answer "floating point numbers let you model continuous quantities" I still am not convinced I understand. Engineering and Science both sound like fields where you would want precise results from your calculations, which, from my understanding, floating points do not give. I'm also not sure I follow what a "continuous quantity" is, exactly. Can somebody expand on this explanation and perhaps give an example? | Computation in science and engineering requires tradeoffs in precision, range, and speed. Fixed point arithmetic provides precision, and decent speed, but it sacrifices range. BigNum, arbitrary precision libraries, win on range and precision, but lose on speed. The crux of the matter is that most scientific and engineering calculations need high speed, and huge range, but have relatively modest needs for precision. The most well determined physical constant is only known to about 13 digits, and many values are known with far less certainty. Having more than 13 digits of precision on the computer isn't going to help that. The fly in the ointment is that sequences of floating point operations can gradually lose precision. The bread and butter of numerical analysis is figuring out which problems are particularly susceptible to this, and figuring out clever ways of rearranging the sequence of operations to reduce the problem. An exception to this is number theory in mathematics which needs to perform arithmetic operations on numbers with millions of digits but with absolute precision. Numerical number theorists often use BigNum libraries, and they put up with their calculations taking a long time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260566",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/128417/"
]
} |
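A short Java demonstration of the gradual precision loss the answer mentions: adding a small value to a huge one silently drops it, and summing the same numbers in a different order gives a different result.
    class FloatSketch {
        public static void main(String[] args) {
            double big = 1e16;
            System.out.println(big + 1.0 - big);                          // prints 0.0, not 1.0: the 1.0 is absorbed
            System.out.println((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3));   // prints false: rounding depends on order
        }
    }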
260,710 | I was reading an article about bad programming practices . It mentioned - "Yo-Yo code" that converts a value into a different representation, then converts it back to where it started (eg: converting a decimal into a string and then back into a decimal, or padding a string and then trimming it) I don't understand why the particular example he gives is a bad way to write programs. It seems okay to me to convert back if the situation requires it so that the value can be used. Can anyone explain more about this? | Even if you do need both the numeric and the string representation of a number, it's better to convert just once and also hang on to the original value, instead of converting again every time you need one or the other. The principle is, as always, that code that doesn't exist cannot have subtle defects , while code that exists often does. That may sound paranoid, but experience teaches us that it's appropriate. If you approach programming with a permanent light anxiety of "I'm not really smart enough to understand this complex system", you're on the right track. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260710",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/69069/"
]
} |
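A before/after sketch of the advice above (the context is invented): convert once and keep the representation you need, instead of bouncing between representations.
    class YoYoSketch {
        // Yo-yo: parse, format, then parse again every time both forms are needed.
        static void yoyo(String input) {
            int value = Integer.parseInt(input);
            String label = String.valueOf(value);   // back to a string...
            int again = Integer.parseInt(label);    // ...and back to a number
            System.out.println(label + " -> " + again);
        }

        // Convert once; the original string is still available if you need it.
        static void convertOnce(String input) {
            int value = Integer.parseInt(input);
            System.out.println(input + " -> " + value);
        }

        public static void main(String[] args) {
            yoyo("42");
            convertOnce("42");
        }
    }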
260,778 | From the technical point of view it is possible to add some pre/post push hooks which will run unit tests before allowing some specific commit to be merged to remote default branch. My question is - is it better to keep unit tests in build pipeline (thus, introducing broken commits to repo) or it's better just not to allow "bad" commits to happen. I do realize that I'm not limited with this two options. For instance, I can allow all commits to branched and tests before pushing merge commit to repo. But if you have to choose between exactly this two solutions, which one you'll choose and for what exactly reasons? | No, it's not, for two reasons: Speed Commits should be fast. A commit which takes 500 ms., for example, is too slow and will encourage developers to commit more sparingly. Given that on any project larger than a Hello World, you'll have dozens or hundreds of tests, it will take too much time to run them during pre-commit. Of course, things get worse for larger projects with thousands of tests which run for minutes on a distributed architecture, or weeks or months on a single machine. The worst part is that there is not much you can do to make it faster. Small Python projects which have, say, hundred unit tests, take at least a second to run on an average server, but often much longer. For a C# application, it will average four-five seconds, because of the compile time. From that point, you can either pay extra $10 000 for a better server which will reduce the time, but not by much, or run tests on multiple servers, which will only slow things down. Both pay well when you have thousands of tests (as well as functional, system and integration tests), allowing to run them in a matter of minutes instead of weeks, but this won't help you for small scale projects. What you can do, instead, is to: Encourage developers to run tests strongly related to the code they modified locally before doing a commit. They possibly can't run thousands of unit tests, but they can run five-ten of them. Make sure that finding relevant tests and running them is actually easy (and fast). Visual Studio, for example, is able to detect which tests may be affected by changes done since the last run. Other IDEs/platforms/languages/frameworks may have similar functionality. Keep the commit as fast as possible. Enforcing style rules is OK, because often, it's the only place to do it, and because such checks are often amazingly fast. Doing static analysis is OK as soon as it stays fast, which is rarely the case. Running unit tests is not OK. Run unit tests on your Continuous Integration server. Make sure developers are informed automatically when they broke the build (or when unit tests failed, which is practically the same thing if you consider a compiler as a tool which checks some of the possible mistakes you can introduce into your code). For example, going to a web page to check the last builds is not a solution. They should be informed automatically . Showing a popup or sending an SMS are two examples of how they may be informed. Make sure developers understand that breaking the build (or failing regression tests) is not OK, and that as soon as it happens, their top priority is to fix it. It doesn't matter whether they are working on a high-priority feature that their boss asked to ship for tomorrow: they failed the build, they should fix it. Security The server which hosts the repository shouldn't run custom code, such as unit tests, especially for security reasons. 
Those reasons were already explained in CI runner on same server of GitLab? If, on the other hand, your idea is to launch a process on the build server from the pre-commit hook, then it will slow down the commits even more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260778",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34613/"
]
} |
260,818 | I was wondering about this. Suppose I have a user resource with id and name fields.
If I want to update a field I could just do a PATCH request to the resource like this:
PATCH /users/42
{"name": "john doe"}
And then the application will update user 42's name. But why would the outcome be different if I repeat this request? According to RFC 5789, PATCH is neither safe nor idempotent. | A PATCH request can be idempotent, but it isn't required to be. That is the reason it is characterized as non-idempotent. Whether PATCH can be idempotent or not depends strongly on how the required changes are communicated. For example, if the patch format is in the form of {change: 'Stock' add: -1}, then any PATCH request after the first one would have a different effect than the first request, i.e. a further decrease in the assumed stock of the product. Another reason for non-idempotency can be that applying the modification to something other than the original resource can render the resource invalid. This would then also be the case if you apply the change multiple times. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/260818",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/120488/"
]
} |
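To make the distinction in the answer concrete, a small Java sketch (the Product class and handler methods are invented): a patch that sets a field to a value can be replayed safely, while a patch that changes a field by a delta cannot.
    class Product {
        String name;
        int stock;
    }

    class PatchSketch {
        // Idempotent: applying {"name": "john doe"} twice leaves the same state as applying it once.
        static void applySetName(Product p, String newName) {
            p.name = newName;
        }

        // Not idempotent: applying {"add": -1} twice decrements the stock twice.
        static void applyStockDelta(Product p, int delta) {
            p.stock += delta;
        }

        public static void main(String[] args) {
            Product p = new Product();
            p.stock = 10;
            applySetName(p, "john doe");
            applySetName(p, "john doe");   // no further change
            applyStockDelta(p, -1);
            applyStockDelta(p, -1);        // stock is now 8, not 9
            System.out.println(p.name + " / " + p.stock);
        }
    }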
261,063 | I understand how to work with interfaces and explicit interface implementation in C#, but I was wondering if it's considered bad form to hide away certain members that would not be used frequently. For example: public interface IMyInterface
{
int SomeValue { get; set; }
int AnotherValue { get; set; }
bool SomeFlag { get; set; }
event EventHandler SomeValueChanged;
event EventHandler AnotherValueChanged;
event EventHandler SomeFlagChanged;
}
I know in advance that the events, while useful, would very rarely be used. Thus, I thought it would make sense to hide them away from an implementing class via explicit implementation to avoid IntelliSense clutter. | Yes, it's generally bad form.
Thus, I thought it would make sense to hide them away from an implementing class via explicit implementation to avoid IntelliSense clutter.
If your IntelliSense is too cluttered, your class is too big. If your class is too big, it's likely doing too many things and/or poorly designed. Fix your problem, not your symptom. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/261063",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/114597/"
]
} |
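One way to act on "fix your problem, not your symptom" is to split the interface so the rarely used members live in their own, opt-in contract. This is only a hedged Java sketch (Java has no explicit interface implementation, so the shape differs from the C# original; the names are invented):
    interface MyValues {
        int getSomeValue();
        void setSomeValue(int value);
    }

    // The rarely used notification members get their own contract.
    interface MyValueNotifications {
        void addSomeValueChangedListener(Runnable listener);
    }

    // Callers that only need the values depend on MyValues and never see the event members.
    class MyComponent implements MyValues, MyValueNotifications {
        private int someValue;
        public int getSomeValue() { return someValue; }
        public void setSomeValue(int value) { this.someValue = value; }
        public void addSomeValueChangedListener(Runnable listener) { /* register it */ }
    }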
261,269 | I am trying to make a very simple todo list application with PHP, MySQL, Jquery templating and JSON... However, my schema seems to complicate things in JSON. What's the best way to do it? A new table for each list, containing the items. or a table for lists, and a table for items that are joined somehow? Because I have tried this and it doesn't seem like the right way to do it? Example http://jsfiddle.net/Lto3xuhe/ | There's a joke I heard awhile back: Q How does a BASIC coder count to 10? A 1,2,3,4,5,6,7,8,9,10 Q How does a C coder count to 10? A 0,1,2,3,4,5,6,7,8,9 Q How does a DBA count to 10? A 0,1,many The truth behind this joke is that once you have two (or more) of the same thing in a database structure (columns or tables), you're doing it wrong. A schema that looks like: +----------+
| id |
| name |
| phone1 |
| phone2 |
| |
+----------+
Is wrong because where will you put a third phone number if someone has it? The same applies to tables themselves. It's also a Bad Thing to be modifying the schema at runtime, which the "new table for each list" seems to imply. (Related: MVC4 : How to create model at run time? ) And thus, the solution is to create a todo list that is comprised of two tables. There are two things you have - lists and items. So, let's make a table structure that reflects this:
+----------+      +-------------+
| List | | Task |
+----------+ +-------------+
| id (pk) <---+ | id (pk) |
| name | +---+ listid (fk) |
| | | desc |
| | | |
+----------+      +-------------+
The list has an id (the primary key for the list) and a name. The task has an id (the primary key), a listid (a foreign key), and the description of the task. The foreign key relates back to the primary key of another table. I will point out that this doesn't begin to encompass all the possibilities in various requirements for the software and the table structure to support it. Completed, due date, repeating, etc... these are all additional structures that will likely need to be considered when designing the table. That said, if the table structure isn't one that is appropriately normalized (or you haven't recognized the tradeoffs you've made because it's not normalized), you will have many headaches later. Now, all that relates to writing this as a relational database. But that's not the only type of database out there. If you consider a list to be a document, the document-styled NoSQL databases may also offer an approach that isn't wrong. While I'm not going to delve into it too far, there are numerous tutorials out there for todo lists in couch. One such that came up with a search is A simple Task-list application in CouchDB. Another shows up in the CouchDB wiki: Proposed Schema For To-Do Lists. In the approach appropriate for a couch, each list is a JSON document stored in the database. You would just put the list in a JSON object, and put it in the database. And then you read from the database. The JSON could look like:
[
{"task":"get milk","who":"Scott","dueDate":"2013-05-19","done":false},
{"task":"get broccoli","who":"Elisabeth","dueDate":"2013-05-21","done":false},
{"task":"get garlic","who":"Trish","dueDate":"2013-05-30","done":false},
{"task":"get eggs","who":"Josh","dueDate":"2013-05-15","done":true}
]
(from creating a shopping list with a json file on Stack Overflow). Or something approaching that. There is some other record keeping that couch has as part of the document. The thing is, it's not the wrong way to approach it, and a todo list in a document database may be perfectly suited to what you are trying to do, with less concept overhead for how to do it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/261269",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/139703/"
]
} |
261,305 | I'm trying to better organize my application architecture, so I've been doing some reading, but I keep running into references to "Business Logic" and "Business Rules". I've never really understood what these actually are. I generally just focus on Use Cases and "User Stories". Could someone explain what Business Logic and Business Rules are, and how they are related to Use Cases? All the definitions I've found seem to pertain to actual businesses, not software development. Because software is not always representative of a business, does that mean that software does not always have business logic? Or... | People use the terms "business rule" and "business logic" to refer to the portion of your application that is specific to your application and represents the core behavior of how things are supposed to work as opposed to generic functionality that could be useful in software written for a different client/business/customer base or code that exists to support the infrastructure of the application. Often business logic is subject to change when the needs of the customer change, so we like to put it in a special place/tier so that we can modify it as needed. Although the term seems to imply otherwise, non-business software also has business logic. For example, a rule that states that "when a user does xyz, the application should validate something" can be classified as a business rule. Utility code, such as parsing/processing/data access and such would not be considered business logic. It's kind of a nebulous term and could mean different things to different people in different contexts. It's not worth getting hung up on. The general idea is to separate your application into logical portions, each of which is responsible for something specific. How exactly this is done is something you learn from experience and working on well-designed large applications. But there aren't any hard and fast rules. Ask three good developers and you'll get six opinions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/261305",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147287/"
]
} |
261,552 | I am implementing a RESTful web service and one of the available actions will be reload. It will be used to reload configurations, cache, etc. We started with a simple GET to a URI like this: ${path}/cache/reload (no parameters are passed, only the URI is called). I am aware that data should not be modified with a GET request. Which is the correct verb to use to invoke an action/command in a RESTful web service? The reload is a command of the REST web service that reloads its own cache/configuration/etc. It is not a method that returns information to the client. Probably what I am trying to do isn't REST, but it is still something that needs to be done this way. The reload method was only a real example that makes sense in the scope of the application and most answers focused on it, but in fact, I just needed to know which verb to use to trigger an action that doesn't do CRUD, but still changes data/state. I found this detailed answer on Stack Overflow about the subject: https://stackoverflow.com/questions/16877968/ | If you want to be RESTful, don't think of the verb to carry out an action, think of the state you want the resource to be in after the client has done something. So using one of your examples above, you have an email queue that is sending emails. You want the client to put that email queue into the state of paused or stopped or something. So the client PUTs a new state to the server for that resource. It can be as simple as this JSON:
PUT http://myserver.com/services/email_service HTTP/1.1
Content-Type: text/json
{"status":"paused"}
The server figures out how to get from the current status (say "running") to "paused" status/state. If the client does a GET on the resource, it should return the state it is currently in (say "paused"). The reason to do it this way, and why REST can be so powerful, is that you leave the HOW to get to that state up to the server. The client just says "This is the state you should be in now" and the server figures out how to achieve that. It might be a simple flip in a database. It might require thousands of actions. The client doesn't care, and doesn't have to know. So you can completely rewrite/redesign how the server does that and the client doesn't care. The client only needs to be aware of the different states (and their representations) of a resource, not any of the internals. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/261552",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31397/"
]
} |
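To make the PUT example in the answer above concrete, here is a minimal sketch assuming Flask; the resource path and the states come from the answer, everything else is an assumption of this illustration:

from flask import Flask, jsonify, request

app = Flask(__name__)
email_service = {"status": "running"}  # current state of the resource

@app.route("/services/email_service", methods=["GET"])
def get_state():
    return jsonify(email_service)

@app.route("/services/email_service", methods=["PUT"])
def set_state():
    desired = request.get_json(force=True).get("status")
    if desired not in ("running", "paused", "stopped"):
        return jsonify({"error": "unknown status"}), 400
    # The server decides HOW to reach the desired state (stop workers,
    # flush queues, reload caches, ...); the client only names the state.
    email_service["status"] = desired
    return jsonify(email_service)

if __name__ == "__main__":
    app.run()

A GET on the same URI then returns whatever state the resource is currently in, which is all the client ever needs to know.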
261,585 | I often run into this problem, especially in Java, even though I think it's a general OOP issue. That is: raising an exception reveals a design problem. Suppose that I have a class that has a String name field and a String surname field. Then it uses those fields to compose the complete name of a person in order to display it on some sort of document, say an invoice. public String name;
public String surname;
public String getCompleteName() {return name + " " + surname;}
public void displayCompleteNameOnInvoice() {
String completeName = getCompleteName();
//do something with it....
} Now I want to strengthen the behavior of my class by throwing an error if displayCompleteNameOnInvoice is called before the name has been assigned. It seems like a good idea, doesn't it? I could add exception-raising code to the getCompleteName method.
But in this way I'm violating an 'implicit' contract with the class user; in general, getters aren't supposed to throw exceptions if their values aren't set. Ok, this is not a standard getter since it does not return a single field, but from the user's point of view the distinction may be too subtle to think about. Or I could throw the exception from inside displayCompleteNameOnInvoice . But to do so I would have to test the name or surname fields directly, and doing so would violate the abstraction represented by getCompleteName . It's this method's responsibility to check and create the complete name. It could even decide, basing the decision on other data, that in some cases the surname alone is sufficient. So the only possibility seems to be to change the semantics of the method getCompleteName to composeCompleteName , which suggests a more 'active' behavior and, with it, the ability to throw an exception. Is this the better design solution? I'm always looking for the best balance between simplicity and correctness. Is there a design reference for this issue? | Do not permit your class to be constructed without assigning a name. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/261585",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28667/"
]
} |
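The one-line answer above ("do not permit your class to be constructed without assigning a name") in sketch form; Python is used here only to keep the example short, and the validation details are my own assumption:

class Person:
    def __init__(self, name, surname):
        if not name or not surname:
            raise ValueError("Person requires both a name and a surname")
        self.name = name
        self.surname = surname

    def complete_name(self):
        # Safe to compose: the constructor guarantees both fields are set.
        return f"{self.name} {self.surname}"

invoice_holder = Person("Ada", "Lovelace")
print(invoice_holder.complete_name())   # "Ada Lovelace"
# Person("", "Lovelace") raises ValueError at construction time,
# long before any invoice tries to display an incomplete name.

The invariant is enforced once, at the only place an object can come into existence, so neither the getter nor the display method ever needs to throw.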
261,904 | Sometimes I have clients that, from the start, give you a huge flow of words about "how the task should be done in detail", like "you must use SQL Server here", "you should do this in a do-while loop", "you should make two functions like that, and they should run in parallel", etc. But they never give you a word about "what is this task about?" and "why is it needed?". Even if you ask them these questions directly, they can only say "I don't know, but this work should be done". Doing these jobs is really hard and dispiriting, because you don't understand the project as part of some system - there is no system yet, and maybe there never will be. Is there any way to deal with this kind of client? | Fire them. Refer them to your worst enemy. You don't need the headaches. If they don't know what they want, neither will you, and the amount of trouble you will go through, finding out what is needed, IS NOT WORTH whatever they're planning on paying you. Many years ago, a drinking buddy of mine gave me a piece of wisdom. When considering undertaking any project, ask three questions: What is the problem we are trying to solve? What are the deliverables? How will I know I am finished? If the potential client cannot give clear, cogent, complete answers to ALL THREE questions, neither he nor you are ready to start work on the project. ANY work you put into it at this point must be directed at answering those three questions, AND NO MORE. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/261904",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/134483/"
]
} |
262,019 | I'm cleaning up the includes in a C++ project I'm working on, and I keep wondering whether or not I should explicitly include all headers used directly in a particular file, or whether I should only include the bare minimum. Here's an example, Entity.hpp : #include "RenderObject.hpp"
#include "Texture.hpp"
struct Entity {
Texture texture;
RenderObject render();
}; (Let's assume that a forward declaration for RenderObject is not an option.) Now, I know that RenderObject.hpp includes Texture.hpp - I know that because each RenderObject has a Texture member. Still, I explicitly include Texture.hpp in Entity.hpp , because I'm not sure if it's a good idea to rely on it being included in RenderObject.hpp . So: Is it good practice or not? | You should always include all headers defining any objects used in a .cpp file in that file regardless of what you know about what's in those files. You should have include guards in all header files to make sure that including headers multiple times does not matter. The reasons: This makes it clear to developers who read the source exactly what the source file in question requires. Here, someone looking at the first few lines in the file can see that you are dealing with Texture objects in this file. This avoids issues where refactored headers cause compilation issues when they no longer require particular headers themselves. For instance, suppose you realize that RenderObject.hpp doesn't actually need Texture.hpp itself. A corollary is that you should never include a header in another header unless it is explicitly needed in that file. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262019",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53493/"
]
} |
262,045 | I have used SignalR to achieve real-time messaging functionality in several of my projects. It seems to work reliably and is very easy to learn to use. The temptation, at least for me, is to abandon developing a Web API service and use SignalR for everything. I feel like this could be achieved by thoughtful design, and if it were, it would mean far less client code would be necessary. More importantly, it would mean that there would be a single interface to services rather than a split interface, and in the worst case, that one could wire this up without thinking about when things get rendered, etc. So, I would like to know: Is there any other reason not to use SignalR in lieu of all web services besides performance? Is SignalR's performance sufficiently concerning that it would not make sense to do so? It has long been a dream of mine to be able to translate server-side object and service definitions to client-side service access code without something silly like node.js . For instance, if I define an interesting object InterestingObject and a service to CRUD the object InterestingObjectService , I can define a standard URL route to the service - say, "/{serviceName}/{methodName}" - but I still need to write client code to access the service. Since the object is going to be passed from client to server and back, there is no practical reason to have to define the object explicitly in client-side code, nor should there be a need to explicitly define the routes to perform CRUD operations. I feel like there should be a way to standardize all of this so that it is possible to write a client under the assumption that service access works from the client to the server and back as transparently as it would if I were writing a WinForms or Java Applet or Native App or what have you. If SignalR is good enough to use in lieu of a traditional web service, it may be a viable way to achieve this. SignalR already includes functionality to make the hub work like service I describe, so I could define a common base (CRUD) service that would offer all of this functionality out-of-the-box with some reflection. Then I could almost take for granted the service access, saving me the annoyance of re-writing code to access something that could be accessed by convention - and more importantly, the time I would have to spend writing code to define how this is updated in the DOM. After reading my edit I feel like it may be a little nonsensical so please feel free to ask me if you have questions about what I am getting at. Basically, I want the service access to be as transparent as possible. | Those two technologies have a very different purpose. REST is for ordinary calls to an API, with client being an active actor of the exchange. When the client needs to find GPS coordinates of an address, the client initiates the call to the API and waits until it receives the coordinates, or a error occurs, or a timeout elapses. Web sockets are for everything which needs to do things the opposite way. For example, when I use an intranet website which shows me in real time the logs and the performance of different servers, the client may be passive and wait until the server sends him a newly published log message or performance metrics. The difference is clear: in the first case, the client decides when it needs a specific piece of information; in the second case, the client simply waits to be contacted, and may not know when it would be. 
In some ways, both are interchangeable: you can implement web sockets when you don't need them (i.e. the client will call the server through web sockets instead of making a REST call) and you can use polling or long polling as a substitute for web sockets (given that this was used successfully for years until web sockets became so popular). But their interchangeability comes at a cost: When you use polling or long polling instead of web sockets, you're often wasting bandwidth. When you use web sockets to do what can be done through a web API, you keep all the connections from all the active clients open, which may not be what you really want. For a small website where you expect to have at most 5 clients at the same time, this is not an issue. For a service such as Amazon AWS, this wouldn't be easy to solve technically. Don't use web sockets when you don't need them. To get GPS coordinates of an address, I gain nothing by opening a web sockets connection, making the call, waiting for an answer and closing the connection: REST fulfills my needs for such scenarios. If you find yourself repeatedly and frequently checking for information through a REST call to a service, this may be a good sign that you should move to web sockets. Similarly, Stack Overflow reduces bandwidth usage by using web sockets, since it keeps people from having to press F5 on the home page to see if they have new messages. If you find that you open web sockets connections, use them to make a single call, and then close them, or if your connections remain open but the server only sends something to the client at the client's request, switch to REST. Also, web sockets still have limited support and are not always easy to implement. While SignalR makes it easy to implement, this doesn't mean that you won't have any difficulties implementing it in other languages/contexts/environments. With REST, that's easy: it may be a curl call or a similar feature available in every mainstream language. With web sockets, you can't be sure how long it would take to make a client using [insert the name of a language you don't know yet here]. I've used web sockets in several projects in .NET, Python and node.js. In .NET, it wasn't too difficult, but still, I spent a few days trying to figure out some cryptic problems, such as the connection being dropped as soon as it was opened. (This was prior to SignalR; I never tried SignalR). I also used WCF in web sockets mode, which wasn't without issues either (but I believe that WCF always comes with issues). In node.js, this was doable, but I had to switch libraries twice until I found one that worked. I believe I spent at least a week trying to make a web sockets Hello World. In Python, I tried once, spent two or three days, and abandoned. It never worked. Compare this to REST: the only problems one can encounter with a new language/framework are figuring out how to POST files or receive a very large binary response. I remember spending a few hours searching for solutions for some languages. Still, a few hours for a special case is nothing compared to days or weeks for a simple Hello World. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262045",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51007/"
]
} |
262,057 | I have a private (i.e. no chance of sharing the source) and commercial application, now I would like to use a library which is under the Apache 2.0 license . I've read the Apache license and FAQ section, but I am not clear about this. Is it the same as GPL3 which forces the application to provide the source code? | The Apache 2.0 license is very different from the GPL license, in at least two aspects: Under the Apache 2.0 license, you are allowed to distribute binaries without providing the source code with it. (Under the GPL, you must always provide the source code) The GPL license carries over to the entire application. The Apache 2.0 license does not and applies only to those parts that explicitly state they fall under the Apache 2.0 license. This means that if you use a library with Apache 2.0 license in your project, the permissions/rights/obligations from the Apache 2.0 license do not suddenly carry over to your code. To distribute a (binary or unmodified) copy of an Apache 2.0 licensed library with your application, you must meet two requirements: The users of your application must receive a copy of the Apache 2.0 license. To avoid confusion, you should also state which parts of the distribution the license applies to. The users of your application must receive a copy of the NOTICES file that came with the library, if there is such a file. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262057",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
262,227 | According to the Wikipedia article , UTF-8 has this format: First code Last code Bytes Byte 1 Byte 2 Byte 3 Byte 4
point point Used
U+0000 U+007F 1 0xxxxxxx
U+0080 U+07FF 2 110xxxxx 10xxxxxx
U+0800 U+FFFF 3 1110xxxx 10xxxxxx 10xxxxxx
U+10000 U+1FFFFF 4 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
x means that this bit is used to select the code point. This wastes two bits on each continuation byte and one bit in the first byte. Why is UTF-8 not encoded like the following? First code Last code Bytes Byte 1 Byte 2 Byte 3
point point Used
U+0000 U+007F 1 0xxxxxxx
U+0080 U+3FFF 2 10xxxxxx xxxxxxxx
U+0800 U+1FFFFF 3 110xxxxx xxxxxxxx xxxxxxxx It would save one byte when the code point is out of the Basic Multilingual Plane or if the code point is in range [U+800,U+3FFF]. Why is UTF-8 not encoded in a more efficient way? | This is done so that you can detect when you are in the middle of a multi-byte sequence. When looking at UTF-8 data, you know that if you see 10xxxxxx , that you are in the middle of a multibyte character, and should back up in the stream until you see either 0xxxxxx or 11xxxxxx . Using your scheme, bytes 2 or 3 could easily end up with patters like either 0xxxxxxx or 11xxxxxx Also keep in mind that how much is saved varies entirely on what sort of string data you are encoding. For most text, even Asian text, you will rarely if ever see four byte characters with normal text. Also, people's naive estimates about how text will look are often wrong. I have text localized for UTF-8 that includes Japanese, Chinese and Korean strings, yet it is actually Russian that takes most space. (Because our Asian strings often have Roman characters interspersed for proper names, punctuation and such and because the average Chinese word is 1-3 characters while the average Russian word is many, many more.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262227",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/155583/"
]
} |
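The self-synchronization property described in the answer above can be shown directly; a small Python illustration of my own (not part of the original answer):

def is_continuation(byte):
    # True for bytes of the form 10xxxxxx, which in UTF-8 can only
    # appear inside a multi-byte sequence.
    return byte & 0b11000000 == 0b10000000

def start_of_character(data, i):
    # Back up from an arbitrary offset to the start of the character,
    # exactly as the answer describes.
    while i > 0 and is_continuation(data[i]):
        i -= 1
    return i

data = "héllo".encode("utf-8")      # b'h\xc3\xa9llo'
print(start_of_character(data, 2))  # 1: byte 2 is the continuation byte of 'é'
print(start_of_character(data, 3))  # 3: 'l' is a single-byte (ASCII) character

With the denser encoding proposed in the question, a byte such as 0x41 could be either a stand-alone 'A' or the trailing byte of a two-byte character, so this kind of resynchronization would be impossible.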
262,346 | Suppose I have a function that does things with a text file - for example reads from it and removes the word 'a'. I could either pass it a filename and handle the opening/closing in the function, or I could pass it the opened file and expect that whoever calls it would deal with closing it. The first way seems like a better way to guarantee no files are left open, but prevents me from using things like StringIO objects The second way could be a little dangerous - no way of knowing if the file will be closed or not, but I would be able to use file-like objects def ver_1(filename):
with open(filename, 'r') as f:
return do_stuff(f)
def ver_2(open_file):
return do_stuff(open_file)
print ver_1('my_file.txt')
with open('my_file.txt', 'r') as f:
print ver_2(f) Is one of these generally preferred? Is it generally expected that a function will behave in one of these two ways? Or should it just be well documented such that the programmer can use the function as appropriate? | Convenient interfaces are nice, and sometimes the way to go. However, most of the time good composability is more important than convenience , as a composable abstraction allows us to implement other functionality (incl. convenience wrappers) on top of it. The most general way for your function to use files is to take an open file handle as parameter, as this allows it to also use file handles that are not part of the filesystem (e.g. pipes, sockets, …): def your_function(open_file):
return do_stuff(open_file) If spelling out with open(filename, 'r') as f: result = your_function(f) is too much to ask of your users, you could choose one of the following solutions: your_function takes an open file or a file name as parameter. If it is a filename, the file is opened and closed, and exceptions propagated. There is a bit of an issue with ambiguity here which could be worked around using named arguments. Offer a simple wrapper that takes care of opening the file, e.g. def your_function_filename(file):
with open(file, 'r') as f:
return your_function(f) I generally perceive such functions as API bloat, but if they provide commonly used functionality, the gained convenience is a sufficiently strong argument. Wrap the with open functionality in another composable function: def with_file(filename, callback):
with open(filename, 'r') as f:
return callback(f) used as with_file(name, your_function) or in more complicated cases with_file(name, lambda f: some_function(1, 2, f, named=4)) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262346",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/143011/"
]
} |
262,571 | I am not new to programming, but I am one that started a few years ago, and I do love templates. But in the before times, how did people deal with situations where they needed compile-time code generation like templates? I'm guessing horrible, horrible macros (at least that's how I'd do it), but googling the above question only gets me pages and pages of template tutorials. There are many arguments against using templates, and while it typically boils down to readability, " YAGNI ", and complaining about how poorly it is implemented, there is not a lot out there on the alternatives with similar power. When I do need to do some sort of compile-time generics and do want to keep my code DRY , how does/did one avoid using templates? | Besides the void * pointer which is covered in Robert's answer , a technique like this was used (Disclaimer: 20 year old memory): #define WANTIMP
#define TYPE int
#include "collection.h"
#undef TYPE
#define TYPE string
#include "collection.h"
#undef TYPE
int main() {
Collection_int lstInt;
Collection_string lstString;
} Where I have forgotten the exact preprocessor magic inside collection.h , but it was something like this (note that the ## token-pasting operator only works inside a macro definition, so a small helper is needed to build the class name): #define PASTE_(a, b) a##b
#define PASTE(a, b) PASTE_(a, b)
#define COLLECTION PASTE(Collection_, TYPE)  /* expands to Collection_int, Collection_string, ... */
class COLLECTION {
public:
COLLECTION () {}
void Add(TYPE value);
private:
TYPE *list;
size_t n;
size_t a;
};
#ifdef WANTIMP
void COLLECTION ::Add(TYPE value) { /* grow the array and append value... */ }
#endif | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262571",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108367/"
]
} |
262,855 | I've been doing web programming for a long time now, and somewhere, I lost track of why we are doing what we are doing today (or how did we come to do things this way)? I started out with basic ASP web development, and very early on, display and business logic were mixed on the page. Client-side development varied wildly (VBScript, different flavors of JavaScript), and we had plenty of warning about server-side validations (and so I stayed away from client-side logic). I then moved to ColdFusion for a while. ColdFusion was probably the first web development framework that separated display and business logic using their tags. It seemed very clear to me, but very verbose, and ColdFusion was not in high market demand, and so I moved on. I then jumped on the ASP.NET band wagon and started using their MVC approach. I also realized Java seemed to be an ivory tower language of enterprise systems, and also tried their MVC approach. Later on, ASP.NET developed this MVVM design pattern, and Java (precisely, J2EE or JEE) also struggled and came out with its MVC2 approaches. But today, what I have discovered is that backend programming is not where the excitement and progress is anymore. Also, server-side based MVC practices seem to be obsolete (do people really use JSTL anymore?). Today, in most projects that I am on, I found out that JavaScript frameworks and client-side development is where all the exciting and innovative progresses are being made. Why has this movement from server to client-side development taken place? I did a simple line count of one of my JEE projects, and there are more lines of code in JavaScript than Java (excluding third-party libraries). I find that most backend development using programming languages such as Java or C# is simply to produce a REST-like interface, and that all the hard effort of display, visualization, data input/output, user interactions, etc... are being addressed via client-side framework like Angular, Backbone, Ember, Knockout, etc... During the pre-jQuery era, I saw plenty of diagrams where there was a clear, conceptual line between the M, V, and C in MVC in n-tier development. Post-jQuery, where are these lines drawn? It seems MVC and MVVM are all right there in JavaScript code, client-side. What I want to know is, why did we make such a transition (from the emphasis of server-side programming to client-side, from favoring compiled languages to scripting languages, from imperative to functional programming, all of these seem to have occurred simultaneously) and what problems did this transition/shift solve? | Shifting the computing load between the server and the client is a cyclical phenomenon, and has been so for quite some time. When I was in community college the Personal Computer was just getting a head of steam. But Ethernet was not in widespread use yet, and nobody had a local area network. Back then, the college had a mainframe that handled student records and served as a platform for programming classes. The administration had terminals that were connected to the mainframe on a time sharing basis, but the students had to punch cards to get their programming assignments done, an arduous process. Eventually, they put in a lab where the students could sign up for time on a terminal, but it still took maybe a half hour or so to get your half-inch thick printout of errors. All of the processing work was done on the mainframe (the server). 
But mainframes were expensive, so companies started putting up local area networks, and the processing load shifted from the server to the individual client machines, which were powerful enough to run individual word processing, spreadsheet and line of business applications, but not powerful enough to share their processing power with others. The server was a similar machine with similar capabilities (perhaps more memory and hard drive space), but was mostly used to share files. This was called Client/Server. Most of the processing had shifted to the client computers. One of the drawbacks of doing all of the processing on the client machines was that you got locked into this perpetual cycle of software installation and upgrades, and all of the headaches that go with that. The programming model of these machines (event-based, code-behind user interfaces) encouraged the creation of messy, difficult to maintain programs (big balls of mud). Most end-users didn't have the skills to maintain their hardware and software properly, necessitating armies of IT maintenance personnel. As the computers became increasingly more powerful, divisions of labor became possible. Now you could have file servers, database servers, web servers, print servers and so on. Each machine could be somewhat optimized for the task it was provided, and maintained by someone with the requisite expertise. Programs could be written that ran in the web browser, so client installations were no longer required. This was called Multi-Tier or n-Tier. Browsers were essentially used as dumb terminals, just like in the mainframe days, though the method of communicating with the server was more sophisticated, less proprietary, and based on interrupt mechanisms rather than time-sharing and polling. Processing had shifted back to the server(s). However, web development came with a whole new set of headaches. Most line of business applications written for the browser were static forms and reports. There was very little interactivity available in the UI. Javascript hadn't found its second wind yet, and there were major problems with browser incompatibilities that discouraged its widespread adoption. However, things have gotten much better. HTML5 and CSS3 provide substantial new capabilities to the browser programming model, jQuery came out and helped a whole generation of programmers discover how useful Javascript could be. New front-end UI frameworks emerged. It became possible to write highly-interactive UI's in the browser, even complete games. Processing shifted back to the client again. Today, you can buy processing power in the cloud, as much or as little as you like, and run programs on the server. I'd say we're now in a place where, as a software developer, you have lots of choices about where you can execute your processing power, both on the client and on the server. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262855",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/156226/"
]
} |
262,906 | While creating a RESTful API , should I use HTTP Verbs on the same URL (when it's possible) or should I create a specific URL per action? For example: GET /items          # Read all items
GET /items/:id # Read one item
POST /items # Create a new item
PUT /items/:id # Update one item
DELETE /items/:id # Delete one item Or with specific URLs like: GET /items # Read all items
GET /item/:id # Read one item
POST /items/new # Create a new item
PUT /item/edit/:id # Update one item
DELETE /item/delete/:id # Delete one item | In your latter scheme, you keep verbs in the URLs of your resources. This should be avoided as the HTTP verbs should be used for that purpose. Embrace the underlying protocol instead of ignoring, duplicating or overriding it. Just look at DELETE /item/delete/:id , you place the same information twice in the same request. This is superfluous and should be avoided. Personally, I'd be confused with this. Does the API actually support DELETE requests? What if I place delete in the URL and use a different HTTP verb instead? Will it match anything? If so, which one will be chosen? As a client of a properly designed API, I shouldn't have to ask such questions. Maybe you need it to somehow support clients that cannot issue DELETE or PUT requests. If that's the case, I'd pass this information in an HTTP header. Some APIs use an X-HTTP-Method-Override header for this specific purpose (which I think is quite ugly anyway). I certainly wouldn't place the verbs in the paths though. Go for GET /items # Read all items
GET /items/:id # Read one item
POST /items # Create a new item
PUT /items/:id # Update one item
DELETE /items/:id # Delete one item What's important about the verbs is that they are already well-defined in the HTTP specification and staying in line with these rules allows you to use caches, proxies and possibly other tools external to your application that understand the semantics of HTTP but not your application semantics. Please note that the reason you should avoid having them in your URLs is not about RESTful APIs requiring readable URLs. It's about avoiding unnecessary ambiguity. What's more, a RESTful API can map these verbs (or any subset thereof) to any set of application semantics, as long as it doesn't go against the HTTP specification. For example, it is perfectly possible to build a RESTful API that only uses GET requests if all operations that it allows are both safe and idempotent . The above mapping is just an example that fits your use case and is compliant with the spec. It doesn't necessarily have to be like this. Please also mind that a truly RESTful API should never require a programmer to read extensive documentation of available URLs as long as you conform to the HATEOAS (Hypertext as the Engine of Application State) principle, which is one of the core assumptions of REST . The links can be utterly incomprehensible to humans as long as the client application can understand and use them to figure out possible application state transitions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262906",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/130282/"
]
} |
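A minimal sketch of the scheme recommended above, same path, different HTTP verbs, assuming Flask; the in-memory item store is a stand-in for a real one:

from flask import Flask, jsonify, request

app = Flask(__name__)
items = {}   # id -> item; a stand-in for a real data store

@app.route("/items", methods=["GET", "POST"])
def collection():
    if request.method == "POST":
        item = request.get_json(force=True)
        item_id = str(len(items) + 1)
        items[item_id] = item
        return jsonify({"id": item_id, **item}), 201
    return jsonify(list(items.values()))

@app.route("/items/<item_id>", methods=["GET", "PUT", "DELETE"])
def single(item_id):
    if request.method == "PUT":
        items[item_id] = request.get_json(force=True)
    elif request.method == "DELETE":
        items.pop(item_id, None)
        return "", 204
    return jsonify(items.get(item_id, {}))

if __name__ == "__main__":
    app.run()

Notice that the paths never contain a verb; the HTTP method alone tells the server (and any cache or proxy in between) what kind of operation is being requested.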
262,916 | Recently Microsoft has released a free version of Visual Studio: Visual Studio Community Edition the license says IF YOU COMPLY WITH THESE LICENSE TERMS, YOU HAVE THE RIGHTS BELOW. INSTALLATION AND USE RIGHTS. a. Individual license. If you are an individual working on your own
applications to sell or for any other purpose, you may use the
software to develop and test those applications. b. Organization licenses. If you are an organization, your users may
use the software as follows: · Any number of your users may use the software to develop and
test your applications released under Open Source Institute
(OSI)-approved open source software licenses. · Any number of your users may use the software to develop and
test your applications as part of online or in person classroom
training and education, or for performing academic research. · If none of the above apply, and you are also not an enterprise
(defined below), then up to 5 of your individual users can use the
software concurrently to develop and test your applications. · If you are an enterprise, your employees and contractors may
not use the software to develop or test your applications, except for
open source and education purposes as permitted above. An “enterprise”
is any organization and its affiliates who collectively have either
(a) more than 250 PCs or users or (b) more than one million US dollars
(or the equivalent in other currencies) in annual revenues, and
“affiliates” means those entities that control (via majority
ownership), are controlled by, or are under common control with an
organization. c. Demo use. The uses permitted above include use of the software in
demonstrating your applications. d. Backup copy. You may make one backup copy of the software, for
reinstalling the software. As an "Individual" I'm interested in clause "a", however it's not that clear and explicit. To me it sounds a bit restrictive, as it does not cover a wide range of usage (open source, freelance work, contribution to applications you don't own, etc.); the confusion comes exactly from the term ' OWN ' used in the sentence.
I may be misinterpreting the whole thing as English is not my native language.
So how would you interpret the sentence?
Can we assume that we can use the software if the license does not make it clear, for example by saying "it's not allowed to use it in this or that scenario", as it does for "Enterprises" in clause "b"? | It looks like the size of your client is important. From Visual Studio 2013 and MSDN Licensing Whitepaper - November-2014 page 10: "Example 2: A Fortune 500 firm has outsourced the development of its store-locator mobile application to a small agency. The application is not an open source project. The agency has 5 employees working on the project and would like to use Visual Studio Community 2013. Since the agency is a contractor developing this application for the Fortune 500 firm, and since the application is not an open source project, the agency cannot use Visual Studio Community 2013 for developing and testing the application. " So your small team can't develop a customized app for a big company. I don't know about boxed apps, and I don't know about "individual". I've done some more research and it looks like small teams can sell apps built with VS2013Comm. There are no restrictions in the EULA on who can buy it. I guess the key words are sell and outsource . When you sell, it's still your app. While outsourcing, usually the app isn't yours but the client's.
That's my story and I'm stickin to it. Let me know if you think I'm wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/262916",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/83927/"
]
} |
263,057 | Wikipedia says that a closure is a function which has access to variables declared outside of the function. There is even an example: function startAt(x)
function incrementBy(y)
return x + y
return incrementBy
variable closure1 = startAt(1)
variable closure2 = startAt(5) But according to most programming languages (including python, javascript, swift, etc.) the next example is correct (written in python): # Script starts
test = "check"
def func(test2):
def func2():
return test2 == test
return func2()
print func("check") // returns TRUE
# Script ends Basically, func is not a closure, but it obviously uses variable test , declared outside of the function. Does that mean func IS a closure? Even in C++ you can run this: std::string test = "check";
bool func(std::string test2) {
if (test2 == test)
return true;
else
return false;
}
int main() {
if (func("check"))
std::cout << "TRUE" << std::endl; // prints TRUE
} Eventually, this makes every function a closure. Where am I wrong? | No, not every function is a closure. Wikipedia says: ... closure ... is a function or reference to a function together with a referencing environment — a table storing a reference to each of the non-local variables (also called free variables or upvalues) of that function. I'd add "non-local and non-global", but the idea is correct. Neither your C++ nor Python examples use closures. In both cases it's just that scoping rules allow functions to see their outer scope and global scope. "Closure" happens in the 1st example - incrementBy is constructed in and then returned from its outer function, capturing argument x . When you assign variable closure1 = startAt(1) , you end up having a closure (function) inside the closure1 var which captured the argument, whose value happened to be 1 , so when you call closure1(2) the result is 3 ( 1 + 2 ). Think of it as memorizing some information about the closure's declaration scope: incrementBy retains a memory of the insides of startAt , specifically the value of its argument x . In lambda calculus, as I know, those "non-local" variables are called "free", and functions with free variables are called "open terms". Closure is the process of "closing" open terms by "fixing" the values of those free variables in the aforementioned "environment table". Hence the name. It's worth noting that in Python and JS closure happens implicitly, while in PHP you have to explicitly tell which variables you want to close over (capture): http://php.net/manual/en/functions.anonymous.php - note the use keyword in the declarations: // equivalent to the 1st example
function startAt($x) { // vvvvvvvv vv
$incrementBy = function ($y) use ($x) { return $x + $y; };
return $incrementBy;
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263057",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/122150/"
]
} |
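To underline the distinction the answer above makes, here is the startAt example in Python next to the global-reading func from the question; the __closure__ inspection is my own addition:

def start_at(x):
    def increment_by(y):
        return x + y        # x is a free variable, captured by the closure
    return increment_by

closure1 = start_at(1)
closure2 = start_at(5)
print(closure1(2), closure2(2))                 # 3 7
print(closure1.__closure__[0].cell_contents)    # 1 -- the captured x

test = "check"
def func(test2):
    return test2 == test    # merely reads the global; nothing is captured

print(func("check"))        # True
print(func.__closure__)     # None -- no closure environment at all

The __closure__ attribute is CPython's concrete version of the "referencing environment" table that the answer quotes from Wikipedia.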
263,164 | Why does every serious Github repo I do pull requests for want me to squash my commits into a single commit? I thought the git log was there so you could inspect all your history and see exactly what changes happened where, but squashing it pulls it out of the history and lumps it all into one commit. What is the point? This also seems to go against the "commit early and commit often" mantra. | So that you have a clear and concise git history that clearly and easily documents the changes done and the reasons why. For example a typical 'unsquashed' git log for me might look like the following: 7hgf8978g9... Added new slideshow feature, JIRA # 848394839
85493g2458... Fixed slideshow display issue in ie
gh354354gh... wip, done for the week
789fdfffdf... minor alignment issue
787g8fgf78... hotfix for #5849564648
9080gf6567... implemented feature # 65896859
gh34839843... minor fix (typo) for 3rd test What a mess! Whereas a more carefully managed and merged git log with a little additional focus on the messages for this might look like: 7hgf8978g9... 8483948393 Added new slideshow feature
787g8fgf78... 5849564648 Hotfix for android display issue
9080gf6567... 6589685988 Implemented pop-up to select language I think you can see the point of squashing commits generally and the same principle applies to pull requests - Readability of the History. You may also be adding to a commit log that already has hundreds or even thousands of commits and this will help keep the growing history short and concise. You want to commit early and often. It's a best practice for many reasons. I find that this leads me to frequently have commits that are "wip" (work-in-progress) or "part A done" or "typo, minor fix" where I am using git to help me work and give me working points that I can go back to if the following code isn't working out as I progress to get things working. However I do not need or want that history as part of the final git history so I may squash my commits - but see notes below as to what this means on a development branch vs. master. If there are major milestones that represent distinct working stages it's still ok to have more than one commit per feature/task/bug. However this can often highlight the fact that the ticket under development is 'too big' and needs to be broken down into smaller pieces that can standalone, for example: 8754390gf87... Implement feature switches seems like "1 piece of work". Either they exist or they don't! Doesn't seem to make sense to break it out. However experience has shown me that (depending on organizational size and complexity) a more granular path might be: fgfd7897899... Add field to database, add indexes and a trigger for the dw group
9458947548g... Add 'backend' code in controller for when id is passed in url.
6256ac24426... Add 'backend' code to make field available for views.
402c476edf6... Add feature to UI Small pieces mean easier code reviews, easier unit testing, better opportunity to qa, better alignment to Single Responsibility Principle, etc. For the practicalities of when to actually do such squashes, there are basically two distinct stages that have their own workflow your development, e.g. pull requests your work added to the mainline branch, e.g. master During your development you commit 'early and often' and with quick 'disposable' messages. You may wish to squash here sometimes, e.g. squashing in wip and todo message commits. It is ok, within the branch, to retain multiple commits that represent distinct steps you made in development. Most of the squashes you choose to do should be within these feature branches, while they are being developed and before merge to master. When adding to the mainline branch you want to the commits to be concise and correctly formatted according to the existing mainline history. This might include the Ticket Tracker system ID, e.g. JIRA as shown in examples. Squashing doesn't really apply here unless you want to 'roll-up' several distinct commits on master. Normally you don't. Using --no-ff when merging to master will use one commit for the merge and also preserve history (in the branch). Some organizations consider this to be a best practice. See more at https://stackoverflow.com/q/9069061/631619 You will also see the practical effect in git log where a --no-ff commit will be the latest commit, at the top of the HEAD (when just done), whereas without --no-ff it may be further down in the history, depending on dates and other commits. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263164",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/156595/"
]
} |
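As a practical footnote to the answer above, the squashing itself is usually done in one of two ways; the commit count, branch name and message below are placeholders, not part of the original answer:

# Option 1: interactively rewrite the last few commits on the feature branch,
# marking everything after the first one as "squash" or "fixup"
git rebase -i HEAD~4

# Option 2: collapse a whole branch into a single commit while merging it
git checkout master
git merge --squash my-feature-branch
git commit -m "8483948393 Added new slideshow feature"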
263,342 | The linked "duplicate" question is an iffy match at best, because it's asking is pattern X OK ( YES / NO ) and I'm clearly already in the NO camp, and subsequently asking what is pattern X called what steps can be taken to fix pattern X (neither of which are addressed by the linked question). I recently did a code review on a block of code that looked something like this: public class MyClass
{
private ISomething mySomething;
// ...Other variables omitted for brevity
public MyClass() { mySomething = new Something(); }
/// <summary>
/// Constructor - ONLY USE THIS FOR UNIT TESTING
/// </summary>
public MyClass(ISomething something) { mySomething = something; }
public void MyMethod()
{
// Gets called by the framework, and changes the internal state of the class by using mySomething...
}
// Other methods...
} I'm concerned specifically with the overloaded constructor. It was added purely to test this class, and will make its way into production code. Is there a name for this pattern/anti-pattern, and what can be done to solve it? For clarification, the implementation of Something was added specifically for the purpose of being able to add an overloaded constructor to MyClass . It's used nowhere else. Its existence is an instance of the very issue I'm concerned about. ISomething is very tightly coupled to MyClass . It needn't have been extracted. Implementation and interface might as well look like: public interface ISomething
{
string GetClassName();
}
public class Something : ISomething
{
public string GetClassName() { return "MyClass"; }
} That means that MyClass.MyMethod() 's body could just be replaced with return "MyClass"; However, the interface abuse/premature optimization seems like a separate issue and not in the spirit of the original question (i.e. consider it a given that the class/interface is structured like so and leave it as a separate [but valid] concern). | For methods of a class which are solely for testing purposes, I have seen the name maintenance hatch in the past. And similar to real maintenance hatches in physical machines, those methods sometimes have their purpose. For example, if you are going to make some legacy code testable when it has grown too big after some years of evolving, maintenance hatches can be of great value. But I also agree to the other answers here, such methods should be an exception, and when you keep classes and components small, with well designed interfaces, you seldom need them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263342",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44090/"
]
} |
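The pattern under discussion, reduced to a few lines; this is a Python sketch with invented names rather than the original C#, and giving the production collaborator as a default is one common way to keep the hatch invisible at production call sites:

class Something:
    def get_class_name(self):
        return "MyClass"

class MyClass:
    def __init__(self, something=None):
        # Production code calls MyClass(); tests may pass a stub or fake.
        self._something = something if something is not None else Something()

    def my_method(self):
        return self._something.get_class_name()

# test code
class FakeSomething:
    def get_class_name(self):
        return "fake"

assert MyClass().my_method() == "MyClass"
assert MyClass(FakeSomething()).my_method() == "fake"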
263,589 | I have recently joined a development project and was suddenly given the job of lead developer. My primary responsibility is to break up the programming part of the project into tasks, give these tasks to the other developers, and then make sure that the pieces work together. The problem though is I haven't a clue how to do this. I've spent my weekend with a pencil and paper trying to figure it out but I keep coming up with a list of tasks to be worked on sequentially instead of in parallel. I've thought of maybe splitting it up into features but then you end up with tasks that require editing the same files, which could require an entire task to be completely rewritten because of how early in development we are. I could have some developers wait until the program is a bit more complete and easier to create tasks for, but then I would have people sitting on their hands for who knows how many weeks. I had a conversation with my boss about my qualifications to do this and I wasn't given a choice in the matter. I have no idea what I'm doing, so any tips and nudges in the right direction would be greatly appreciated. | A proper answer to your question fills several books . I'll come up with a bullet list of buzz words which come into my mind about this, Google and books will do the rest for you. Basics Don't go alone . Try to involve your team-mates as much as possible. Travel lightweight . Democracy, but not too much. Sometimes, it's not about what satisfies the biggest number of people, but what hurts the least number of people. Cynefin: Understand your domain - chaotic, complex, complicated, or clear? And beware of confusion. Keep what (needs to be done) and how (it is done) separate . Learn about Scrum ("what"), XP (Extreme Programming, "how"), Kanban ("how much", Flow), Lean ("what not") and DevOps ("with whom"). Lean is avoiding TIMWOODS, the 8 wastes (Transport, Inventory, Motion, Waiting, Overprocessing, Overproduction, Defects, and Skill), or DOWNTIME (Defects, Overproduction, Waiting, Non-utilized talent, Transportation, Inventory, Motion, Extra processing) Lean also is about flow : For overall efficiency, flow efficiency is usually more important than individual efficiency. Learn about Software Craftsmanship , Clean Code and Pragmatic Programming . Good architecture is about maximizing the number of decisions not taken . Scrum / XP / Lean / Agile is about maximizing the amount of work not done : YAGNI . The Primary Value of Software is that you can easily change it. That it does what it should do is important but that's only its Secondary Value. Prefer an iterative and incremental approach, use time boxes for almost everything, especially meetings, make Parkinson's Law your friend because Hofstadter's Law applies. Balance team structure with an understanding of Conway's Law and Tuckman's Stages of Team Development . Programming is a quaternity, it is science , engineering , art and craft all at the same time, and those need to be in balance. Just because Scrum / XP / XYZ is good for someone (including me) doesn't necessarily mean it's good for you / suits your environment. Don't blindly follow the hype, understand it first. Inspect and Adapt! (Scrum Mantra) Avoid Duplication - Once and only Once! (XP Mantra) aka DRY - Don't Repeat Yourself aka SPOT - Single Point of Truth "What world" work breakdown process Collect Requirements as User Stories / Job Stories into a Product Backlog . User (of User Story) similar to Actor (in UML) similar to Persona similar to Role . 
Refine User Stories until they meet your team's Definition of Ready based on INVEST (Independent, Negotiable, Valuable, Estimable, Small, Testable). (Scrum Meeting: Backlog Refinement ) Sort the Product Backlog by Business Value . Don't start work on a Story before it's Ready Ready (ready according to the definition of ready). Use Planning Poker to estimate the effort of Stories in Story Points . Use Triangulation Comparison to ensure consistency of the estimates. The value of estimates isn't the number that comes out. It's the shared understanding. Yesterday's weather is the best estimate, hope the worst. Split Stories if they are too big. Improve delivery culture with a Definition of Done . Don't accept the implementation of a User Story before it's Done Done (done according to the Definition of Done). Multiple teams on the same code base should agree on and share the same Definition of Done (especially the Coding Standards ). Check your progress with Burndown Charts . Regularly check with your Stakeholders whether what the team delivers is what's really needed. (Scrum Meeting: Sprint Review ) Story Breakdown List and Describe Users / Personas / Actors / Roles (Product Owner) Epic -> Stories (User Story or Job Story) (Product Owner) Story -> Acceptance Criteria (Product Owner) Story -> Subtasks (Dev Team) Acceptance Criteria -> Acceptance Tests (Spec: Product Owner, Impl: Dev Team) Start with a Walking Skeleton which is a minimalistic End-to-(Half-End) . Create an MVP - Minimum Viable Product . Expand the MVP using SMURFS - Specifically Marketable, Useful, Releasable Feature Sets . "How world" realization Apply the 4 Rules of Simple Design: 1. Passes the tests. 2. Communicates intent. 3. No redundancy. 4. Fewest elements. (And understand why J. B. Rainsberger argues that no redundancy is more important than communicating intent, and why I (Christian Hujer) argue that communicating intent is more important than passing the tests.) Use OOA/D , UML and CRC Cards , but avoid the big design upfront . Implement object-oriented , structured and functional at the same time as much as possible, regardless of the programming language. Use Version Control (preferably distributed ). Start with Acceptance Tests . Apply TDD , letting the Three Laws of TDD drive you through the Red-Green-Refactor-Cycle in the steps from the TPP ( Transformation Priority Premise ), with Single-Assert-Rule , 4 A's , GWT (Given When Then) from BDD . " Unit Tests are tests which run fast ." — Michael Feathers Apply the SOLID and the package principles to manage Coupling and Cohesion .
Example: S in SOLID is SRP = Single Responsibility Principle, significantly reduces the number of edit- resp. merge-conflicts in teams. Know Law of Demeter / Tell, Don't Ask . Use Continuous Integration , if applicable even Continuous Delivery (DevOps). Use Collective Code Ownership based on an agreed common Coding Standard (which should be part of the Definition of Done ). Apply Continuous Design Improvement (fka Continuous Refactoring). The Source Code is the Design . Still upfront thinking is indispensable, and nobody will object a few good clarifying UML diagrams. XP doesn't mean no day is architecture day, it means every day is architecture day. It's a focus on architecture, not a defocus, and the focus is in the code. Keep your Technical Debt low, avoid the five design smells: Fragility , Rigidity , Immobility , Opacity , and Viscosity . Architecture is about business logic, not about persistence and delivery mechanisms. Architecture is a team sport ( there is no 'I' in Architecture ). Design Patterns , Refactoring and the Transformation Priority Premise . Project Code is the ATP-Trinity with priorities: 1. Automation Code , 2. Test Code , 3. Production Code . Regularly check with your team peers whether how the team delivers can be improved. (Scrum Meeting: Sprint Retrospective ) Tests should be FIRST - Fast, Independent, Repeatable, Self-Validating and Timely. Above list is certainly incomplete, and some parts might even be disputable! If all this scares you - don't worry, because it should scare you! Succeeding software development projects in teams is not an easy task, and rarely are people properly trained and educated in this art. If this scares you, your intuition is working properly, listen to it. You want to be prepared. Talk to your boss, get some time and training. See Also Who writes the technical 'user stories' in scrum Further reading (online) Portland Pattern Repository by Ward Cunningham et al Principles of OOD by Robert "Uncle Bob" C. Martin Pragmatic Programmer Tips Further reading (books) Clean Code by Robert C. Martin Agile Software Development: Principles, Patterns, and Practices by Robert C. Martin The Pragmatic Programmer - From Journeyman to Master by Andrew Hunt and David Thomas Working Effectively with Legacy Code by Michael Feathers Refactoring - Improving the Design of Existing Code by Martin Fowler Refactoring to Patterns by Joshua Kerievsky The Ten Day MBA by Steven Silbiger (sic!) Domain-Driven Design by Eric Evans User Stories Applied by Mike Cohn Object-Oriented Analysis and Design with Applications by Gray Booch et al Design Patterns by the Gang of Four Test Driven Development by Kent Beck Extreme Programming by Kent Beck [if Java] Effective Java by Joshua Bloch | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263589",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157114/"
]
} |
263,595 | This is a question I was asked at a job interview, and I can't figure out the answer they were looking for, so I'm hoping someone here might have some ideas. The goal is to write a function that is guaranteed to never return the same value twice. Assume that this function will be accessed by multiple machines concurrently. My idea was to assign each machine a unique id and pass that value into the unique value generator function: var i = 0;
function uniq(process_id, machine_id) {
return (i += 1).toString() + machine_id + "-" + process_id;
} This would avoid the fallout from race conditions since even if two or more processes read the same value for i , each return value is tagged a unique combination of process id and machine id. However, my interviewer didn't like this answer because bringing another machine online involves assigning it an id. So can anybody think of another way to solve this that doesn't involve configuring each machine to have a unique id? I'd like to have an answer in case this question comes up again. Thanks. | Don't get fancy, just toss a simple (threadsafe) counter behind some communication endpoint (WCF, web service, whatever): long x = long.MinValue;
public long ID(){
return Interlocked.Increment(ref x);
} Yes, it will eventually overflow. Yes, it doesn't handle reboots. Yes, it's not random. Yes, someone could run this on multiple servers. This is the simplest thing that satisfies practical requirements. Then let them be the ones that follow up with those problems (to make sure they understand the limitations, do they really think you need more than 2^64 ids), so you can then ask about what trade-offs are okay. Does it need to survive reboots? What about hard drive failure? What about nuclear war? Does it need to be random? How random? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263595",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157119/"
]
} |
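The same idea as the answer's C# snippet, sketched in Python; persistence and multi-server concerns are deliberately ignored, exactly as the answer argues they should be until someone asks for them:

import threading

class IdGenerator:
    def __init__(self, start=0):
        self._next = start
        self._lock = threading.Lock()

    def next_id(self):
        # The lock makes increment-and-return atomic across threads.
        with self._lock:
            value = self._next
            self._next += 1
            return value

gen = IdGenerator()
print(gen.next_id(), gen.next_id(), gen.next_id())   # 0 1 2
# Put one instance of this behind a single endpoint and every caller,
# on any machine, receives a value nobody else has received.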
263,651 | I've heard in a number of places now that people expect languages to use, or at least have, a self-hosting compiler in order to deserve respect. I'm curious as to why this is. A compiler seems like a very significant piece of software to write, and I imagine not all languages are well-suited to creating them. Wouldn't it make more sense to spend the effort working in something that will give better results? | Wouldn't it make more sense to spend the effort working in something that will give better results? Like what? The nice thing about compilers is that they don't have many dependencies. This makes them good candidates for a new language that likely doesn't have a very large or diverse standard library yet. Better yet, they require a variety of things, while also being well studied. The variety helps make sure that your example tests various parts of the language. Being well studied means that you have other compilers to compare against - as well as giving more credence to academic sorts that you know what you're doing. And while compilers seem like a ton of work, they're pretty small in the grand scheme of things. If the language implementers can't even do something they've done before in the new language, how are they going to do novel things? How are they going to handle the really big stuff like standard libraries or an IDE? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263651",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20675/"
]
} |
263,726 | Consider an e-commerce site, where Alice and Bob are both editing the product listings. Alice is improving descriptions, while Bob is updating prices. They start editing the Acme Wonder Widget at the same time. Bob finishes first and saves the product with the new price. Alice takes a bit longer to update the description, and when she finishes, she saves the product with her new description. Unfortunately, she also overwrites the price with the old price, which was not intended. In my experience, these issues are extremely common in web apps. Some software (e.g. wiki software) does have protection against this - usually the second save fails with "the page was updated while you were editing". But most web sites do not have this protection. It's worth noting that the controller methods are thread-safe in themselves. Usually they use database transactions, which make them safe in the sense that if Alice and Bob try to save at the precise same moment, it won't cause corruption. The race condition arises from Alice or Bob having stale data in their browser. How can we prevent such race conditions? In particular, I'd like to know: What techniques can be used? e.g. tracking the time of last change. What are the pros and cons of each. What is a helpful user experience? What frameworks have this protection built in? | You need to "read your writes", which means before you write down a change, you need to read the record again and check if any changes were made to it since you last read it. You can do this field-by-field (fine-grained) or based on a timestamp (coarse-grained). While you do this check you need an exclusive lock on the record.
If no changes were made, you can write down your changes and release the lock. If the record has changed in the meantime, you abort the transaction, release the lock and notify the user. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263726",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108835/"
]
} |
263,735 | I have recently read this excellent article on the microservice architecture: http://www.infoq.com/articles/microservices-intro It states that when you load a web page on Amazon, then 100+ microservices cooperate to serve that page. That article describes that all communication between microservices can only go through an API. My question is why it is so bad to say that all database writes can only go through an API, but you are free to read directly from the databases of the various micro services. One could for example say that only a few database views are accessible outside the micro service so that the team maintaining the micro service know that as long as they keep these views intact then they can change the database structure of their micro service as much as they want. Am I missing something here? Is there some other reason why data should only be read via an API? Needless to say, my company is significantly smaller than Amazon (and always will be) and the maximum number of users we can ever have is about 5 million. | Databases are not very good at information hiding, which is quite plausible, because their job is to actually expose information. But this makes them a lousy tool when it comes to encapsulation. Why do you want encapsulation? Scenario: you tie a couple of components to an RDBMS directly, and you see one particular component becoming a performance bottle-neck for which you might want to denormalize the database, but you can't because all other components would be affected. You may even realize that you'd be better off with a document store or a graph database than with an RDBMS. If the data is encapsulated by a small API, you have a realistic chance to reimplement said API any way you need. You can transparently insert cache layers and what not. Fumbling with the storage layer directly from the application layer is the diametrical opposite of what the dependency inversion principle suggests to do. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263735",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5094/"
]
} |
263,812 | I am developing a web application using ASP.NET. I would like to hire some developers to speed up my progress, but I am afraid of what happens if my team members take my code and go away. I am just wondering how big companies keep their product code from falling into the wrong hands. For example: as I said earlier, I am hiring some staff to speed up my process, but not everyone is the same. Some people may work hard, and some may grab the whole code base and leave. So my question is: how do big companies like Microsoft, Facebook and Yahoo keep their code protected? Facebook currently has about 8,348 employees (September 2014). If I say each and every one of them has the full Facebook code, am I right or wrong? So what best practices should I follow if I hire staff members to speed up the process? | You're overestimating the importance of source code, and underestimating the importance of everything else in the value chain of selling software. Sure, a contractor might steal your source code. But what then? Will they be able to create a release, maintain the code further, contact your customers and sell them a knock-off for a lower price? Almost certainly not. Particularly for extremely large companies like Microsoft, making money from software involves a hell of a lot more than compiling the classes and shipping them to people for money. Nobody could possibly steal the Windows source code and proceed to put Microsoft out of business; the legal, practical and logistical hurdles are just way too high to pull that off. That leaves the fear that by reading the source, competitors will learn the clever tricks you used and gain an advantage. This, too, is almost always grossly overrated; if your ideas are any good, you will have to ram them down people's throats! Software succeeds big not because it uses clever tricks, but because it fulfills a need accurately. Successful software shops do good market research, gather good requirements, have a solid production and testing process in place and generally do things in the most predictable, easy-to-plan way. To be sure, sometimes there is a market advantage to be gained from having more brilliant engineers than the competition. Read Paul Graham's description of Viaweb one day - the competition didn't even know they were using Common Lisp! But are you really a Paul Graham? Probably not. (And Microsoft does in fact make Windows source code available to many partner universities - under non-disclosure agreements, to be sure, but still, exactly to the people that might use the ideas gleaned from it to compete with Microsoft! But this never happens, because possessing someone's source code doesn't magically equate to usurping their market position.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263812",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157429/"
]
} |
263,821 | I have a method where all logic is performed inside a foreach loop that iterates over the method's parameter: public IEnumerable<TransformedNode> TransformNodes(IEnumerable<Node> nodes)
{
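// if "nodes" is empty the loop below never runs, so the caller simply gets back an empty sequence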
foreach(var node in nodes)
{
// yadda yadda yadda
yield return transformedNode;
}
} In this case, sending in an empty collection results in an empty collection, but I'm wondering if that's unwise. My logic here is that if somebody is calling this method, then they intend to pass data in, and would only pass an empty collection to my method in erroneous circumstances. Should I catch this behaviour and throw an exception for it, or is it best practice to return the empty collection? | Utility methods should not throw on empty collections. Your API clients would hate you for it. A collection can be empty; a "collection-that-must-not-be-empty" is conceptually a much more difficult thing to work with. Transforming an empty collection has an obvious outcome: the empty collection. (You may even save some garbage by returning the parameter itself.) There are many circumstances in which a module maintains lists of stuff that may or may not be already filled with something. Having to check for emptiness before each and every call to transform is annoying and has the potential to turn a simple, elegant algorithm into an ugly mess. Utility methods should always strive to be liberal in their inputs and conservative in their outputs. For all these reasons, for God's sake, handle the empty collection correctly. Nothing is more exasperating than a helper module that thinks it knows what you want better than you do. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263821",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91018/"
]
} |
263,874 | We use SonarQube to analyse our Java code and it has this rule (set to critical): "Public methods should throw at most one checked exception. Using checked exceptions forces method callers to deal with errors, either by propagating them or by handling them. This makes those exceptions fully part of the API of the method. To keep the complexity for callers reasonable, methods should not throw more than one kind of checked exception." Another part of the Sonar documentation repeats the same rule text and adds an example. The following code: public void delete() throws IOException, SQLException { // Non-Compliant
/* ... */
} should be refactored into: public void delete() throws SomeApplicationLevelException { // Compliant
/* ... */
} Overriding methods are not checked by this rule and are allowed to throw several checked exceptions. I've never come across this rule/recommendation in my readings on exception handling and have tried to find some standards, discussions etc. on the topic. The only thing I've found is this from CodeRach: How many exceptions should a method throw at most? Is this a well-accepted standard? | Let's consider the situation where you have the provided code of: public void delete() throws IOException, SQLException { // Non-Compliant
/* ... */
} The danger here is that the code that you write to call delete() will look like: try {
foo.delete()
} catch (Exception e) {
/* ... */
} This is bad too. And it will be caught with another rule that flags catching the base Exception class. The key is to not write code that makes you want to write bad code elsewhere. The rule that you are encountering is a rather common one. Checkstyle has it in its design rules: ThrowsCount Restricts throws statements to a specified count (1 by default). Rationale: Exceptions form part of a method's interface. Declaring a method to throw too many differently rooted exceptions makes exception handling onerous and leads to poor programming practices such as writing code like catch(Exception ex). This check forces developers to put exceptions into a hierarchy such that in the simplest case, only one type of exception need be checked for by a caller but any subclasses can be caught specifically if necessary. This precisely describes the problem and what the issue is and why you shouldn't do it. It is a well accepted standard that many static analysis tools will identify and flag. And while you may do it according to language design, and there may be times when it is the right thing to do, it is something that you should see and immediately go "um, why am I doing this?" It may be acceptable for internal code where everyone is disciplined enough to never catch (Exception e) {} , but more often than not I've seen people cut corners especially in internal situations. Don't make people using your class want to write bad code. I should point out that the importance of this is lessened with Java SE 7 and later because a single catch statement can catch multiple exceptions ( Catching Multiple Exception Types and Rethrowing Exceptions with Improved Type Checking from Oracle). With Java 6 and before, you would have code that looked like: public void delete() throws IOException, SQLException {
/* ... */
} and try {
foo.delete()
} catch (IOException ex) {
logger.log(ex);
throw ex;
} catch (SQLException ex) {
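// the same handling as the IOException block above, repeated verbatim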
logger.log(ex);
throw ex;
} or try {
foo.delete()
} catch (Exception ex) {
logger.log(ex);
throw ex;
} Neither of these options with Java 6 is ideal. The first approach violates DRY . Multiple blocks doing the same thing, again and again - once for each exception. You want to log the exception and rethrow it? Ok. The same lines of code for each exception. The second option is worse for several reasons. First, it means that you are catching all the exceptions. Null pointer gets caught there (and it shouldn't). Furthermore, you are rethrowing an Exception which means that the method signature would be deleteSomething() throws Exception which just makes a mess further up the stack as people using your code are now forced to catch(Exception e) . With Java 7, this isn't as important because you can instead do: catch (IOException|SQLException ex) {
logger.log(ex);
throw ex;
} Furthermore, the improved type checking helps if one does catch and rethrow the exceptions being thrown: public void rethrowException(String exceptionName)
throws IOException, SQLException {
try {
foo.delete();
} catch (Exception e) {
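// Java 7's more precise rethrow analysis still lets this method declare only IOException and SQLException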
throw e;
}
} The type checker will recognize that e may only be of types IOException or SQLException. I'm still not overly enthusiastic about the use of this style, but it isn't causing code as bad as it was under Java 6 (where it would force you to have the method signature be the superclass that the exceptions extend). Despite all these changes, many static analysis tools (Sonar, PMD, Checkstyle) are still enforcing Java 6 style guides. It's not a bad thing. I tend to agree that these warnings should still be enforced, but you might change the priority on them to major or minor according to how your team prioritizes them. Whether exceptions should be checked or unchecked is a matter of great debate, and one can easily find countless blog posts taking up each side of the argument. However, if you are working with checked exceptions, you probably should avoid throwing multiple types, at least under Java 6. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/263874",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/122885/"
]
} |
264,159 | There are very complex open source projects out there, and I think I could make some contributions to some of them, and I wish I could, but the barrier to entry is too high for a single reason: to change one line of code in a big project you have to understand all of it. You don't need to read all the code (and even if you did, it wouldn't be sufficient) or understand what every single line does and why, because the code is probably modularized and compartmentalized, so there are abstractions in place. But even then you need to get an overview of the project so you can know where the modules are, where one module interfaces with another, what exactly each module does and why, and in which directories and files each of these things happens. I'm calling this a code overview , as the name of a section that open source projects could have on their website or in their documentation, explaining their code to outsiders. I think it would benefit potential contributors , as they would be able to identify places where they could build; the actual primary coders involved, as writing everything down would help them reorganize their own understanding; and users , as they would be better able to understand and report the bugs they experience and maybe even become contributors. But still I have never seen one of these "code overviews". Why? Are there things like these and I'm missing them? Things that do the same job as I am describing? Or is this a completely useless idea, as everybody, except for me, can understand projects with thousands of lines of code easily? | Because it's extra effort to create and maintain such a document, and too many people don't understand the associated benefits. Many programmers aren't good technical writers (although many are); they rarely write documents strictly for human consumption, therefore they don't have practice and don't like doing it. Writing a code overview takes time that you can't spend on coding, and the initial benefit to a project is always greater if you can say "We support all three encoding variants" rather than "We have really neat explanations of our code!" The notion that such a document will attract more developers so that in the long run more code will get written isn't exactly foreign to them, but it's perceived as an uncertain gamble; will this text really make the difference between snagging a collaborator or not? If I keep coding right now , we will certainly get this thing done. A code overview document can also make people feel defensive; it's hard to describe higher-level decisions without feeling the need to justify them, and very often people make decisions without a reason that "sounds good enough" when actually written down. There's also an effect related to the aforementioned one: since updating the text to suit the changing code causes additional effort, this can discourage sweeping changes to the code. Sometimes this stability is a good thing, but if the code really does need a mid-level rewrite, it turns into a liability. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264159",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157812/"
]
} |
264,379 | When I design and create the software I work on, I typically design and create the back-end SQL tables first and then move on to the actual programming. The project I'm currently working on has me down right puzzled though. This is probably due to a lack of good, solid requirements, but there's unfortunately little I can do about that this time. It's a "just go make it happen" kind of situation.. but I digress. I'm thinking of flipping my workflow on it's head and creating the UI and data model classes first in hopes that working that out will make it clear to me what my database schema will eventually look like. Is this a good idea? I'm nervous that I'll end up with a UI and still no idea of how to structure the db. If anyone is curious, I'm using SQL Server as a backend and MS Access as a front end application. (Access isn't my choice either... so please don't hate on it too bad.) | What came first, the process, or the data used by that process? I know this is kind of a "chicken or the egg" question, but in the case of software, I believe it is the process. For instance, you can build up your data model incrementally by implementing a single use-case at a time with just in-memory persistence (or anything as easy to implement). When you feel you've implemented enough use-cases to outline the basic entities, you may replace the in-memory persistence by a real database, and then continue to refine the schema as you go forward, one use-case at a time. This takes the focus out of the database and moves it to the core of the problem: the business rules. If you start by implementing the business rules, you'll eventually find (by a process very similar to Natural Selection, by the way) which data is truly needed by the business. If you start by modeling the database, without the feedback of whether that data is truly needed (or in that format, or in that level of normalization, etc...), you'll either end up doing a lot of late adjustments in the schema (which may require heavy migration procedures, if the business is already running with it), or you'll have to implement "work-arounds" in the business rules to make up for the out-of-tune data model. TL;DR: The database depends on the the business - it is defined by them. You won't need the data unless you have a process that operates with it (a report is also a process). Implement the process first, and you'll find which data it needs. Model the data first, and you may just be able to count how many assumptions were wrong when you first modeled it. A little out of the topic but very important: the workflow I describe is often used along with very important practices such as "The simplest thing that could possibly work", test-driven development, and a focus on decoupling your architecture from the details that get in your way (hint: database). About the last one, this talk sums up the idea pretty well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264379",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/142319/"
]
} |
264,464 | I understand the differences in capacity and values they can represent but it seems as though people always use Int32 regardless of whether it is appropriate. No one ever seems to use the unsigned version ( uint ) even though a lot of the time it fits better as it describes a value that cannot be negative (perhaps to represent an ID of a database record). Also, no one ever seems to use short/Int16 regardless of the required capacity of the value. Objectively, are there cases where it's better to use uint or short/Int16 and if so, which are they? | I suspect you are referring to a perspective colored by your own experiences where you have not worked around folks who use integral types properly. This may well be a common occurrence, but it's been my experience that people commonly do use them correctly as well. The benefit is memory space and cpu time, possibly IO space as well depending on whether the types are ever sent over the wire or to a disk. Unsigned types give you compiler checks to ensure you won't do certain operations that are impossible, plus extending the available range while maintaining the smaller size for heightened performance where it may be necessary. The correct use is as you would expect - anytime you know for certain you can use them permanently (do not constrain without certainty or you will regret it later). If you're trying to represent something that could never reasonably be negative ( public uint NumberOfPeople ) use an unsigned type. If you're trying to represent something that could never reasonably be greater than 255 ( public byte DamagedToothCount ), use a byte. If you're trying to represent something that could reasonably be greater than 255, but never a significant number of thousands , use a short ( public short JimmyHoffasBankBalance ). If you're trying to represent something that could be many hundreds of thousands, millions even, but unlikely to ever reach multiple billions, use an int ( public int HoursSinceUnixEpoch ). If you know for certain this number may have an unboundedly large value or you think it may have multiple billions but you're not certain how many billions, long is your best bet. If long's not big enough you have an interesting problem and need to start looking at arbitrary precision numerics ( public long MyReallyGreatAppsUserCountThisIsNotWishfulThinkingAtAll ). This reasoning can be used throughout in choosing between signed, unsigned, and varied sizes of types et al, just think about the logical truths of the data you're representing in reality. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264464",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154465/"
]
} |
264,538 | I have an enterprise application running that uses both MySQL and MongoDB datastores. My development team all have SSH access to the machine in order to perform application releases, maintenance, etc. I recently raised a risk in the business when users started storing highly sensitive data on the application that the developers have indirect access to this data which caused a bit of a storm, so I have now been mandated with securing the data so that it is not accessible. To me this does not seem possible because if the application has access to the database then a developer with access to the machine and application source will always be able to access the data. | Security is not a magic wand you can wave at the end of a project, it needs to be considered and built in from day 1. It is not a bolt-on, it is the consistent application of a range of solutions applied iteratively and reviewed regularly as part of a whole system, which is only as strong as the weakest link. As it stands you have flagged a security concern which is a good first step. Now as a minimum you have to define:- What data you are trying to protect? Who are you trying to protect that data from? Who actually needs access to what (and when)? What is the legal/financial/business impact of that data being compromised? What is the legal/financial/business need for a person/group having access to the data? What budget is the business willing to assign to a "get secure, stay secure" strategy when it was not a business requirement previously? What access does the system need to the data? What does this process and systems this application rely on? What is done to secure those environments? Who is going to be responsible for implementing it and reviewing the whole process? Until you know all those in detail you really don't have anything to work with. That information will define what mitigations to those threats you can (and cannot) apply and why. It may be that the best thing to do is recognise that you don't have the necessary experience and that it would be best to bring in someone new with that experience. I quite often hear the response that there's no budget - if it is considered genuinely important then the budget will be found. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264538",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/158196/"
]
} |
264,563 | I am writing a parser and as a part of that, I have an Expander class that "expands" single complex statement into multiple simple statements. For example, it would expand this: x = 2 + 3 * a into: tmp1 = 3 * a
x = 2 + tmp1 Now I'm thinking about how to test this class, specifically how to Arrange the tests. I could manually create the input syntax tree: var input = new AssignStatement(
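// the whole syntax tree for "x = 2 + 3 * a", spelled out node by node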
new Variable("x"),
new BinaryExpression(
new Constant(2),
BinaryOperator.Plus,
new BinaryExpression(new Constant(3), BinaryOperator.Multiply, new Variable("a")))); Or I could write it as a string and parse it: var input = new Parser().ParseStatement("x = 2 + 3 * a"); The second option is much simpler, shorter and readable. But it also introduces a dependency on Parser , which means that a bug in Parser could fail this test. So, the test would stop being a unit test of Expander , and I guess technically becomes an integration test of Parser and Expander . My question is: is it okay to rely mostly (or completely) on this kind of integration test to test this Expander class? | You're going to find yourself writing a lot more tests, of much more complicated, interesting, and useful behavior, if you can do so simply. So the option that involves var input = new Parser().ParseStatement("x = 2 + 3 * a"); is quite valid. It does depend on another component. But everything depends on dozens of other components. If you mock something to within an inch of its life, you're probably depending on a lot of mocking features and test fixtures. Developers sometimes over-focus on the purity of their unit tests , or developing unit tests and unit tests only , without any module, integration, stress or other kinds of tests. All those forms are valid and useful, and they're all the proper responsibility of developers--not just Q/A or operations personnel further down the pipeline. One approach I've used is to start with these higher level runs, then use the data produced from them to construct the long-form, lowest-common-denominator expression of the test. E.g. when you dump the data structure from the input produced above, then you can easily construct the: var input = new AssignStatement(
new Variable("x"),
new BinaryExpression(
new Constant(2),
BinaryOperator.Plus,
new BinaryExpression(new Constant(3), BinaryOperator.Multiply, new Variable("a")))); kind of test that tests at the very lowest level. That way you get a nice mix: A handful of the very most basic, primitive tests (pure unit tests), but have not spent a week writing tests at that primitive level. That gives you the time resource needed to write many more, slightly less atomic tests using the Parser as a helper. End result: More tests, more coverage, more corner and other interesting cases, better code and higher quality assurance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264563",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/213/"
]
} |
264,728 | I have a class that represents a list of people. class AddressBook
{
public:
AddressBook();
private:
std::vector<People> people;
} I want to allow clients to iterate over the vector of people. The first thought I had was simply: std::vector<People> & getPeople() { return people; } However, I do not want to leak the implementation details to the client . I may want to maintain certain invariants when the vector is modified, and I lose control over these invariants when I leak the implementation. What's the best way to allow iteration without leaking the internals? | "Allow iteration without leaking the internals" is exactly what the iterator pattern promises. Of course that is mainly theory, so here is a practical example: class AddressBook
{
using peoples_t = std::vector<People>;
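// the concrete container stays a private detail; only the iterator aliases below are exposed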
public:
using iterator = peoples_t::iterator;
using const_iterator = peoples_t::const_iterator;
AddressBook();
iterator begin() { return people.begin(); }
iterator end() { return people.end(); }
const_iterator begin() const { return people.begin(); }
const_iterator end() const { return people.end(); }
const_iterator cbegin() const { return people.cbegin(); }
const_iterator cend() const { return people.cend(); }
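// cbegin()/cend() give callers read-only iteration even on a non-const AddressBook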
private:
peoples_t people;
}; You provide standard begin and end methods, just like sequences in the STL, and implement them simply by forwarding to the vector's methods. This does leak some implementation detail, namely that you're returning a vector iterator, but no sane client should ever depend on that, so it is IMO not a concern. I've shown all overloads here, but of course you can start by just providing the const version if clients should not be able to change any People entries. Using the standard naming has benefits: anyone reading the code immediately knows it provides 'standard' iteration and as such works with all common algorithms, range-based for loops etc. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264728",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/120251/"
]
} |
264,736 | Usually in C, we have to tell the computer the type of data in variable declaration. E.g. in the following program, I want to print the sum of two floating point numbers X and Y. #include<stdio.h>
main()
{
float X=5.2;
float Y=5.1;
float Z;
Z=Y+X;
printf("%f",Z);
} I had to tell the compiler the type of variable X. Can't the compiler determine the type of X on its own? Yes, it can if I do this: #define X 5.2 I can now write my program without telling the compiler the type of X as: #include<stdio.h>
#define X 5.2
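/* X is handled by the preprocessor here, not declared as a variable */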
main()
{
float Y=5.1;
float Z;
Z=Y+X;
printf("%f",Z);
} So we see that C language has some kind of feature, using which it can determine the type of data on its own. In my case it determined that X is of type float. Why do we have to mention the type of data, when we declare something in main()? Why can't the compiler determine the data type of a variable on its own in main() as it does in #define . | You are comparing variable declarations to #define s, which is incorrect. With a #define , you create a mapping between an identifier and a snippet of source code. The C preprocessor will then literally substitute any occurrences of that identifier with the provided snippet. Writing #define FOO 40 + 2
int foos = FOO + FOO * FOO; ends up being the same thing to the compiler as writing int foos = 40 + 2 + 40 + 2 * 40 + 2; Think of it as automated copy&paste. Also, normal variables can be reassigned, while a macro created with #define can not (although you can re- #define it). The expression FOO = 7 would be a compiler error, since we can't assign to “rvalues”: 40 + 2 = 7 is illegal. So, why do we need types at all? Some languages apparently get rid of types, this is especially common in scripting languages. However, they usually have something called “dynamic typing” where variables don't have fixed types, but values have. While this is far more flexible, it's also less performant. C likes performance, so it has a very simple and efficient concept of variables: There's a stretch of memory called the “stack”. Each local variable corresponds to an area on the stack. Now the question is how many bytes long does this area have to be? In C, each type has a well-defined size which you can query via sizeof(type) . The compiler needs to know the type of each variable so that it can reserve the correct amount of space on the stack. Why don't constants created with #define need a type annotation? They are not stored on the stack. Instead, #define creates reusable snippets of source code in a slightly more maintainable manner than copy&paste. Literals in the source code such as "foo" or 42.87 are stored by the compiler either inline as special instructions, or in a separate data section of the resulting binary. However, literals do have types. A string literal is a char * . 42 is an int but can also be used for shorter types (narrowing conversion). 42.8 would be a double . If you have a literal and want it to have a different type (e.g. to make 42.8 a float , or 42 an unsigned long int ), then you can use suffixes – a letter after the literal that changes how the compiler treats that literal. In our case, we might say 42.8f or 42ul . Some languages have static typing as in C, but the type annotations are optional. Examples are ML, Haskell, Scala, C#, C++11, and Go. How does that work? Magic? No, this is called “type inference”. In C# and Go, the compiler looks at the right hand side of an assignment, and deduces the type of that. This is fairly straightforward if the right hand side is a literal such as 42ul . Then it's obvious what the type of the variable should be. Other languages also have more complex algorithms that take into account how a variable is used. E.g. if you do x/2 , then x can't be a string but must have some numeric type. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264736",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/106313/"
]
} |
264,824 | I've recently been taking a course on Software design and there was a recent discussion/recommendation about using a 'microservices' model where components of a service are separated into microservice sub-components that are as independent as possible. One part that was mentioned was instead of following the very often seen model of having a single database that all the microservices talk to, you would have a separate database running for each of the microservices. A better worded and more detailed explanation of this can be found here: http://martinfowler.com/articles/microservices.html under the section
Decentralized Data Management the most salient part saying this: Microservices prefer letting each service manage its own database,
either different instances of the same database technology, or
entirely different database systems - an approach called Polyglot
Persistence. You can use polyglot persistence in a monolith, but it
appears more frequently with microservices. I like this concept and, among many other things, see that as a strong improvement on maintenance and having projects with multiple people working on them. That said, I am by no means an experienced software architect. Has anyone ever tried to implement it? What benefits and hurdles did you run into? | Let's talk positives and negatives of the microservice approach. First negatives. When you create microservices, you're adding inherent complexity in your code. You're adding overhead. You're making it harder to replicate the environment (eg for developers). You're making debugging intermittent problems harder. Let me illustrate a real downside. Consider hypothetically the case where you have 100 microservices called while generating a page, each of which does the right thing 99.9% of the time. But 0.05% of the time they produce wrong results. And 0.05% of the time there is a slow connection request where, say, a TCP/IP timeout is needed to connect and that takes 5 seconds. About 90.5% of the time your request works perfectly. But around 5% of the time you have wrong results and about 5% of the time your page is slow. And every non-reproducible failure has a different cause. Unless you put a lot of thought around tooling for monitoring, reproducing, and so on, this is going to turn into a mess. Particularly when one microservice calls another that calls another a few layers deep. And once you have problems, it will only get worse over time. OK, this sounds like a nightmare (and more than one company has created huge problems for themselves by going down this path). Success is only possible if you are clearly aware of the potential downside and consistently work to address it. So what about that monolithic approach? It turns out that a monolithic application is just as easy to modularize as microservices. And a function call is both cheaper and more reliable in practice than an RPC call. So you can develop the same thing except that it is more reliable, runs faster, and involves less code. OK, then why do companies go to the microservices approach? The answer is because as you scale, there is a limit to what you can do with a monolithic application. After so many users, so many requests, and so on, you reach a point where databases do not scale, webservers can't keep your code in memory, and so on. Furthermore, microservice approaches allow for independent and incremental upgrades of your application. Therefore a microservice architecture is a solution to scaling your application. My personal rule of thumb is that going from code in a scripting language (eg Python) to optimized C++ generally can improve 1-2 orders of magnitude on both performance and memory usage. Going the other way to a distributed architecture adds a magnitude to resource requirements but lets you scale indefinitely. You can make a distributed architecture work, but doing so is harder. Therefore I would say that if you are starting a personal project, go monolithic. Learn how to do that well. Don't be distributed because (Google|eBay|Amazon|etc) are. If you land in a large company that is distributed, pay close attention to how they make it work and don't screw it up. And if you wind up having to do the transition, be very, very careful because you're doing something hard that is easy to get very, very wrong. Disclosure: I have close to 20 years of experience in companies of all sizes. And yes, I've seen both monolithic and distributed architectures up close and personal.
It is based on that experience that I am telling you that a distributed microservice architecture really is something that you do because you need to, and not because it is somehow cleaner and better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264824",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/158524/"
]
} |
264,985 | If I want to save and retrieve an object, should I create another class to handle it, or would it be better to do that in the class itself? Or maybe a mix of both? Which is recommended according to the OOD paradigm? For example: Class Student
{
public string Name {set; get;}
....
public bool Save()
{
SqlConnection con = ...
// Save the class in the db
}
public bool Retrieve()
{
// search the db for the student and fill the attributes
}
public List<Student> RetrieveAllStudents()
{
// this is such a method I have most problem with it
// that an object returns an array of objects of its own class!
}
} Versus. (I know the following is recommended, however it seems to me a bit against the cohesion of Student class) Class Student { /* */ }
Class DB {
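// all persistence logic lives here, kept out of the Student class itself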
public bool AddStudent(Student s)
{
}
public Student RetrieveStudent(Criteria)
{
}
public List<Student> RetrieveAllStudents()
{
}
} How about mixing them? Class Student
{
public string Name {set; get;}
....
public bool Save()
{
/// do some business logic!
db.AddStudent(this);
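// the actual persistence work is delegated to the DB class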
}
public bool Retrieve()
{
// build the criteria
db.RetrieveStudent(criteria);
// fill the attributes
}
} | Single Responsibility Principle , Separation of Concerns and Functional Cohesion . If you read up on these concepts, the answer you get is: Separate them . A simple reason to separate the Student from the "DB" class (or StudentRepository , to follow more popular conventions) is to allow you to change your "business rules", present in the Student class, without affecting code that is responsible for persistence, and vice-versa. This kind of separation is very important, not only between business rules and persistence, but between the many concerns of your system, to allow you to make changes with minimal impact in unrelated modules (minimal because sometimes it is unavoidable). It helps to build more robust systems, that are easier to maintain and are more reliable when there are constant changes. By having business rules and persistence mixed together, either in a single class as in your first example, or with DB as a dependency of Student , you're coupling two very different concerns. It may look like they belong together; they seem to be cohesive because they use the same data. But here's the thing: cohesion cannot be measured solely by the data that is shared among procedures; you must also consider the level of abstraction at which they exist. In fact, the ideal type of cohesion is described as: Functional cohesion is when parts of a module are grouped because they all contribute to a single well-defined task of the module. And clearly, performing validations about a Student while also persisting it do not form "a single well-defined task". So again, business rules and persistence mechanisms are two very different aspects of the system, which, by many principles of good object-oriented design, should be kept separate. I recommend reading about Clean Architecture , watching this talk about the Single Responsibility Principle (where a very similar example is used), and watching this talk about Clean Architecture as well. These concepts outline the reasons behind such separations. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/264985",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/151256/"
]
} |
266,235 | Since 4.8 release, the C++ compiler GCC (the G++ part of it) is written not in C anymore, but in C++ itself. I have a hypothetical question on this. I wonder how to compile the C++ code of GCC on a new platform that has no C++ compiler yet. Of course, you could use prebuilt binaries compiled on other machines. Or you could use an older version of GCC that was written in C and compile the current version with it. However, without prebuilt binaries and just the newest version, you were stuck, right? If not, are there other implications on this situation raised by the switch from C to C++ of the GCC project? | This is actually a well-known concept called bootstrapping . Basically, there exists, somewhere, a minimal C codebase to build a version of GCC that's capable of building the current GCC codebase. Self-hosting languages have been doing things like that for decades. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/266235",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94978/"
]
} |
266,369 | int a = 1;
int b = 2;
int sum = a + b;
string expression = "Expression: " + a + " + " + b + " = " + sum;
Console.WriteLine(expression); //displays Expression 1 + 2 = 3 Should I use: string expression = "Expression: " + a + " + " + b + " = " + sum; or string expression = "Expression: " + a.ToString() + " + " + b.ToString() + " = " + result.ToString(); Is it recommended to use ToString() when concatenating string and int ? | ToString usage No, you shouldn't use ToString here. String concatenation automatically transforms non-strings into strings, which means that your two variants are nearly¹ identical: When one or both operands are of type string, the predefined addition operators concatenate the string representation of the operands. Source: C# Language Specification: Addition operator, MSDN . On the other hand, the first one (without ToString ): Is shorter to write, Is shorter to read, Is easier to maintain and: shows exactly the intention of the author: to concatenate strings. So prefer the first one. Under the hood What is also interesting is to see what happens under the hood. One of the ways to see it is to watch the IL code within LINQPad. This program: void Main()
{
var a = 3;
var b = " Hello";
var c = a + b;
Console.WriteLine(c);
} is translated to the following IL: IL_0001: ldc.i4.3
IL_0002: stloc.0 // a
IL_0003: ldstr " Hello"
IL_0008: stloc.1 // b
IL_0009: ldloc.0 // a
IL_000A: box System.Int32
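// the int is boxed here so it can be passed as an object to String.Concat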
IL_000F: ldloc.1 // b
IL_0010: call System.String.Concat
IL_0015: stloc.2 // c
IL_0016: ldloc.2 // c
IL_0017: call System.Console.WriteLine See that System.String.Concat ? That means that the original code can be written also like that, which translates into exactly same IL: void Main()
{
var a = 3;
var b = " Hello";
var c = string.Concat(a, b); // This is the line which was changed.
Console.WriteLine(c);
} When you read the documentation of string.Concat(object[]) , you may learn that: The method concatenates each object in args by calling the parameterless ToString method of that object; it does not add any delimiters. This means that ToString is redundant. Also: String.Empty is used in place of any null object in the array. Which handles nicely the case where some of the operands are null (see the footnote 1). While in the last example, concatenation was translated into string.Concat , one should also highlight compiler optimizations: var a = "Hello " + "World"; is translated into: ldstr "Hello World"
stloc.0 On the other hand: var a = string.Concat("Hello ", "World"); is translated into: ldstr "Hello "
ldstr "World"
call System.String.Concat
stloc.0 Other alternatives There are of course other ways to concatenate string representations of objects in C#. StringBuilder is used when you need to do a lot of concatenation operations and helps reducing the number of intermediary strings created. Deciding whether you should use a StringBuilder or an ordinary concatenation may not be easy. Use a profiler or search for relevant answers on Stack Overflow. Using StringBuilder has a major drawback of making the code difficult to read and maintain. For simple cases as the one in your question, StringBuilder is not only harmful to the readability of the code, but also useless in terms of performance. string.Join should be used when you need to add delimiters. Obviously, never use string.Join with an empty delimiter to concatenate strings. string.Format can be used when string templating is preferred to string concatenation. One of the cases where you may prefer that is when the message may be localized, as suggested in the answer by kunthet. Using string.Format has several drawbacks which makes it unsuited for simple cases like yours: With simple "{0}" placeholders, it is often unclear which parameter goes where. It is frequent to mistakenly reverse the parameters or to forget one. Luckily, C# 6 finally introduces string interpolation which solves this problem. Runtime performance may degrade. Of course, don't assume string.Format is always slower. If performance matters, measure two approaches and determine which one is faster based on your actual results instead of assumptions. The code is slightly longer to write, longer to read and harder to maintain, although this is extremely minor and shouldn't bother you too much. ¹ The difference appears when one of the objects is null . Without ToString , a null is replaced by an empty string. With ToString , a NullReferenceException is thrown. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/266369",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/160173/"
]
} |
266,574 | I suspect that I'm focusing on the wrong problem so I will first describe what I think the problem is before presenting the possibly suboptimal solution I envision. Current Situation: Currently my co-workers commit their code changes only after loooong periods of time in huge chunks with changes spreading all over the project(s). That's progress I guess, because not long ago they just put .zip archives on some network share. Still, merging is a nightmare - and frankly I've had enough of it. And I'm also tired of talking and explaining and begging. This just has to stop - without me constantly being "the bad guy". My solution: Since there seems to be no awareness of and/or no interest in the problems and I cannot expect any efforts to last longer than a few days ...strike that, hours, I'd like the subversion server to do the nagging. My question: Am I way off-base here or am I looking at the wrong problem? It seems like I'm missing something, and I think I'm asking the wrong thing by looking at tools to solve my problem. Should I be looking for a tool to solve this problem or what is it I should do to fix this? | You're looking for a technical solution to a human problem. That rarely works. The reason for that is because if team members do not accept something (nor understand the implications), instead of following the rules, they'll attempt to circumvent them. That's exactly why, for example, developers should accept and understand style rules instead of just being forced to comply by a checker. Here are a few approaches I either used in the past or have in mind without actually having an opportunity to them in practice. Some may not apply to your case, depending on the position you have in a team (if you're a team lead with excellent reputation, chances are you'll have a better opportunity to enforce your view than if you're an undergraduate who just joined a team for the duration of your internship). Discuss the issue with your coworkers and explain the consequences of large commits. Maybe they simply don't understand that complicated merges are a direct consequence of rare commits, and than small, frequent commits will make merges (relatively) easy. I knew a lot of programmers who were simply convinced that merges are always complicated. They did at most one commit per day, avoided to use powerful tools such as Visual Studio's diff and auto-merge, and had a poor practice of manual merging (unless “Take mine” with no further diff inspection is actually a good practice). For them, this had nothing to do with them , and was the inherent nature of a merge. Give concrete examples of what is happening in other companies (especially the ones your coworkers have a deep respect for). They may simply be unaware that it is an issue, and be convinced that a maximum one commit per day is what every team does. Some people are unaware that there are teams of 5-10 members who make up to 50 pushes to production, which translates into an average of 5-10 commits per day per person. They may not understand neither how is it possible, nor why would anyone do it. Lead by example. Do enough small commits yourself. If possible, do a short presentation showing their and your merges side by side over a week (I'm not sure if extracting this sort of information is easy from a version control). Emphasize on the eventual mistakes they've done during their merges and compare it to the number of errors you've done (which should be close to zero). Use the “I told you” technique, when appropriate . 
When you see your colleagues suffer over a painful merge, loudly comment that small, frequent commits could make merges (relatively) painless. Explain that there is no minimum duration to make a commit. A commit may even correspond to a minor change made in a few seconds. Renaming a file, removing an obsolete comment, correcting a typo are all tasks which can be committed immediately. Programmers shouldn't fear doing a tiny commit, but rather aggregating many changes into one huge commit. Work with individuals instead of a team, when appropriate. If there is a person who particularly refuses to do frequent, small commits, talk with this person individually to see why is he refusing it. They may give perfectly valid reasons which may give you the hint about what's going on with a team. Some reasons I've heard myself: “My teacher/mentor told me that the best practice is to do one commit per day.” It doesn't surprise me, given what I had to hear from my teachers in college . “My colleagues told me that I should make less commits.” I've been told that too in some teams, and I understand their point. We had a log which was practically filled with my commits (not hard to do when four teammates don't even do one commit per day), which frustrated my coworkers. “I thought small commits make it difficult to find a revision.” Somehow valid point, even when the team puts an effort into writing descriptive log messages. “I don't want to waste too much space on our version control server.” The person obviously doesn't understand how commits are stored (nor how cheap the storage space is). “I think a commit should correspond to a specific task.” Given that often, a task corresponds to some work to be done in one day (such as in visual management's task boards), this is not a coincidence. The person should then learn to make a difference between a task in a backlog (2 to 8 hours of work) and a logically isolated change which should be committed (a few seconds to a few hours of work). This is also related to point 5. Search for the reason the team is not doing commits more frequently. You may be surprised by the results. Recently, I mentioned in a different answer that speed of a commit matters, and even hundreds of milliseconds may push developers to commit on less frequent basis. Other reasons may include: Overly complicated rules to write a commit message. The rule which forces the developer to link the commit to a task from a bug tracking system. The fear of breaking the build. The unwillingness to deal with a risk of breaking the build right now: if you do a commit on Friday evening just before leaving, you can postpone dealing with broken build until Monday. The fear of doing a merge. Determine whether developers understand that there are other benefits to commit often . For example, a Continuous Integration platform is a huge incentive to have frequent commits , since it allows to pinpoint precisely where the regression was introduced . I would rather prefer CI platform telling me that I broke the build in the revision 5023 which consists of a change in two methods in one file I did fifteen minutes ago, rather than in the revision 5023 consisting of changes which span four dozen files and represent 13 hours of work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/266574",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/160432/"
]
} |
266,592 | Recently, I came across a problem that required me to define the logical "OR" operator programmatically, but without using the operator itself. What I came up with is this: OR(arg1, arg2)
if arg1 = True and arg2 = True
return True
else if arg1 = True and arg2 = False
return True
else if arg1 = False and arg2 = True
return True
else:
return False Is this logic correct, or did I miss something? | I'd say that's correct, but could you not condense it down to something such as this? or(arg1, arg2)
if arg1 == true
return true
if arg2 == true
return true
return false Since you're doing an or comparison, I don't think you really need to check the combination. It just matters if one of them is true to return true. Otherwise we want to return false. If you're looking for a shorter version that is less verbose, this will also work: or(arg1, arg2)
if arg1
return arg1
return arg2 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/266592",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/160470/"
]
} |
266,672 | Is it better to use List<string> in type annotations or StringList , where StringList is: class StringList : List<String> { /* no further code!*/ } I ran into several of these in Irony. | There is another reason why you may want to inherit from a generic type. Microsoft recommend avoiding nesting generic types in method signatures. It is a good idea, then, to create business-domain named types for some of your generics. Instead of having an IEnumerable<IDictionary<string, MyClass>> , create a type MyDictionary : IDictionary<string, MyClass> and then your member becomes an IEnumerable<MyDictionary> , which is much more readable. It also allows you to use MyDictionary in a number of places, increasing the explicitness of your code. This may not sound like a huge benefit, but in real-world business code I've seen generics that will make your hair stand on end. Things like List<Tuple<string, Dictionary<int, List<Dictionary<string, List<double>>>>>> . The meaning of that generic is in no way obvious. Yet, if it were constructed with a series of explicitly created types, the intention of the original developer would likely be a lot clearer. Further still, if that generic type needed to change for some reason, finding and replacing all instances of that generic type might be a real pain, compared to changing it in the derived class. Finally, there is another reason why someone might wish to create their own types derived from generics. Consider the following classes: public class MyClass
{
public ICollection<MyItems> MyItems { get; private set; }
// plumbing code here
}
public class MyOtherClass
{
public ICollection<MyItems> MyItemCache { get; private set; }
// plumbing code here
}
public class MyConsumerClass
{
public MyConsumerClass(ICollection<MyItems> myItems)
{
// use the collection
}
} Now how do we know which ICollection<MyItems> to pass into the constructor for MyConsumerClass? If we create derived classes class MyItems : ICollection<MyItems> and class MyItemCache : ICollection<MyItems> , MyConsumerClass can then be extremely specific about which of the two collections it actually wants. This then works very nicely with an IoC container, which can resolve the two collections very easily. It also allows the programmer to be more accurate - if it makes sense for MyConsumerClass to work with any ICollection, they can take the generic constructor. If, however, it never makes business sense to use one of the two derived types, the developer can restrict the constructor parameter to the specific type. In summary, it is often worthwhile to derive types from generic types - it allows for more explicit code where appropriate and helps readability and maintainability. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/266672",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15986/"
]
} |
266,717 | On a recent project, I needed to convert from bytes to kibibytes. The code was straightforward enough: var kBval = byteVal / 1024; After writing that, I got the rest of the function working & moved on. But later on, I started to wonder if I had just embedded a magic number within my code. Part of me says it was fine because the number is a fixed constant and should be readily understood. But another part of me thinks it would have been super clear if wrapped in a defined constant like BYTES_PER_KBYTE . So are numbers that are well known constants really all that magical or not? Related questions: When is a number a magic number? and Is every number in the code considered a "magic number"? - are similar, but are much broader questions than what I'm asking. My question is focused on well-known constant numbers, which are not addressed in those questions. Eliminating Magic Numbers: When is it time to say "No"? is also related, but is focused on refactoring as opposed to whether or not a constant number is a magic number. | Not all magic numbers are the same. I think in that instance, that constant is OK. The problem with magic numbers is when they are magic, i.e. it is unclear what their origin is, why the value is what it is, or whether the value is correct or not. Hiding 1024 behind BYTES_PER_KBYTE also means you don't see instantly whether it is correct or not. I would expect anyone to know immediately why the value is 1024. On the other hand, if you were converting bytes to megabytes, I would define the constant BYTES_PER_MBYTE or similar, because the constant 1,048,576 isn't so obvious that it's 1024^2, or that it's even correct. The same goes for values that are dictated by requirements or standards, that are only used in one place. I find just putting the constant right in place with a comment to the relevant source to be easier to deal with than defining it elsewhere and having to chase both parts down, e.g.: // Value must be less than 3.5 volts according to spec blah.
SomeTest = DataSample < 3.50 I find better than SomeTest = DataSample < SOME_THRESHOLD_VALUE Only when SOME_THRESHOLD_VALUE is used in multiple places does the tradeoff become worth it to define a constant, in my opinion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/266717",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
266,938 | I noticed that nearly every time I see programmers using static classes in object oriented languages such as C#, they are doing it wrong. The major problems are obviously the global state and the difficulty to swap implementations at runtime or with mocks/stubs during tests. By curiosity, I've looked at a few of my projects selecting ones which are well tested and when I made an effort to actually think about architecture and design. The two usages of static classes I've found are: The utility class—something I would rather avoid if I was writing this code today, The application cache: using static class for that is plainly wrong. What if I want to replace it by MemoryCache or Redis? Looking at .NET Framework, I don't see any example of a valid usage of static classes either. For example: File static class makes it really painful to switch to alternatives. For example, what if one needs to switch to Isolated Storage or store files directly in memory or needs a provider which can support NTFS transactions or at least be able to deal with paths longer than 259 characters ? Having a common interface and multiple implementations appears as easy as using File directly, with the benefit of not having to rewrite most of the code base when requirements change. Console static class makes testing overly complicated. How should I ensure within a unit test that a method outputs a correct data to console? Should I modify the method to send the output to a writeable Stream which is injected at run time? Seems that a non-static console class would be as simple as a static one, just without all the drawbacks of a static class, once again. Math static class is not a good candidate either; what if I need to switch to arbitrary precision arithmetic? Or to some newer, faster implementation? Or to a stub which will tell, during a unit test, that a given method of a Math class was indeed called during a test? On Programmer.SE, I've read: Don't Use “Static” in C#? Some answers are quite against static classes. Others assert that “Static methods are fine to use and have a rightful place in programming.”, but don't bake it with arguments. When to use a Singleton and when to use a static class . Utility classes are mentioned, given that I mentioned above that utility classes are problematic. Why and when should I make a class 'static'? What is the purpose of 'static' keyword on classes? The only valid usage which is mentioned is a container for extension methods. Great. Aside extension methods, what are the valid uses of static classes, i.e. cases where Dependency Injection or singletons are either impossible or will result in a lower quality design and harder extensibility and maintenance? | what if; what if; what if? YAGNI Seriously. If someone wants to use different implementations of File or Math or Console then they can go through the pain of abstracting that away. Have you seen/used Java?!? The Java standard libraries are a fine example of what happens when you abstract things for the sake of abstraction. Do I really need to go through 3 steps to build my Calendar object? All that said, I'm not going to sit here and defend static classes strongly. Static state is solidly evil (even if I still use it occasionally to pool/cache things). Static functions can be evil due to concurrency and testability issues. But pure static functions are not that terrible. There are elementary things that don't make sense to abstract out since there's no sane alternative for the operation. 
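As a small illustration of that last point (a rough sketch of my own in Python rather than C#, with made-up names; plain functions here stand in for the role C# static methods and delegates play): a pure, stateless helper has nothing worth abstracting away, and because it is pure it can simply be handed around where a strategy is needed.
# Hypothetical example: a pure helper with no state to inject or mock.
def clamp(value, low, high):
    return max(low, min(value, high))

# Because it is pure, it can also be passed around as a strategy,
# much like a C# static method passed as a delegate.
def normalize(readings, strategy):
    return [strategy(r, 0, 100) for r in readings]

print(normalize([-5, 42, 250], clamp))  # prints [0, 42, 100]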
There are implementations of a strategy that can be static because they're already abstracted via the delegate/functor. And sometimes these functions don't have an instance class to be associated with, so static classes are fine to help organize them. Also, extension methods are a practical use, even if they're not philosophically static classes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/266938",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
267,066 | Being either static or dynamic should be something completely supported in the language. Static vs dynamic shouldn't be something that you have to turn on and off, switching between basically 2 languages. I'm talking full blown mixed typing madness like this thing: class MyClass {
function myMethod(x: int, y: int){ # one parameter is typed, the other not
return <cast_to_int> y.return_integer(x); #casting to guarantee the contract
}
}
MyClass static_obj = MyClass(); #statically typed instantiation;
var dynamic_obj = {}; #dynamically typed instantiation
dynamic_obj.return_integer = (function (x) {return x*2}) #dynamic method creation
static_obj.myMethod(3, dynamic_obj) # This returns 6
dynamic_obj.return_integer = (function (x) {return x+1}); # dynamic member reassignment
static_obj.myMythod(3, dynamic_obj) # this returns 4
static_obj.myMethod('asdf', dynamic_obj) # This can't compile, because of typing
# and also this crazy thing; Notice that the static interface of the method is respected
if (<user input> == 2){
static_obj.myMethod = (function (x: int, y){return x + y.yet_some_other_method(x);})
}
# but when the static contract is not respected, it doesn't compile
static_obj.myMethod = (function (a, b, c) {return 'asdf'}) # Compile time error I'm interested in this topic because I really like that API discoverability in static languages is great (thing i liked about Java, C# and ABAP for a while), but I also like testing stuff at run time, monkey patching, REPLs, reassignment, and crazy reflection magic (which I liked about JavaScript and Python). I'm aware of Dart's static type checker, but that's not enough, because it provides no actual static guarantees. I'm also not interested in type inferrence. I want true static running next to true dynamic behavior. I also am aware that I can use mapping types like python's dict , or any JS object, or Java's HashMap) to pass around data (or even functions), but Java's mappings can't carry methods in them, Python's dicts can't (out of the box) have methods in them, and even though JS simply laughs at any kind of static restrictions - adding methods at runtime that DO get access via the this keyword to the context object is something really cool (I know Python does it too, when adding functions on the class object, but NOT on class instances). My curiosity was also sparked by this paper: Static typing where possible, dynamic typing where needed And the API discoverability argument came from my own experience, but this paper seems to support it: http://personales.dcc.uchile.cl/~rrobbes/p/ICPC2014-idetypes.pdf | Yes. C# as of 4.0 is probably the most common example, with the dynamic keyword. There are likely other good examples that I am forgetting or don't know about. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267066",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46878/"
]
} |
267,086 | Sometimes while programming in different languages (C/C++, C#), this thought comes to my mind: Is each and every language written in the C programming language? Is the C language the mother/father of all languages? Is each concept ( OOP , etc.) all implemented in C? Am I in right direction? | No. OCaml, Haskell, Lisp dialects like Scheme, and several other languages are often used in the development of hobby languages. Many languages are implemented in C because it's a ubiquitous language, and compiler-writing tools like lexer-parser generators (such as yacc and bison) are well-understood and almost as ubiquitous. But C itself couldn't originally be developed in C when it was first created. It was, in fact, originally developed using the B language. Earlier languages (like Fortran) were usually bootstrapped using a native assembly language or even machine code long before C ever existed. Unrelatedly, language paradigms like OOP are generally language-agnostic. The functional paradigm, for example, was developed (by Alonzo Church) as a foundation of mathematics long before any programming language ever existed. The procedural and structured programming paradigms came out of the mathematical work of theorists like John von Neumann. Object-orientation was developed by several different and unrelated efforts, some out of the lambda calculus (the functional paradigm) and some out of dynamic programming systems like SmallTalk at Xerox PARC by Alan Kay. C is merely a tiny part of the story, decades after these ideas came into being. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267086",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161073/"
]
} |
267,170 | Recently I was assigned the task of creating a calculator with the functions addition, subtraction, multiplication, division and power using Object Oriented Programming. I successfully completed this task. However, afterwards I reprogrammed the whole program without using any Object Oriented technique/method. The thing I noticed was that my code length was reduced quite significantly and it was quite understandable. So my question here is: when should I use Object Oriented Programming? Is it okay to create programs without Object Orientation in an Object Oriented Language? Kindly help me out on this one. PS: I am not asking here to tell me the merits of Object Oriented Programming over procedural programming. | C++ is not "just" an OO language, it is a multi-paradigm language. So it allows you to decide for or against OO programming, or to mix them both. Using OO techniques adds more structure to your program - from which you will benefit when your program reaches a certain size or complexity. This additional structure, however, comes at the price of additional lines of code. So for "small" programs the best way to go is often to implement them in a non-OO style first, and add more and more OO techniques later when the program starts to grow over the years (which will probably not happen to "exercise" programs, but is the typical life cycle for lots of business programs). The tricky part is not to miss the point in time when the complexity of your program reaches the size which demands more structure. Otherwise you will end up with a huge pile of spaghetti code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267170",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161073/"
]
} |
267,193 | Consider the following "C" code: #include<stdio.h>
main()
{
printf("func:%d",Func_i());
}
Func_i()
{
int i=3;
return i;
} Func_i() is defined at the end of the source code and no declaration is provide before its use in main() . At the very time when the compiler sees Func_i() in main() , it comes out of the main() and finds out Func_i() . The compiler somehow finds the value returned by Func_i() and gives it to printf() . I also know that the compiler cannot find the return type of Func_i() . It, by default takes(guesses?) the return type of Func_i() to be int . That is if the code had float Func_i() then the compiler would give the error: Conflicting types for Func_i() . From the above discussion we see that: The compiler can find the value returned by Func_i() . If the compiler can find the value returned by Func_i() by coming out of the main() and searching down the source code, then why can't it find the type of Func_i(), which is explicitly mentioned. The compiler must know that Func_i() is of type float--that's why it gives the error of conflicting types. If the compiler knows that Func_i is of type float, then why does it still assume Func_i() to be of type int, and gives the error of conflicting types? Why don't it forcefully make Func_i() to be of type float. I've the same doubt with the variable declaration . Consider the following "C" code: #include<stdio.h>
main()
{
/* [extern int Data_i;]--omitted the declaration */
printf("func:%d and Var:%d",Func_i(),Data_i);
}
Func_i()
{
int i=3;
return i;
}
int Data_i=4; The compiler gives the error: 'Data_i' undeclared(first use in this function). When the compiler sees Func_i() , it goes down to the source code to find the value returned by Func_(). Why can't the compiler do the same for the variable Data_i? Edit: I don't know the details of the inner working of compiler, assembler, processor etc. The basic idea of my question is that if I tell(write) the return-value of the function in the source code at last, after the use of that function then the "C" language allows the computer to find that value without giving any error. Now why can't the computer find the type similarly. Why can't the type of Data_i be found as Func_i()'s return value was found. Even if I use the extern data-type identifier; statement, I am not telling the value to be returned by that identifier(function/variable). If the computer can find that value then why can't it find the type. Why do we need the forward declaration at all? Thank you. | Because C is a single-pass , statically-typed , weakly-typed , compiled language. Single-pass means the compiler does not look ahead to see the definition of a function or variable. Since the compiler does not look ahead, the declaration of a function must come before the use of the function, otherwise the compiler does not know what its type signature is. However, the definition of the function can be later on in the same file, or even in a different file altogether. See point #4. The only exception is the historical artifact that undeclared functions and variables are presumed to be of type "int". Modern practice is to avoid implicit typing by always declaring functions and variables explicitly. Statically-typed means that all type information is computed at compile time. That information is then used to generate machine code that executes at run time. There is no concept in C of run-time typing. Once an int, always an int, once a float, always a float. However, that fact is somewhat obscured by the next point. Weakly-typed means that the C compiler automatically generates code to convert between numeric types without requiring the programmer to explicitly specify the conversion operations. Because of static typing, the same conversion will always be carried out in the same way each time through the program. If a float value is converted to an int value at a given spot in the code, a float value will always be converted to an int value at that spot in the code. This cannot be changed at run-time. The value itself may change from one execution of the program to the next, of course, and conditional statements may change which sections of code are run in what order, but a given single section of code without function calls or conditionals will always perform the exact same operations whenever it is run. Compiled means that the process of analyzing the human-readable source code and transforming it into machine-readable instructions is fully carried out before the program runs. When the compiler is compiling a function, it has no knowledge of what it will encounter further down in a given source file. However, once compilation (and assembly, linking, etc) have completed, each function in the finished executable contains numeric pointers to the functions that it will call when it is run. That is why main() can call a function further down in the source file. By the time main() is actually run, it will contain a pointer to the address of Func_i(). Machine code is very, very specific. 
The code for adding two integers (3 + 2) is different from the one for adding two floats (3.0 + 2.0). Those are both different from adding an int to a float (3 + 2.0), and so on. The compiler determines for every point in a function what exact operation needs to be carried out at that point, and generates code that carries out that exact operation. Once that has been done, it cannot be changed without recompiling the function. Putting all these concepts together, the reason that main() cannot "see" further down to determine the type of Func_i() is that type analysis occurs at the very beginning of the compilation process. At that point, only the part of the source file up to the definition of main() has been read and analyzed, and the definition of Func_i() is not yet known to the compiler. The reason that main() can "see" where Func_i() is to call it is that calling happens at run time, after compilation has already resolved all of the names and types of all of the identifiers, assembly has already converted all of the functions to machine code, and linking has already inserted the correct address of each function in each place it is called. I have, of course, left out most of the gory details. The actual process is much, much more complicated. I hope that I have provided enough of a high-level overview to answer your questions. Additionally, please remember, what I have written above specifically applies to C. In other languages, the compiler may make multiple passes through the source code, and so the compiler could pick up the definition of Func_i() without it being predeclared. In other languages, functions and / or variables may be dynamically typed, so a single variable could hold, or a single function could be passed or return, an integer, a float, a string, an array, or an object at different times. In other languages, typing may be stronger, requiring conversion from floating-point to integer to be explicitly specified. In yet other languages, typing may be weaker, allowing conversion from the string "3.0" to the float 3.0 to the integer 3 to be carried out automatically. And in other languages, code may be interpreted one line at a time, or compiled to byte-code and then interpreted, or just-in-time compiled, or put through a wide variety of other execution schemes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267193",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/106313/"
]
} |
267,406 | Given are two sorted arrays a , b of type T with size n and m . I am looking for an algorithm that merges the two arrays into a new array (of maximum size n+m). If you have a cheap comparison operation, this is pretty simple. Just take from the array with the lowest first element until one or both arrays are completely traversed, then add the remaining elements. Something like this https://stackoverflow.com/questions/5958169/how-to-merge-two-sorted-arrays-into-a-sorted-array However, the situation changes when comparing two elements is much more expensive than copying an element from the source array to the target array . For example you might have an array of large arbitrary precision integers, or strings, where a comparison can be quite expensive. Just assume that creating arrays and copying elements is free, and the only thing that costs is comparing elements. In this case, you want to merge the two arrays with a minimum number of element comparisons . Here are some examples where you should be able to do much better than the simple merge algorithm: a = [1,2,3,4, ... 1000]
b = [1001,1002,1003,1004, ... 2000] Or a = [1,2,3,4, ... 1000]
b = [0,100,200, ... 1000] There are some cases where the simple merge algorithm will be optimal, like a = [1,3,5,7,9,....,999]
b = [2,4,6,8,10,....,1000] So the algorithm should ideally gracefully degrade and perform a maximum of n+m-1 comparisons in case the arrays are interleaved, or at least not be significantly worse. One thing that should do pretty well for lists with a large size difference would be to use binary search to insert the elements of the smaller array into the larger array. But that won't degrade gracefully in case both lists are of the same size and interleaved. The only thing available for the elements is a (total) ordering function, so any scheme that makes comparisons cheaper is not possible. Any ideas? I have come up with this bit in Scala. I believe it is optimal regarding the number of comparisons, but it is beyond my ability to prove it. At least it is much simpler than the things I have found in the literature. And since the original posting, I wrote a blog post about how this works. | The normal merge sort algorithm's merge step will normally apply n + m - 1 comparisons, where one list is of size n and the other list is of size m. Using this algorithm is the simplest approach to combining two sorted lists. If the comparisons are too expensive you could do two things - either you minimize the number of comparisons or you minimize the cost of comparisons. Let's focus on minimizing the cost of a comparison. You and only you can decide whether the data you are comparing can be quantized or not. If you can quantize it, that is a form of hashing which keeps the ordering. E.g. if your data is compared by name, you can take the first two chars of the name "Klaehn, Ruediger" and reduce/quantize your data element to "Kl.Ru"; if you compare it to "Packer, The" you preserve the ordering with "Pa.Th" - you can now apply a cheaper comparison algorithm, comparing the reduced values. But if you find another "Kl.Ru", you now have a near value, and you might then switch to the more expensive approach of comparing these elements in full. If you can extract this quantized value from your data faster than comparing two elements, this is the first thing to do: compare the quantized or hashed value first. Please keep in mind that this value needs to be computed only once, so you can compute it when creating the data element. The other option I mentioned is to minimize the number of comparisons. I had a look into the classic book TAOCP - Volume 3 - Sorting and Searching (pp. 197-207, section 5.3.2), which has a full 10 pages on this topic. I found two references to algorithms which are faster than n+m-1 comparisons. First there is the Hwang-Lin merge algorithm, and second an improvement by Glenn K. Manacher - both are cited by TAOCP, as well as an algorithm by Christen, which approaches the lower bound of needed comparisons under special conditions on the lengths n and m of the lists. The algorithm of Manacher was presented in the Journal of the ACM Vol. 26 Number 3, pages 434-440: "Significant Improvements to the Hwang-Lin Merging Algorithm". The list with m items and the list with n items can be of different lengths, but they must also be ordered by the number of elements they contain: m <= n. The Hwang-Lin algorithm breaks the lists to be merged apart into smaller sublists, and sorts them by comparing the first element of each sublist to decide whether some elements in a sublist need to be compared at all. If the first list is smaller than the second list, then the chance is high that consecutive elements of the longer list can be transferred into the resulting list without comparison.
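To make that block-copy idea concrete, here is a rough sketch of my own in Python (an illustration only, not the actual Hwang-Lin procedure; the function name and the example lists are made up). Python's bisect_left performs the binary search, so moving a block of k elements costs on the order of log2 of the remaining length in comparisons instead of k:
from bisect import bisect_left

def merge_with_fewer_comparisons(shorter, longer):
    # Illustrative sketch: both inputs are assumed to be sorted already.
    # Runs of `longer` are located with a binary search and copied as a block,
    # so no per-element comparison is paid for the copied elements.
    result, i = [], 0
    for x in shorter:
        j = bisect_left(longer, x, i)   # about log2(len(longer) - i) comparisons
        result.extend(longer[i:j])      # block copy, no further comparisons
        result.append(x)
        i = j
    result.extend(longer[i:])
    return result

print(merge_with_fewer_comparisons([5, 50], [1, 2, 3, 4, 100, 200]))
# prints [1, 2, 3, 4, 5, 50, 100, 200]
Returning to the Hwang-Lin description: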
If the first element of the smaller list is greater than the first element of the split larger list, all elements in front of that sublist can be copied without comparison. In Average case analysis of the merging algorithm of Hwang and Lin (Vega, Frieze, Santha), Section 2, you can find pseudocode for the HL-algorithm, which is a lot better than my description. And you can see why there are fewer comparisons - the algorithm uses a binary search to find the index where to insert the element from the shorter list. If the lists are not interleaved like in your last example, you should have a remaining smaller and a remaining larger list in most cases. This is when the HL-algorithm starts to perform better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267406",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161492/"
]
} |
267,420 | In my school management system I have this partial class diagram: 1- In one use case student want to see his schedule of classes in week. this is what i suppose to do: get student object from session and call method getSchedule() on that and it will give me schedule(this method calls StudentCourse , Course for getting schedule). Is this good place to put getSchedule() method or I should place it elsewhere? .................................................. 2- In another use class student's parent want to see his child class schedule, I plan to do something like this: Because I have some use cases that parent want to see some other student information(course mark...), I create interface StudentParentInterface that have method getSchedule() and other methods and student implement StudentParentInterface , then parent has reference to StudentParentInterface not student obj directly. parent call getSchedule() method on StudentParentInterface , is this correct? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267420",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/150418/"
]
} |
267,576 | My question is related to MVC design pattern and Razor Syntax introduced by Microsoft. While learning MVC design pattern I was told that the idea is based upon a principle known as Separation of Concerns . But Razor Syntax allows us to use C# in Views directly. Isn't this intersection of concerns? | You are conflating the Razor syntax with separation of concerns. Separation of concerns has to do with how you structure your code. Being able to use C# in views doesn't prevent that. It has nothing to do with separation of concerns as such. Sure, you can structure the code in your view to not comply with separation of concerns, but what about C# code that is used for display purposes only? Where would that live? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267576",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161664/"
]
} |
267,683 | It's drilled into the newbie Java programmers that Java ( pre-Java 8 ) has no multiple class inheritance, and only multiple interface inheritance, because otherwise you run into diamond inheritance problem ( Class A inherits from classes B and C, both of which implement method X. So which of those classes' implementation is used when you do a.X() call? ) Clearly, C++ successfully addressed this, since it has multiple inheritance. What's different between the internal design of Java and C++ - (i'm guessing in the method dispatch methodology, and vaguely recall that it had to do with vtables from my long-ago computer languages class) that allows this problem to be solved in C++ but prevents in Java? To rephrase: What is the methodology of the language design/implementation used in C++ to avoid the ambiguity of diamond inheritance problem, and why couldn't the same methodology be used in Java ? | There's nothing fundamental in the internal design which causes this. The lack of multiple inheritance is a deliberate design decision in Java, not an external manifestation of a shortcoming in the internal design. (I'm deliberately avoiding getting into the flame war as to whether multiple inheritance is a good idea or not). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267683",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12402/"
]
} |
267,752 | I have been passing callbacks or just triggering functions from other functions in my programs to make things happen once tasks complete. When something finishes, I trigger the function directly: var ground = 'clean';
function shovelSnow(){
console.log("Cleaning Snow");
ground = 'clean';
}
function makeItSnow(){
console.log("It's snowing");
ground = 'snowy';
shovelSnow();
} But I've read about many different strategies in programming, and one that I understand to be powerful, but have not yet practiced, is event-based (I think a method I read about was called "pub-sub" ): var ground = 'clean';
function shovelSnow(){
console.log("Cleaning Snow");
ground = 'clean';
}
function makeItSnow(){
console.log("It's snowing");
ground = 'snowy';
$(document).trigger('snow');
}
$(document).bind('snow', shovelSnow); I'd like to understand the objective strengths and weaknesses of event-based programming, vs just calling all of your functions from within other functions. In which programming situations does event-based programming make sense to use? | An event is a notification describing an occurrence from the recent past. A typical implementation of a event-driven system utilises an event dispatcher and handler functions (or subscribers ). The dispatcher provides an API to wire handlers up to events (jQuery's bind ), and a method to publish an event to its subscribers ( trigger in jQuery). When you're talking about IO or UI events, there's also usually an event loop , which detects new events such as mouse-clicks and passes them to the dispatcher. In JS-land, the dispatcher and event loop are provided by the browser. For code that interacts directly with the user - responding to keypresses and clicks - event-driven programming (or a variation thereof, such as functional reactive programming ) is almost unavoidable. You, the programmer, have no idea when or where the user is going to click, so it's down to the GUI framework or browser to detect the user's action in its event loop and notify your code. This type of infrastructure is also used in networking applications (cf NodeJS). Your example, wherein you raise an event in your code rather than calling a function directly has some more interesting tradeoffs, which I will discuss below. The main difference is that the publisher of an event ( makeItSnow ) does not specify the receiver of the call; that's wired up elsewhere (in the call to bind in your example). This is called fire-and-forget : makeItSnow announces to the world that it's snowing, but it doesn't care who's listening, what happens next, or when it happens - it simply broadcasts the message and dusts off its hands. So the event-driven approach decouples the sender of the message from the receiver. One advantage this affords you is that a given event may have multiple handlers. You could bind a gritRoads function to your snow event without affecting the existing shovelSnow handler. You have flexibility in the way your application is wired up; to turn off a behaviour you just need to remove the bind call rather than go hunting through the code to find all the instances of the behaviour. Another advantage of event-driven programming is that it gives you somewhere to put cross-cutting concerns. The event dispatcher plays the role of Mediator , and some libraries (such as Brighter ) utilise a pipeline so that you can easily plug-in generic requirements such as logging or quality-of-service. Full disclosure: Brighter is developed at Huddle, where I work. A third advantage of decoupling the sender of an event from the receiver is that it gives you flexibility in when you handle the event. You could process each type of event on its own thread (if your event dispatcher supports it), or you can put raised events onto a message broker such as RabbitMQ and handle them with an asynchronous process or even process them in bulk overnight. The receiver of the event could be in a separate process or on a separate machine. You don't have to change the code which raises the event to do this! This is the Big Idea behind "microservice" architectures: autonomous services communicate using events, with messaging middleware as the backbone of the application. 
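To show how little machinery that actually requires, here is a rough sketch of such a dispatcher (my own minimal illustration in Python, not any particular library; the class and event names are made up). Note that the publisher never names its receivers, and that several handlers can be bound to the same event - the gritRoads idea from above:
class EventDispatcher:
    # Minimal illustration: handlers are stored per event name and invoked
    # in subscription order when the event is triggered.
    def __init__(self):
        self._handlers = {}

    def bind(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def trigger(self, event, *args):
        for handler in self._handlers.get(event, []):
            handler(*args)

dispatcher = EventDispatcher()
dispatcher.bind("snow", lambda: print("Cleaning snow"))
dispatcher.bind("snow", lambda: print("Gritting roads"))

def make_it_snow():
    # Fire and forget: the publisher has no idea who, if anyone, is listening.
    dispatcher.trigger("snow")

make_it_snow()  # prints "Cleaning snow" then "Gritting roads"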
For a rather different example of event-driven style, look to domain-driven design, where domain events are used to help keep aggregates separate. For example, consider an online store which recommends products based on your purchase history. A Customer needs to have its purchase history updated when a ShoppingCart is paid for. The ShoppingCart aggregate might notify the Customer by raising a CheckoutCompleted event; the Customer would get updated in a separate transaction in response to the event. The main downside of this event-driven model is indirection. It's now harder to find the code which handles the event because you can't just navigate to it using your IDE; you have to figure out where the event is bound in the configuration and hope that you've found all the handlers. There's more stuff to keep in your head at any one time. Code style conventions can help here (for example, putting all the calls to bind in one file). For the sake of your sanity, it's important to only use one event dispatcher and to use it consistently. Another disadvantage is that it's difficult to refactor events. If you need to change the format of an event you also need to change all the receivers. This is exacerbated when the subscribers of an event are on different machines, because now you need to synchronise software releases! In certain circumstances, performance may be a concern. When processing a message, the dispatcher has to: Look up the correct handlers in some data structure. Build a message processing pipeline for each handler. This may involve a bunch of memory allocations. Dynamically call the handlers (possibly using reflection if the language requires it). This is certainly slower than a regular function call, which only involves pushing a new frame on the stack. However, the flexibility that an event-driven architecture affords you makes it much easier to isolate and optimise slow code. Having the ability to submit work to an asynchronous processor is a big win here, as it allows you to serve a request immediately while the hard work is dealt with in the background. In any case, if you're interacting with the DB or drawing stuff on the screen then the costs of IO will totally swamp the costs of processing a message. It's a case of avoiding premature optimisation. In summary, events are a great way to build loosely coupled software, but they are not without cost. It would be a mistake, for example, to replace every function call in your application with an event. Use events to make meaningful architectural divisions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267752",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100972/"
]
} |
267,846 | In an example: var assets = "images/"
var sounds = assets+"sounds/" Is it more conventional to put the slash on the back of a file path? var assets = "/images"
var sounds = assets+"/sounds" Is there another method that is a good common practice? | Nearly every major programming language has a library to handle the directory separators for you. You should leverage them. This will simplify your code and prevent bugs . In my experience, the usual reason for combining strings like this is that they come from different sources. Sometimes it's different pieces from a configuration file. Sometimes it's a constant combining with a function argument. In any and all cases, when they come from different sources, you have to consider several different possible cases regarding the separators on the ends to be combined: Both ends could have a separator: "images/" and "/sounds" Only one has a separator: "images" and "/sounds" or "images/" and "sounds" Neither has a separator: "images" and "sounds" The fact each part comes from a different source means each source might have its own ideas about what conventions to follow, if someone gave any thought to it at all! Whatever is calling your code should not have to worry about this . Your code should handle all cases because someone will violate your convention . This will result in wasted time investigating the cause of an error and making a fix. I have had several unpleasant occasions where a coworker made an assumption about how paths should be formatted in a configuration file, meaning I had to go hunt down the code and figure out what they were expecting (or fix the code). Most major languages provide a method to do this for you that already handles many of the cases: os.path.join for Python File.join for Ruby Path.join for Node.js Paths.get for Java (7 and up) Path.Combine for .NET filesystem::path.operator+ for C++17 There is a caveat with these. A number of these seem to assume that a leading directory separator in the second argument refers to a root path and that this means the first argument should be dropped entirely. I don't know why this is considered useful; for me, it just causes problems. I've never wanted to combine two path portions and end up with the first part being dropped. Read the documentation carefully for special cases, and if necessary, write a wrapper that does what you want with these instead of their special handling. This additionally helps if you have any need for supporting different operating systems. These classes almost ubiquitously account for choosing the correct separator. The libraries usually have a way of normalizing paths to fit the OS conventions, as well. In the event that your programming language does not have a readily available library, you should write a method that handles all these cases and use it liberally and across projects. This falls into the category of "don't make assumptions" and "use tools that help you." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267846",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/162079/"
]
} |
267,858 | In projects which need heavy SDK usage, like Android and iOS development, I want the methods/functions I write to be obvious and since the whole framework is written with lower case method names I would like to have my method names start with upper case. However is this a good approach if there are going to be other programmers included in the project? Would this create mental overhead & confusion or would it really simplify things? {
didReceiveMemoryWarning(); // obviously a framework method
MemoryWarning(); // obviously custom method
item.height(); // obviously a framework method
item.SeventhDimensionOfItem(); // obviously custom extension
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267858",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/118745/"
]
} |
267,907 | I've heard people talk about business logic a lot at work, and online, and I've read several questions on this site about it, but the term still doesn't make a lot of sense to me. For example, here are some (paraphrased) statements I often see: "The business logic is the part of your program that encodes the actual business rules." Most of the definitions I've read are circular ones like this. "Business logic is everything unique to your particular application." I don't see how this is different from "your particular application is nothing but business logic", unless we accidentally reinvented a bunch of wheels we could have used existing 3rd party software for. Hence the question title. "There should be a Business Logic Layer above your Data Access Layer and below your GUI Layer." In the code I write, the database accessors have to know what data they're supposed to be accessing, and the UI code has to know a lot about what it's displaying in order to display it correctly, and there's nothing to really do in between those two places other than passing blobs of data between client and server. So what's actually supposed to go into a Business Logic Layer? "Business logic should be separate from presentation logic." Most of the feature requests we get are to change the presentation logic for business reasons. If one of the business rules is to display US government bond prices in 32nds notation by default (while also providing a UI for the user to configure that), the presentation logic needs to at least know this rule exists, if not fully implement it. Also, it seems like a major part of UX design is helping the user understand the business rules our software is trying to implement. Is it possible that I actually am on a team that only does business logic, and all the non-business logic is being done by other teams? Or is the whole concept of "business logic" as a separate entity only workable for certain applications or architectures? To help make the answers concrete: Pretend you have to reimplement the Domino's Pizza app. What is the business logic, and what is the non-business logic of that app? And how would it be possible to put that pizza-ordering business logic in its own "layer" of code, without most of the pizza information bleeding into the data access and presentation layers? Update: I've come to the conclusion that my team is probably doing 90% UI code and most--but not all--of what you'd call business logic comes from other teams or companies. Basically, our application is for monitoring financial data, and almost all of the features are ways for the user to customize what data they see and how they see it. There is no buying or selling going on (though we integrate a bit with other apps from our company that do that), and the actual data is supplied by loads of external sources. But we do allow users to do things like send copies of their "monitors" to other users, so the details of how we handle that probably qualify as business logic. There actually is a mobile app that currently talks to some of our backend code, and I know exactly what portion of our frontend code I would like it to share with our UI in an ideal world (basically the M in our quasi-MVC) so I'm guessing that's the BLL for us. I'm accepting user61852's answer since it gave me a much more concrete understanding of what "business logic" does and doesn't refer to. 
| I'll give you some tips regarding CRUD applications, since I don't have much experience in games or graphically intensive apps: Business logic usually involves rules the owner of the business has learned or decided over years of operation, like for example: "reject any new credit if the client hasn't yet finished paying the last one" , or "we don't sell breakfast past 11 am" , or "mondays and tuesdays, customers can buy two pizzas for the price of one" . Of course the presentation layer must show a message indicating the availability of a discount, or the reason for a credit being rejected, but that layer is not deciding those things; it's only communicating to the user something that happened under the hood. Business logic usually involves a workflow , for example: "this item must be approved within 3 working days by some manager or put into a 'request for information' stage; if the customer hasn't submitted the required documents, then the item is rejected" . The presentation layer usually doesn't deal with that kind of workflow; it only reflects the states of the workflow. Also, the data access layer is usually straightforward, because decisions have already been made by the business logic. This layer is affected when you decide to migrate your data from MS SQL Server to Oracle, for example. It's true that sometimes the GUI does some validation to avoid calls to the server, but that is something that should be done judiciously or you could have a lot of business logic in that layer. Much of your confusion may have arisen from the fact that in your application there's no separation of concerns and effectively you have too much business logic in the presentation layer. The fact that you (wrongly) have business logic in your presentation layer or data access layer doesn't change the fact that it's business logic nonetheless. Things like displaying distances in the metric system instead of miles/yards/feet are not presentation logic; they're business logic . The business layer has to return data in the required units and tell the presentation layer what units it's handling for it to show the appropriate labels, but it's definitely a business logic concern. Business logic shouldn't be affected by the fact that you are using Oracle now instead of Postgres, or by the fact that the company changed its logo or style sheet. Business rules exist whether or not you automate them by writing an app. They can be enforced even when the business uses a low tech solution like pen and paper. If you have a mobile version of your desktop app, or a web version , each one of those versions has a different presentation layer , but (hopefully) the same business layer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267907",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161917/"
]
} |
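A minimal, hypothetical C++ sketch of the kind of business rule described in the answer above ("reject any new credit if the client hasn't finished paying the last one"); the Customer and CreditDecision types and their fields are invented purely for illustration and are not from the original question:

#include <cstdio>
#include <string>

// Business layer: encodes the owner's rule. No SQL, no widgets, no HTTP.
struct Customer {
    std::string name;
    bool hasUnpaidCredit;   // assumed field, for illustration only
};

struct CreditDecision {
    bool approved;
    std::string reason;     // the presentation layer decides *how* to show this
};

CreditDecision evaluateCreditRequest(const Customer& c) {
    if (c.hasUnpaidCredit)
        return {false, "previous credit not fully paid"};
    return {true, ""};
}

int main() {
    Customer c{"Alice", true};
    CreditDecision d = evaluateCreditRequest(c);
    // Presentation layer: only reports what the business layer decided.
    std::printf("approved: %s (%s)\n", d.approved ? "yes" : "no", d.reason.c_str());
}

The point of the sketch is that swapping the database (data access layer) or the UI (presentation layer) would not touch evaluateCreditRequest at all.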
267,918 | In my short time programming, it has been trivial to compile any of my C++, Java, etc. for either a 32- or 64-bit machine so long as I have the full source for the program. But a lot of software is not released as 64-bit.
Most annoyingly, there isn't yet a 64-bit release of the Unity engine. What makes it difficult to compile some programs for 64-bit machines? | The general problem is that it's very easy to encode undocumented assumptions in a program, and very hard to find the places where those assumptions were made. High-level languages tend to insulate us from these concerns somewhat, but in lower-level languages used for implementing platforms and services, it's easy to do things that are not necessarily portable across architectures: assuming that int is large enough to store a pointer; assuming properties of the representation of pointers, e.g., for pointer tagging; assuming that data pointers and code pointers have the same size (a small illustration of the first assumption follows this entry). There is also the practical concern of release management. If I only make an x86 build, it will still run on x86-64, albeit perhaps more slowly due to the limited availability of registers. Whereas if I build for both x86 and x86-64, I must now test on both architectures and deal with bugs that may only arise on one architecture, increasing the cost of shipping a new release. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267918",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/113314/"
]
} |
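A hedged, standalone C++ illustration of the "int is large enough to store a pointer" assumption from the answer above; the program and its printed output are invented for this sketch and are not part of the original post:

#include <cstdio>
#include <cstdint>

int main() {
    int x = 42;

    // On a typical 32-bit target, int and void* are both 4 bytes, so code that
    // stuffs a pointer into an int appears to work. On x86-64, void* is 8 bytes
    // while int is usually still 4, so such a cast silently truncates the address.
    std::printf("sizeof(int) = %zu, sizeof(void*) = %zu\n",
                sizeof(int), sizeof(void*));

    // int bad = (int)&x;   // the non-portable habit: loses the upper bits on 64-bit

    // Portable alternative: an integer type defined to be wide enough for a pointer.
    std::intptr_t p = reinterpret_cast<std::intptr_t>(&x);
    std::printf("round-trip through intptr_t: %d\n", *reinterpret_cast<int*>(p));
    return 0;
}

Compiling the same file with -m32 and -m64 (where the toolchain supports both) shows the two size pairs the answer is talking about.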
267,981 | The title says it all: the more research I do about piracy and the tools used to crack an app, the more I think it is just wasted time. My biggest worry is that my app will be cracked and re-uploaded within minutes after I release it. But I'm not sure how harmful that will be to my revenue, if there even will be revenue. Maybe someone can share some experience; it's the first app I'm going to publish and I have no experience with this. My app is going to be free with ads. I think this is another line of defense because, in my opinion, the main focus of crackers is on paid apps, since those are not available in a lot of countries. My current approach is to just use ProGuard - not really to make the app harder to crack (or even harder to understand), because you don't even need to understand the code if you simply re-sign it and upload the same application to the Play Store; you just need to change the package name. My main reason for using ProGuard is simply to shrink the app. Is this a "good" approach? | No. Most applications from large developers with real, industrial-grade copy protection appear in torrents, cracked, within days of release. It is extremely doubtful that a smaller developer can match that. Trying to will just waste your time, leaving less time for you to develop features/apps that make money. You may want to do trivial work to keep "casual" copiers at bay, but by trivial, I mean something you can throw together in a few hours (i.e., something like ProGuard). You will not stop the people with technical know-how who actively want to crack your app. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/267981",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157863/"
]
} |
268,066 | As they are often classified at the school/college level, popular programming languages (C#, Java, C++) are all 3rd-generation languages (with a higher level of abstraction from the machine's physical parts). Also, assembly languages are classified as 2nd- and machine languages as 1st-generation languages. Initially I was thinking SQL should be considered a 4th-generation language because it is more abstract, farther away from the details of looping, and more descriptive. Just now I found out there are 4th-generation and 5th-generation languages, but what is the basis for any programming language or technology to be categorized as a 4th- or 5th-generation language? Also, are there 6th- or 7th-generation languages as well? | “nth-generation language” is a buzzword. It is a marketing term. There is no universally accepted definition of what exactly defines the “nth generation” for n > 2. Some people categorize “scripting” languages such as Perl or Python as 4GLs because they are much more high-level than C, while others think the defining characteristic of 4GLs is that they're domain-specific, e.g. SQL. Some nitwits even think that Java (a mid-90s language full of object orientation and garbage collection and reflection) belongs in the same “3GL” category as Fortran (from the 50s) and C (from the 70s). A categorization as confused as “4th generation language” is of no use. You may see it in old textbooks, or hear it from people who started programming in the 80s, but a tag such as “4GL” is worthless without an accompanying explanation of what exactly the author means by it. Since no one immediately understands what you mean by “4GL”, you should not use such categorizations. Instead, use specific terms to communicate precisely what you mean. E.g. all of NASM, LLVM IR, and Jasmin are assembly languages, but the latter two target VMs, and the last one is also an object-oriented programming language. Are all of those 2GLs? SQL is a partially declarative, domain-specific language for database queries from the 80s. And TeX is a domain-specific language for typesetting from the 70s. Are they both 4GLs since they are both more or less domain-specific? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/268066",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20211/"
]
} |
268,124 | In JavaScript: The Good Parts , Douglas Crockford wrote: JavaScript has two sets of equality operators: === and !== , and their evil twins == and != . The good ones work the way you would expect. If the two operands are of the same type and have the same value, then === produces true and !== produces false . The evil twins do the right thing when the operands are of the same type, but if they are of different types, they attempt to coerce the values. The rules by which they do that are complicated and unmemorable. These are some of the interesting cases:
'' == '0' // false
0 == '' // true
0 == '0' // true
false == 'false' // false
false == '0' // true
false == undefined // false
false == null // false
null == undefined // true
' \t\r\n ' == 0 // true
The lack of transitivity is alarming. My advice is to never use the evil twins. Instead, always use === and !== . All of the comparisons just shown produce false with the === operator. Given this unequivocal observation, is there ever a time when using == might actually be appropriate? | I'm going to make an argument for == . Douglas Crockford, whom you cited, is known for his many and often very useful opinions. While I'm with Crockford in this particular case, it's worth mentioning that his is not the only opinion. There are others, like language creator Brendan Eich, who don't see the big problem with == . The argument goes a little like the following: JavaScript is a behaviorally* typed language. Things are treated based on what they can do, not on their actual type. This is why you can call an array's .map method on a NodeList or on a jQuery selection set. It's also why you can do 3 - "5" and get something meaningful back - because "5" can act like a number. When you perform a == equality check you are comparing the contents of a variable rather than its type . Here are some cases where this is useful: Reading a number from the user - read the .value of an input element in the DOM? No problem! You don't have to start casting it or worrying about its type - you can == it right away against numbers and get something meaningful back. Need to check for the "existence" of a declared variable? You can == null it, since behaviorally null represents that there is nothing there, and undefined doesn't have anything there either. Need to check whether you got meaningful input from a user? Check whether the input is false with a == comparison; it will treat cases where the user has entered nothing or just whitespace for you, which is probably what you want. Let's look at Crockford's examples and explain them behaviorally:
'' == '0' // got input from user vs. didn't get input - so false
0 == '' // number representing empty and string representing empty - so true
0 == '0' // these both behave as the number 0 when added to numbers - so true
false == 'false' // false vs got input from user which is truthy - so false
false == '0' // both can substitute for 0 as numbers - so again true
false == undefined // having nothing is not the same as having a false value - so false
false == null // having empty is not the same as having a false value - so false
null == undefined // both don't represent a value - so true
' \t\r\n ' == 0 // didn't get meaningful input from user vs falsey number - true
Basically, == is designed to work based on how primitives behave in JavaScript, not based on what they are . While I don't personally agree with this point of view there is definitely merit in doing it - especially if you take this paradigm of treating types based on behavior language-wide. * some might prefer the name structural typing which is more common but there is a difference - not really interested in discussing the difference here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/268124",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1204/"
]
} |
269,259 | Suppose you had a pretty novel idea that involved a new algorithm. Let's say you were the first person to think of orbitz.com or kayak.com (travel sites), and you were able to and wanted to implement all of the novel algorithm client-side in JavaScript. Is it possible to obfuscate it 100% so that it can't be reverse engineered (as in this question: reverse engineer javascript obfuscators ), or are developers still forced to keep sensitive algorithms server-side if they want to completely protect them (or at least have a better chance of doing so)? | Client-side JavaScript cannot be secured - ever. If the browser can run it, that means that the JavaScript instructions are 100% available to anyone who wants them. You can compress and obscure the JavaScript. That is merely an obstacle that any talented developer can get around with some extra investment of time. So, a novel algorithm cannot be protected in browser-based JavaScript. If you want an algorithm protected, it must reside on the server and be run there - delivering only results to the client, not delivering code to the client. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/269259",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/74715/"
]
} |
269,294 | Is it possible to write code (or complete software, rather than a piece of code) that won't work properly when run on a CPU that has fewer than N cores? Without checking explicitly and failing on purpose:
IF (noOfCores < 4) THEN don't run properly on purpose
I'm looking at a game's ( Dragon Age: Inquisition ) minimum system requirements, and it states a minimum of a four-core CPU. Many players say it does NOT run on two-core CPUs, and not even on Intel Core i3s with two physical and two logical cores. And it's NOT a problem of computing power. From my understanding, threads are completely isolated from the physical CPU by the OS, so I thought this could not be done. Just to clear things up: I am NOT asking "Can I find out the number of CPU cores from code, and fail on purpose?" ... Such code would be ill-intentioned (it forces you to buy a more expensive CPU to run a program - without needing the computational power). I am asking whether your code could, say, have four threads and fail when two of those threads are run on the same physical core (without explicitly checking system information and purposely failing) . In short, can there be software that requires multiple cores, without needing the additional computing power that comes from multiple cores? It would just require N separate physical cores. | It may be possible to do this "by accident" with careless use of core affinity. Consider the following pseudocode (a concrete, Linux-flavoured sketch of it follows this entry):
start a thread
in that thread, find out which core it is running on
set its CPU affinity to that core
start doing something computationally intensive / loop forever
If you start four of those on a two-core CPU, then either something goes wrong with the core affinity setting or you end up with two threads hogging the available cores and two threads that never get scheduled. At no point has it explicitly asked how many cores there are in total. (If you have long-running threads, setting CPU affinity generally improves throughput.) The idea that game companies are "forcing" people to buy more expensive hardware for no good reason is not very plausible. It can only lose them customers. Edit: this post has now got 33 upvotes, which is quite a lot given that it's based on educated guesswork! It seems that people have got DA:I to run, badly, on dual-core systems: http://www.dsogaming.com/pc-performance-analyses/dragon-age-inquisition-pc-performance-analysis/ That analysis mentions that the situation greatly improves if hyperthreading is turned on. Given that HT does not add any more instruction issue units or cache - it merely allows one thread to run while another is in a cache stall - that strongly suggests it's linked purely to the number of threads. Another poster claims that changing the graphics drivers works: http://answers.ea.com/t5/Dragon-Age-Inquisition/Working-solution-for-Intel-dual-core-CPUs/td-p/3994141 ; given that graphics drivers tend to be a wretched hive of scum and villainy, this isn't surprising. One notorious set of drivers had a "correct&slow" versus "fast&incorrect" mode that was selected if called from QUAKE.EXE. It's entirely possible that the drivers behave differently for different numbers of apparent CPUs. Perhaps (back to speculation) a different synchronisation mechanism is used. Misuse of spinlocks ? "Misuse of locking and synchronisation primitives" is a very, very common source of bugs. (The bug I'm supposed to be looking at, at work, while writing this is "crash if changing printer settings at same time as print job finishes").
Edit 2: comments mention OS attempting to avoid thread starvation. Note that the game may have its own internal quasi-scheduler for assigning work to threads, and there will be a similar mechanism in the graphics card itself (which is effectively a multitasking system of its own). Chances of a bug in one of those or the interaction between them are quite high. www.ecsl.cs.sunysb.edu/tr/ashok.pdf (2008) is a graduate thesis on better scheduling for graphics cards which explicitly mentions that they normally use first-come-first-served scheduling, which is easy to implement in non-preemptive systems. Has the situation improved? Probably not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/269294",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88565/"
]
} |
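A hedged, Linux-only C++ sketch of the answer's affinity pseudocode; the worker/main structure, the thread count of four, and the printed messages are invented for illustration (compile with something like g++ -pthread):

#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>
#include <vector>

void worker(int id) {
    int core = sched_getcpu();                    // which core did we land on?
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);  // pin to that core
    std::printf("worker %d pinned to core %d\n", id, core);
    for (volatile unsigned long i = 0;; ++i) { }  // "computationally intensive" loop
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)                   // four workers, never asking how many cores exist
        threads.emplace_back(worker, i);
    for (auto& t : threads) t.join();             // never returns in this sketch
}

On a four-core machine each worker can end up pinned to its own core; on a two-core machine the four pinned, spinning workers compete for two cores, which is the kind of accidental degradation the answer describes.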
269,439 | I need to explain MVC to non-programmers - namely, to managers of other departments, in the context of a progress report. One of the things I do is refactor our codebase towards MVC separation. "What is MVC separation?" they might ask.
"Why is it needed?" they might ask. After reading a fairly technical answer like this: What is MVC, really? , I am not entirely satisfied, since I will be talking to non-programmers. They may nod their heads, but they probably will not understand what it is and why it is needed. In reality, I don't fully grasp what MVC is other than "separation of concerns, duties, functions, classes, blocks, tasks, things, in order to improve the flexibility of making changes to the software". Separating the database from the view, and the view from the business logic, using techniques like DI and OO tools, is something I consider to be MVC separation. So the next time you are explaining MVC to a non-programmer who has a background in sales and accounting, for example, what would you tell them? | You don't explain MVC. What you do is explain that this is a restructuring of the codebase - a restructuring that simplifies the codebase and therefore enables the developers to respond to bug reports and feature requests faster and better, with fewer bugs. They don't need to know the technical details, just why it was done, what was achieved by doing it, and how the business benefits. In other words - speak their language to them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/269439",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119333/"
]
} |
269,722 | I've got a project. In this project, I wished to refactor it to add a feature, and I refactored the project to add that feature. The problem is that when I was done, it turned out that I needed to make a minor interface change to accommodate it. So I made the change. And then the consuming class couldn't be implemented with its current interface in terms of the new one, so it needed a new interface as well. Now it's three months later, I've had to fix innumerable virtually unrelated problems, and I'm looking at solving issues that were roadmapped for a year from now, or simply listed as won't fix due to difficulty, before the thing will compile again. How can I avoid this kind of cascading refactoring in the future? Is it just a symptom of my previous classes depending too tightly on each other? Brief edit: In this case, the refactor was the feature, since the refactor increased the extensibility of a particular piece of code and decreased some coupling. This meant that external developers could do more, which was the feature I wanted to deliver. So the original refactor itself should not have been a functional change. Bigger edit that I promised five days ago: Before I began this refactor, I had a system where I had an interface, but in the implementation, I simply dynamic_cast through all the possible implementations that I shipped. This obviously meant that you couldn't just inherit from the interface, for one thing, and secondly, that it would be impossible for anybody without implementation access to implement this interface. So I decided that I wanted to fix this issue and open up the interface for public consumption, so that anybody could implement it and implementing the interface was the entire contract required - obviously an improvement. When I was finding and killing-with-fire all the places where I had done this, I found one place that proved to be a particular problem. It depended upon implementation details of all the various deriving classes and duplicated functionality that was already implemented, better, somewhere else. It could have been implemented in terms of the public interface instead and re-used the existing implementation of that functionality. I discovered that it required a particular piece of context to function correctly. Roughly speaking, the previous calling implementation looked kinda like
for(auto&& a : as) {
    f(a);
}
However, to get this context, I needed to change it into something more like
std::vector<Context> contexts;
for(auto&& a : as)
    contexts.push_back(g(a));
do_thing_now_we_have_contexts();
for(auto&& con : contexts)
    f(con);
This means that of all the operations that used to be a part of f , some of them need to be made a part of the new function g that operates without a context, and some of them need to be made a part of the now-deferred f . But not all the methods f calls need or want this context - some of them need a distinct context that they obtain through separate means. So for everything that f ends up calling (which is, roughly speaking, pretty much everything ), I had to determine what, if any, context they needed, where they should get it from, and how to split them from the old f into the new f and new g . And that's how I ended up where I am now. The only reason I kept going is that I needed this refactoring for other reasons anyway. | Is it just a symptom of my previous classes depending too tightly on each other? Sure. One change causing a myriad of other changes is pretty much the definition of coupling. How do I avoid cascading refactors? In the worst sort of codebase, a single change will continue to cascade, eventually causing you to change (almost) everything. Part of any refactor where there is widespread coupling is to isolate the part you're working on. You need to refactor not just where your new feature touches this code, but where everything else touches that code. Usually that means making some adapters to help the old code work with something that looks and acts like the old code, but uses the new implementation/interface (a tiny sketch of such an adapter follows this entry). After all, if all you do is change the interface/implementation but leave the coupling, you're not gaining anything. It's lipstick on a pig. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/269722",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8553/"
]
} |
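A hypothetical C++ sketch of the adapter idea from the answer above, reusing the question's f , g , A , and Context names; the trivial bodies are invented so the sketch compiles and are not from the original post:

#include <cstdio>
#include <vector>

struct A { int value; };
struct Context { int derived; };

// New shape of the code: build the context first, then operate on it.
Context g(const A& a) { return Context{a.value * 2}; }
void f(const Context& c) { std::printf("f(context %d)\n", c.derived); }

// Adapter: preserves the old "f takes an A" call shape so untouched call sites
// keep compiling. The refactor stops at this seam instead of cascading outward.
void f_legacy(const A& a) { f(g(a)); }

int main() {
    std::vector<A> as{{1}, {2}, {3}};
    for (auto&& a : as)
        f_legacy(a);   // old-style loop, unchanged
}

Call sites that genuinely need the batched do_thing_now_we_have_contexts() step can migrate to the new two-phase form one at a time, while everything else keeps using the adapter.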