269,825
To give a slightly contrived example, let's say I want to test that a function returns two numbers, and that the first one is smaller than the second one:

```python
def test_length():
    result = my_function()
    assert len(result) == 2

def test_order():
    a, b = my_function()
    assert a < b
```

Here, if test_length fails, then test_order will fail too. Is it a best practice to write test_length, or to skip it?

EDIT: note that in this situation the two tests are mostly independent of each other: each one can be run in isolation, and they could be run in reverse order; it does not matter. So none of these earlier questions is a duplicate of the one above:

- How should I test the functionality of a function that uses other functions in it?
- Do I need unit test if I already have integration test?
- How to structure tests where one test is another test's setup?
- How to manage success dependency between unit tests
There can be value, but this is a bit of a smell. Either your tests aren't well isolated (since test_order really tests two things) or you're being too dogmatic in your testing (making two tests testing the same logical thing). In the example, I would merge the two tests together. Yes, it means you have multiple asserts. Too bad. You're still testing a single thing - the result of the function. Sometimes in the real world that means doing two checks.
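For illustration, here is a sketch of the merged test, written in C# with xUnit rather than the question's Python purely to show the shape; MyFunction is a stand-in for the real function under test:

```csharp
using Xunit;

public class MyFunctionTests
{
    [Fact]
    public void ReturnsTwoNumbersInAscendingOrder()
    {
        int[] result = MyFunction();

        // One logical behaviour, several asserts: length first, then ordering.
        Assert.Equal(2, result.Length);
        Assert.True(result[0] < result[1], "first element should be smaller than the second");
    }

    // Stand-in for the real function under test.
    private static int[] MyFunction() => new[] { 1, 2 };
}
```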
{ "source": [ "https://softwareengineering.stackexchange.com/questions/269825", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37823/" ] }
269,882
Myself, I can't wait to write a function when I need to do something more than twice. But when it comes to things that only appear twice, it's a bit more tricky. For code that needs more than two lines, I'll write a function. But when facing things like:

```python
print "Hi, Tom"
print "Hi, Mary"
```

I'm hesitant to write:

```python
def greeting(name):
    print "Hi, " + name

greeting('Tom')
greeting('Mary')
```

The second one seems too much, doesn't it? But what if we have:

```python
for name in vip_list:
    print name

for name in guest_list:
    print name
```

And here is the alternative:

```python
def print_name(name_list):
    for name in name_list:
        print name

print_name(vip_list)
print_name(guest_list)
```

Things become tricky, no? It's hard to decide now. What's your opinion about this?
Although it's one factor in deciding to split off a function, the number of times something is repeated shouldn't be the only factor. It often makes sense to create a function for something that's only executed once. Generally, you want to split off a function when:

- It simplifies each individual abstraction layer.
- You have good, meaningful names for the split-off functions, so you don't usually need to jump around between abstraction layers to understand what's going on.

Your examples don't meet those criteria. You're going from a one-liner to a one-liner, and the names don't really buy you anything in terms of clarity. That being said, functions that simple are rare outside of tutorials and school assignments. Most programmers tend to err too far the other way.
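For illustration, a hedged C# sketch of the kind of single-use extraction meant here; the Order types and the validation rule are invented for the example. The split pays off because the name carries the meaning, not because the code is reused:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record OrderItem(string Name, int Quantity);
public record Order(IReadOnlyList<OrderItem> Items);

public static class OrderValidation
{
    // Extracted even though it is called only once: the name documents the rule,
    // so the caller can stay at a single level of abstraction.
    public static void ValidateOrder(Order order)
    {
        if (order.Items.Count == 0 || order.Items.Any(i => i.Quantity <= 0))
            throw new ArgumentException("Order must contain items with positive quantities.", nameof(order));
    }
}
```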
{ "source": [ "https://softwareengineering.stackexchange.com/questions/269882", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/132579/" ] }
269,953
I am using Visual Studio to create a GUI application in C#. The Toolbox serves as a nifty component palette that allows me to easily drag and drop buttons and other elements (for clarity I'll say button whenever I mean "control") onto my form, which makes static forms quite easy to do. However, I run into two problems: Creating the buttons in the first place is a lot of work. When I have a form that is not static (ie. buttons or other controls are created at run time according to what the user does) I cannot use the palette at all. Instead I have to create each button manually, by calling the constructor in whatever method I am using, and then manually initialize by specifying button height, width, position, label, event handler and so on. This is extremely tedious because I have to guess at all these cosmetic parameters without being able to see what the form will look like, and it also generates many lines of repetitive code for each button. Making the buttons do something is also a lot of work. Dealing with events in a full-featured application is a huge pain. The only way I know how to do this is to select a button, go to the events tab in its properties, click the OnClick event so that it generates the event in the Form 's code, then fill in the body of the event. Since I want to separate logic and presentation, all of my event handlers end up being single-line calls to the appropriate business logic function. But using this for many buttons (for instance, imagine the number of buttons present in an application like MS Word) pollutes the code of my Form with dozens of boilerplate event handler methods and it's difficult to maintain this. Because of these, any GUI program more complicated than Hello World is very impractical to actually make for me. To be clear, I have no problems whatsoever dealing with complexity in programs that I write which have minimal UI - I feel like I am able to use OOP with a fair degree of competence to neatly structure my business logic code. But when developing the GUI, I'm stuck. It seems so tedious that I feel like I'm reinventing the wheel, and there's a book somewhere out there explaining how to do GUI properly that I haven't read. Am I missing something? Or do all C# developers just accept the endless lists of repetitive event handlers and button creation code? As a (hopefully helpful) hint, I expect that a good answer will talk about: Using OOP techniques (such as the factory pattern) to simplify repeated button creation Combining many event handlers into a single method that checks Sender to figure out which button called it, and behaves accordingly XAML and using WPF instead of Windows Forms You don't have to mention any of these, of course. It's just my best guess as to what sort of answer I'm looking for.
There are several methodologies that have evolved over the years to deal with these issues you've mentioned, which are, I agree, the two main issues that UI frameworks have had to address in recent years. Coming from a WPF background, these are approached as follows: Declarative design, rather than imperative When you describe painstakingly writing code to instantiate controls and set their properties, you're describing the imperative model of UI design. Even with the WinForms designer, you're just using a wrapper over that model - open the Form1.Designer.cs file and you see all that code sitting there. With WPF and XAML - and similar models in other frameworks, of course, from HTML onwards - you're expected to describe your layout, and let the framework do the heavy lifting of implementing it. You use smarter controls such as Panels (Grid, or WrapPanel, etc) to describe the relationship between UI elements, but the expectation is that you don't manually position them. Flexible concepts such as repeatable controls - like ASP.NET's Repeater or WPF's ItemsControl - help you create dynamically scaling UI without writing repeating code, allowing dynamically growing data entities to be represented dynamically as controls. WPF's DataTemplates allow you to define - again, declaratively - little nuggets of UI goodness and match them to your data; for instance, a list of Customer data objects might be bound to an ItemsControl, and a different data template invoked depending on whether he's a regular employee (use a standard grid row template with name and address) or a Prime Customer, whereas a different template, with a picture and easy-to-access buttons are displayed. Again, without writing code in the specific window, just smarter controls that are aware of their data-context, and thus allow you to simply bind them to your data and let them perform relevant operations. Data Binding and Command Separation Touching on data binding, WPF's databinding (and, again, in other frameworks as well, like AngularJS) allows you to state your intent by linking a control (say, a textbox) with a data entity (say, the Customer's Name) and let the framework handle the plumbing. Logic is handled similarly. Instead of manually wiring up code-behind event-handlers to Controller-based business-logic, you use the Data Binding mechanism to link a controller's behavior (say, a button's Command property) to a Command object which represents the nugget of activity. It allows this Command to be shared between windows without rewriting the event handlers every time. Higher Level of Abstraction Both of these solutions to your two problems represent a move to a higher level of abstraction than the event-driven paradigm of Windows Forms that you rightfully find tiresome. The idea is that you don't want to define, in code, every single property that the control has, and every single behavior starting from the button click and onwards. You want your controls and underlying framework to do more work for you and allow you to think in more abstract concepts of Data Binding (which exists in WinForms, but is nowhere near as useful as in WPF) and the Command pattern to define links between UI and behavior that don't require getting down to the metal. The MVVM pattern is Microsoft's approach to this paradigm. I suggest reading up on that. This is a rough example of how this model would look and how it would save you time and lines of code. This won't compile, but it's pseudo-WPF. 
:) Somewhere in your application resources, you define Data Templates:

```xml
<DataTemplate x:DataType="Customer">
    <TextBox Text="{Binding Name}"/>
</DataTemplate>

<DataTemplate x:DataType="PrimeCustomer">
    <Image Source="GoldStar.png"/>
    <TextBox Text="{Binding Name}"/>
</DataTemplate>
```

Now, you link your main screen to a ViewModel, a class which exposes your data. Say, a collection of Customers (simply a List<Customer>) and a Command object (again, a simple public property of type ICommand). This link allows binding:

```csharp
public class CustomersViewModel
{
    public List<Customer> Customers { get; }
    public ICommand RefreshCustomerListCommand { get; }
}
```

and the UI:

```xml
<ListBox ItemsSource="{Binding Customers}"/>
<Button Command="{Binding RefreshCustomerListCommand}">Refresh</Button>
```

And that's it. The ListBox's syntax will grab the list of Customers off of the ViewModel and attempt to render them to the UI. Because of the two DataTemplates we defined earlier, it will import the relevant template (based on the DataType, assuming PrimeCustomer inherits from Customer) and put it as the contents of the ListBox. No looping, no dynamic generation of controls via code. The Button, similarly, has preexisting syntax to link its behavior to an ICommand implementation, which presumably knows to update the Customers property - prompting the data-binding framework to update the UI again, automatically. I've taken some shortcuts here, of course, but this is the gist of it.
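For completeness, a rough sketch of the kind of ICommand implementation that could sit behind RefreshCustomerListCommand; this is an assumption for illustration - MVVM libraries such as MVVM Light or Prism ship their own RelayCommand/DelegateCommand types:

```csharp
using System;
using System.Windows.Input;

// Minimal relay command: wraps delegates so the ViewModel can expose
// behavior without any code-behind event handlers in the window.
public class RelayCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute = null)
    {
        _execute = execute ?? throw new ArgumentNullException(nameof(execute));
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter) => _canExecute == null || _canExecute();

    public void Execute(object parameter) => _execute();

    public void RaiseCanExecuteChanged() => CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}
```

The ViewModel would then wire it up with something like `RefreshCustomerListCommand = new RelayCommand(LoadCustomers);`.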
{ "source": [ "https://softwareengineering.stackexchange.com/questions/269953", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/48337/" ] }
270,091
Reading through the Apache Software License 2.0 appendix I am left unclear what exactly I have to do now: APPENDIX: How to apply the Apache License to your work To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. To explain a bit more: I am currently contributing to an open-source project on github, where the decision to license under ASL2.0 was made. In the latest commit, the license text was added as LICENSE to the root directory. The interesting part is now: What else has to be done to fully license the code under Apache Software License. In specific: What is considered "work" and is it mandatory to include the boilerplate notice to each and every source-file in the project? I figure it should be possible to add the notice to the already existing README , as I have seen it in other open source projects. Something along the lines of: License: Unless explicitly stated otherwise all files in this repository are licensed under the Apache Software License 2.0 [insert boilerplate notice here] I think that should be okay, because the Apache Policy on Source Headers (even though intended for Apache Software Foundation owned projects only) states: Each original source document (code and documentation, but excluding the LICENSE and NOTICE files) SHOULD include a short license header at the top. If the distribution contains documents not covered by CLA, CCLA or Software Grant (such as third-party libraries) then see the policy guide. Each source file should include the following license header -- note that there should be no copyright notice in the header: Specifically the "should" here makes me think, that a license header in every file is not mandatory for each and every source-file of a project licensed under ASL.
No, it is not necessary to include the license in every file. This is a recommended practice, because it ensures that if somebody is viewing one of the files from your project in isolation from the rest they will be able to identify the terms of use for it, but in the end, as long as you do something that makes it clear what the intended license terms are, that is enough, legally speaking. (Note that this is not legal advice; if this is important you should seek advice of a lawyer in your jurisdiction, etc.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270091", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99078/" ] }
270,152
I think the most common way of adding something to a collection is to use some kind of Add method that a collection provides: class Item {} var items = new List<Item>(); items.Add(new Item()); and there is actually nothing unusual about that. I wonder however why don't we do it this way: var item = new Item(); item.AddTo(items); it seems to be somehow more natural then the first method. This would have the andvantange that when the Item class has a property like Parent : class Item { public object Parent { get; private set; } } you can make the setter private. In this case of course you cannot use an extension method. But perhaps I'm wrong and I've just never seen this pattern before because it's so uncommon? Do you know whether there is any such pattern? In C# an extension method would be useful for that public static T AddTo(this T item, IList<T> list) { list.Add(item); return item; } How about other languages? I guess in most of them the Item class had to provide a let's call it ICollectionItem interface. Update-1 I've been thinking about it a little bit more and this pattern would be really usefull for example if you don't want an item to be added to multiple collections. test ICollectable interface: interface ICollectable<T> { // Gets a value indicating whether the item can be in multiple collections. bool CanBeInMultipleCollections { get; } // Gets a list of item's owners. List<ICollection<T>> Owners { get; } // Adds the item to a collection. ICollectable<T> AddTo(ICollection<T> collection); // Removes the item from a collection. ICollectable<T> RemoveFrom(ICollection<T> collection); // Checks if the item is in a collection. bool IsIn(ICollection<T> collection); } and a sample implementation: class NodeList : List<NodeList>, ICollectable<NodeList> { #region ICollectable implementation. List<ICollection<NodeList>> owners = new List<ICollection<NodeList>>(); public bool CanBeInMultipleCollections { get { return false; } } public ICollectable<NodeList> AddTo(ICollection<NodeList> collection) { if (IsIn(collection)) { throw new InvalidOperationException("Item already added."); } if (!CanBeInMultipleCollections) { bool isInAnotherCollection = owners.Count > 0; if (isInAnotherCollection) { throw new InvalidOperationException("Item is already in another collection."); } } collection.Add(this); owners.Add(collection); return this; } public ICollectable<NodeList> RemoveFrom(ICollection<NodeList> collection) { owners.Remove(collection); collection.Remove(this); return this; } public List<ICollection<NodeList>> Owners { get { return owners; } } public bool IsIn(ICollection<NodeList> collection) { return collection.Contains(this); } #endregion } usage: var rootNodeList1 = new NodeList(); var rootNodeList2 = new NodeList(); var subNodeList4 = new NodeList().AddTo(rootNodeList1); // Let's move it to the other root node: subNodeList4.RemoveFrom(rootNodeList1).AddTo(rootNodeList2); // Let's try to add it to the first root node again... // and it will throw an exception because it can be in only one collection at the same time. subNodeList4.AddTo(rootNodeList1);
No, item.AddTo(items) is not more natural. I think you are mixing this up with the following:

    t3chb0t.Add(item).To(items)

You are right that items.Add(item) is not very close to natural English. But you also don't hear item.AddTo(items) in natural English, do you? Normally there is someone who is supposed to add the item to the list, be it when working at a supermarket or while cooking and adding ingredients. In the case of programming languages, we made it so that a list does both: storing its items and being responsible for adding them to itself.

The problem with your approach is that an item has to be aware that a list exists. But an item could exist even if there were no lists at all, right? This shows that it should not be aware of lists. Lists should not occur in its code in any way at all. Lists, however, don't exist without items (at least they would be useless). Therefore it's fine if they know about their items - at least in a generic form.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270152", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/160257/" ] }
270,218
In Learn SQL the Hard Way (exercise six), the author presents the following query:

```sql
SELECT pet.id, pet.name, pet.age, pet.dead
FROM pet, person_pet, person
WHERE pet.id = person_pet.pet_id AND
      person_pet.person_id = person.id AND
      person.first_name = "Zed";
```

and then goes on to say that:

> There are actually other ways to get these kinds of queries to work called "joins". I'm avoiding those concepts for now because they are insanely confusing. Just stick to this way of joining tables for now and ignore people who try to tell [you] that this is somehow slower or "low class".

Is that true? Why or why not?
With the author's approach, teaching OUTER JOINs is going to be much more difficult. The ON clause in INNER JOIN was never mind-blowing to me like a lot of other stuff. Maybe that is because I never learned the old way. I'd like to think there is a reason we got rid of it, and it wasn't to be smug and call this method low class.

It's true only in the very narrow scenario the author has created:

- Such an entry level of SQL that using ON is complex
- Only considering JOIN/INNER JOIN and not any OUTER JOINs
- The isolated coder who doesn't have to read others' code, nor have anyone with experience of the ON usage reading/using their code
- Not requiring complex querying with lots of tables, if's, but's and or's

As part of a teaching progression, I think it is easier to break it down and have a natural progression:

```sql
select * from table
select this, something, that from table
select this from table where that = 'this'
select this from table join anothertable on this.id = that.thisid
```

The concepts of joining and filtering tables are not really the same. Learning the correct syntax now will have more carry-over when you learn OUTER JOINs, unless the author intends on teaching outdated/deprecated things like *= or =*.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270218", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1204/" ] }
270,321
I'm trying to choose a Git workflow that is most appropriate for our product. Here are the parameters: We do a few major releases a year, let's say 10 at the most We have multiple versions of our product active at the same time (some people are on v10.1, some on v11.2, etc.) We need to be able to work on multiple releases at the same time (so we could be working on v12.1, but as we get to the end of the release we start working on v12.2 at the same time) We need to be able to hotfix releases when critical bugs are found So far, here's the way I think it could work: Single remote repo is used Create branch 12.1 from master Create feature branches based on 12.1, commit them and merge back into 12.1, push Once we need to start working on future release, create a new branch 12.2 based on 12.1 From then on, when working on a feature for 12.1, create branch from 12.1, commit changes, and merge into both 12.1 and 12.2, push If working on a feature for 12.2, create branch from 12.2, commit changes, and merge only into 12.2, push When release 12.1 is completed, merge it into master and tag master branch with 12.1 If a hotfix is needed, create a hotfix branch from oldest release branch that needs it, commit changes, and merge back into all release branches for that release and future releases that could be affected; if the latest stable release branch was affected, merge it into master. I have a few concerns: I'm not sure that merging hotfixes from old branches into new branches will be a smooth process, especially if there have been a lot of overlapping changes; would it be smarter to just hotfix manually in each branch in cases where it looks like there will be conflicts The workflow models I've seen seem to not keep release branches alive much, once done the release gets merged into master, tagged, and removed. My problem with that is that I don't have a good idea how to manage the state of the release if all I have are tags in master, seems easier to hotfix in a branch and then I have a release I can always go back to that has the latest hotfix (I can even tag the hotfixes in the release). Not sure there's a way I could go back within master and somehow have a copy of the release with hotfixes applied and update that tag. Comments are appreciated on things I may have overlooked or better ways of accomplishing things given the requirements I've specified.
You seem to be branching off on every major release (12.0.0), then having possible minor updates to each (12.1.0), and hot fixes (12.2.1). Correct?

There's no specific reason why you cannot keep release branches alive in GitFlow after a release is out, other than the fact that coordinating changes between multiple diverging branches for a long time is hard with any model. I suppose GitFlow was also modeled more for products that maintain a single live release while developing the next. I would stick with GitFlow and make a few amendments:

- Skip the master branch. I've had no practical use of it so far, and it would lose its linearity the way you work. Keep development on the next major release on develop.
- If you decide to keep master, don't put release tags on master; put them on the last commit on the release branch that produced the binary you're shipping.
- Don't throw away the release branches after you merge them back to develop. Instead keep them around for the next minor release and possible hot fixes. If you ever stop supporting a release, I suppose it's fine to delete them.
- You could name release branches after their main component, release/12, and then create sub-release branches, release/12.1, release/12.2, off of it. I've not had to worry too much about this level of parallelism, but that's probably what I'd try. You can think of each major release branch as its own sub-GitFlow environment in this case.
- If you must be working in parallel on features for several future major releases at the same time, perhaps you have to keep the next one (13) on develop and anything for later versions (14, 15) on additional "develop-N" branches. That does seem very hard to maintain in general, but would be possible.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270321", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81284/" ] }
270,580
When programming, sometimes things break. You made a mistake and your program tries to read from a wrong address. One thing that stands out to me is that those exceptions often look like:

    Access violation at address 012D37BC in module 'myprog.exe'. Read of address 0000000C.

Now, I see a lot of error logs, and what stands out to me is the 0000000C. Is this a "special" address? I see other access violations with bad reads whose addresses just seem random, but this one keeps coming back in totally different situations.
00000000 is a special address (the null pointer). 0000000C is just what you get when you add an offset of 12 to the null pointer, most likely because someone tried to get the z member of a structure like the one below through a pointer that was actually null.

```c
struct Foo {
    int w, x, y; // or anything else that takes 12 bytes including padding
                 // such as: uint64_t w; char x;
                 // or: void *w; char padding[8];
                 // all assuming an ordinary 32 bit x86 system
    int z;
};
```
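As a side illustration (not from the original answer), the same offset arithmetic can be shown in C# with an explicitly laid-out struct; Marshal.OffsetOf reports where each field sits, and the fourth 4-byte field lands at 12 (0x0C), which is exactly what a null base pointer plus that offset produces:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Foo
{
    public int W, X, Y;
    public int Z;
}

class Program
{
    static void Main()
    {
        // Prints 12: the byte offset of Z from the start of the struct.
        Console.WriteLine(Marshal.OffsetOf(typeof(Foo), nameof(Foo.Z)));
    }
}
```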
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270580", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51367/" ] }
270,595
I have got 2 days for making a very serious decision about the tools and platforms that my company is going to use in order to port its WPF application to Linux / Android / iOS whatnot. Obviously I can point to my seniors that 2 days is hardly enough for reading about all possible options, and what about trying, making prototypes etc. I can say it, it won't help me a bit, I have got 2 days, and after 2 days the decision would be made. Period. From one side I am frustrated, from other side I think there is a grain of truth in this approach, otherwise I can easily find myself buried under dozens of downloaded SDKs, frameworks, APIs, blog articles etc etc doing bench-works, running samples and forgetting in the process what it all was for. Still I am afraid that a wrong decision will cost the company dearly. So what do you think is an "ideal" process for making such decisions?
If all you have is 2 days and no time to prototype or even read up on all the alternatives, then there are really only 2 options:

1. Ask someone who knows and follow their advice. This may not necessarily mean asking an individual; you could spend the 2 days searching through blogs and articles to glean enough information to make a slightly-better-than-uninformed decision.
2. Do a little research into all the mainstream options and then pick one. Sometimes leadership means not being afraid to make the wrong decision; it's often more important to make a firm decision than to vacillate.

You can cover yourself by coming up with architectures that are more decoupled and therefore easier to change - e.g. a client/server model will allow you to replace your UI technology with another with minimal disruption.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270595", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51345/" ] }
270,799
I was reading a Hacker News thread where one user posts a link from 2011 explaining that IIS is much faster than most other (*nix) web servers. Another user replies, explaining that IIS gets that advantage by having a kernel module called HTTP.sys . To my knowledge, most other popular web servers in 2015 do not do this. I would never want to write a kernel mode web server, because I could never trust myself to make it free of security exploits (which would be less serious running in a lower protection ring). From the perspective of the software engineer (as opposed to a customer for web servers), is running in kernel mode a smart performance decision? Can security concerns be mitigated in application development to the point of making a kernel mode server a net profit for the consumer?
Http.sys is not so much a web server as a proxy-forwarder. It's designed to allow many web servers to co-exist on a Windows box, so you can have IIS running a web site, but also several WCF services running with HTTP/REST or SOAP interfaces, all on standard port 80. (This is why you can't run Apache on Windows without a bit of jiggling: Apache hasn't been modified to work with this registration system. It's a shame it wasn't made more transparent to applications and requires some quite complex modifications to hook into.) The way it works is that you register a URL and the corresponding application with it, and when an HTTP request is made on port 80, http.sys accepts it but then passes the request on to whichever application is registered to handle that URL target.

I doubt a kernel mode web server makes any sense - even if socket performance can be improved this way, in order to perform any useful work the application logic is still going to be executed in user space, so there's always a transition - you've just shifted it along the call stack a little.
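As an aside (not part of the original answer), .NET's HttpListener is built on top of http.sys on Windows, so a few lines of C# show the URL-prefix registration model described above; note that reserving a prefix on port 80 normally needs admin rights or a pre-configured URL ACL:

```csharp
using System;
using System.Net;
using System.Text;

class Program
{
    static void Main()
    {
        // Register this process for a URL prefix; http.sys routes matching
        // requests on port 80 to us while IIS and others keep their own prefixes.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:80/myapp/");
        listener.Start();

        HttpListenerContext context = listener.GetContext(); // blocks for one request
        byte[] body = Encoding.UTF8.GetBytes("Hello from a user-mode handler");
        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.Close();
        listener.Stop();
    }
}
```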
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270799", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/165347/" ] }
270,821
Given an HTTP server (e.g. Apache, IIS) and a web application (user code running in the server using PHP, ASP.NET and the likes), which of those can decide which HTTP status code to return for any request? Or rather, is a "web application" to be interpreted as an integral part of "server" as used in the HTTP RFCs? The latter (RFC 7230) is pretty short on that: An HTTP "server" is a program that accepts connections in order to service HTTP requests by sending HTTP responses. Since the actual response is generated by a web application (excluding static resources), I'd say the web application is actual part of the server. However, apart from the eternal "which HTTP status code to use in [arbitrary situation]?" discussions, there also is the discussion "Should web applications even use HTTP status codes?" , which I'd like a normative answer to. See for example the Google Maps REST API . They return 200 for every repsonse, which can be interpreted as "The request ended up at the HTTP server, so that part went OK (200)" . Any application error that happens that is returned in the message body as a JSON status, up till the point of NOT_FOUND . Is this correct? Shouldn't a request for GET /Clients/123 return a 404 when that client doesn't exist? Then what about GET /clients.php?id=123 , assuming clients.php exists? Or does 404 really and only mean "I don't know what you're trying to do, but I'm not going to serve you a resource as there is no resource at (a part of) that URI" , "resource" meaning "a (routed) file or directory", which should only be returned by the server when someone forgot to deploy the ClientService application? Do status code only work for the HTTP part of things, or is a web application part of the server, allowing it to utilize the appropriate status codes where they fit, using HTTP status codes as API response codes?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270821", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35973/" ] }
270,878
I am wondering if there are any pros and cons to this style:

```csharp
private void LoadMaterial(string name)
{
    if (_Materials.ContainsKey(name))
    {
        throw new ArgumentException("The material named " + name + " has already been loaded.");
    }

    _Materials.Add(
        name,
        Resources.Load(string.Format("Materials/{0}", name)) as Material
    );
}
```

That method should, for each name, be run only once. _Materials.Add() will throw an exception if it is called multiple times for the same name. Is my guard, as a result, completely redundant, or are there some less obvious benefits? That's C#, Unity, if anyone is interested.
The benefit is that your "custom" exception has an error message that's meaningful to anyone calling this function without knowing how it's implemented (which in the future might be you!). Granted, in this case they'd probably be able to guess what the "standard" exception meant, but you're still making it clear that they violated your contract, rather than stumbled across some strange bug in your code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270878", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/125127/" ] }
270,898
Let's say I have three resources that are related like so: Grandparent (collection) -> Parent (collection) -> and Child (collection) The above depicts the relationship among these resources like so: Each grandparent can map to one or several parents. Each parent can map to one or several children. I want the ability to support searching against the child resource but with the filter criteria: If my clients pass me an id reference to a grandparent, I want to only search against children who are direct descendants of that grandparent. If my clients pass me an id reference to a parent, I want to only search against children who are direct descendants of my parent. I have thought of something like so: GET /myservice/api/v1/grandparents/{grandparentID}/parents/children?search={text} and GET /myservice/api/v1/parents/{parentID}/children?search={text} for the above requirements, respectively. But I could also do something like this: GET /myservice/api/v1/children?search={text}&grandparentID={id}&parentID=${id} In this design, I could allow my client to pass me one or the other in the query string: either grandparentID or parentID, but not both. My questions are: 1) Which API design is more RESTful, and why? Semantically, they mean and behave the same way. The last resource in the URI is "children", effectively implying that the client is operating on the children resource. 2) What are the pros and cons to each in terms of understandability from a client's perspective, and maintainability from the designer's perspective. 3) What are query strings really used for, besides "filtering" on your resource? If you go with the first approach, the filter parameter is embedded in the URI itself as a path parameter instead of a query string parameter. Thanks!
First As Per RFC 3986 §3.4 (Uniform Resource Identifiers § (Syntax Components)|Query 3.4 Query The query component contains non-hierarchical data that, along with data in the path component (Section 3.3), serves to identify a resource within the scope of the URI's scheme and naming authority (if any). Query components are for retrieval of non-hierarchical data; there are few things more hierarchical in nature than a family tree! Ergo - regardless of whether you think it is "REST-y" or not- in order to conform to the formats, protocols, and frameworks of and for developing systems on the internet, you must not use the query string to identify this information. REST has nothing to do with this definition. Before addressing your specific questions, your query parameter of "search" is poorly named. Better would be to treat your query segment as a dictionary of key-value pairs. Your query string could be more appropriately defined as ?first_name={firstName}&last_name={lastName}&birth_date={birthDate} etc. To answer your specific questions Which API design is more RESTful, and why? Semantically, they mean and behave the same way. The last resource in the URI is "children", effectively implying that the client is operating on the children resource. I don't think this is as clear cut as you seem to believe. None of these resource interfaces are RESTful. The major precondition for the RESTful architectural style is that Application State transitions must be communicated from the server as hypermedia. People have labored over the structure of URIs to make them somehow "RESTful URIs" but the formal literature regarding REST actually has very little to say about this. My personal opinion is that much of the meta-misinformation about REST was published with the intent of breaking old, bad habits. (Building a truly "RESTful" system is actually quite a bit of work. The industry glommed on to "REST" and back-filled some orthogonal concerns with nonsensical qualifications and restrictions. ) What the REST literature does say is that if you are going to use HTTP as your application protocol, you must adhere to the formal requirements of the protocol's specifications and you cannot "make http up as you go and still declare that you are using http"; if you are going to use URIs for identifying your resources, you must adhere to the formal requirements of the specifications regarding URI/URLs. Your question is addressed directly by RFC3986 §3.4, which I have linked above. The bottom line on this matter is that even though a conforming URI is insufficient to consider an API "RESTful", if you want your system to actually be "RESTful" and you are using HTTP and URIs, then you cannot identify hierarchical data through the query string because: 3.4 Query The query component contains non-hierarchical data ...it's as simple as that. What are the pros and cons to each in terms of understandability from a client's perspective, and maintainability from the designer's perspective. The "pros" of the first two is that they are on the right path . The "cons" of the third one is that it appears to be flat out wrong. As far as your understandability and maintainability concerns, those are definitely subjective and depend on the comprehension level of the client developer and the design chops of the designer. The URI specification is the definitive answer as to how URIs are supposed to be formatted. Hierarchical data is supposed to be represented on the path and with path parameters. 
Non-hierarchical data is supposed to be represented in the query. The fragment is more complicated, because its semantics depend specifically upon the media type of the representation being requested. So to address the "understandability" component of your question, I will attempt to translate exactly what your first two URIs are actually saying. Then, I will attempt to represent what you say you are trying to accomplish with valid URIs. Translation of your verbatim URIs to their semantic meaning /myservice/api/v1/grandparents/{grandparentID}/parents/children?search={text} This says for the parents of grandparents, find their child having search={text} What you said with your URI is only coherent if searching for a grandparent's siblings. With your "grandparents, parents, children" you found a "grandparent" went up a generation to their parents and then came back down to the "grandparent" generation by looking at the parents' children. /myservice/api/v1/parents/{parentID}/children?search={text} This says that for the parent identified by {parentID}, find their child having ?search={text} This is closer to correct to what you are wanting, and represents a parent->child relationship that can likely be used to model your entire API. To model it this way, the burden is placed upon the client to recognize that if they have a "grandparentId", that there is a layer of indirection between the ID they have and the portion of the family graph they are wishing to see. To find a "child" by "grandparentId", you can call your /parents/{parentID}/children service and then foreach child that is returned, search their children for your person identifier. Implementation of your requirements as URIs If you want to model a more extensible resource identifier that can walk the tree, I can think of several ways you can accomplish that. 1) The first one, I've already alluded to. Represent the graph of "People" as a composite structure. Each person has a reference to the generation above it through its Parents path and to a generation below it through its Children path. /Persons/Joe/Parents/Mother/Parents would be a way to grab Joe's maternal grandparents. /Persons/Joe/Parents/Parents would be a way to grab all of Joe's grandparents. /Persons/Joe/Parents/Parents?id={Joe.GrandparentID} would grab Joe's grandparent having the identifier you have in hand. and these would all make sense (note that there could be a performance penalty here depending on task by forcing a dfs on the server due to a lack of branch identification in the "Parents/Parents/Parents" pattern.) You also benefit from having the ability to support any arbitrary number of generations. If, for some reason, you desire to look up 8 generations, you could represent this as /Persons/Joe/Parents/Parents/Parents/Parents/Parents/Parents/Parents/Parents?id={Joe.NotableAncestor} but this leads into the second dominant option for representing this data: through a path parameter. 2) Use path parameters to "query the hierarchy" You could develop the following structure to help ease the burden on consumers and still have an API that makes sense. To look back 147 generations, representing this resource identifier with path parameters allows you to do /Persons/Joe/Parents;generations=147?id={Joe.NotableAncestor} To locate Joe from his Great Grandparent, you could look down the graph a known number of generations for Joe's Id. 
/Persons/JoesGreatGrandparent/Children;generations=3?id={Joe.Id} The major thing of note with these approaches is that without further information in the identifier and request, you should expect that the first URI is retrieving a Person 147 generations up from Joe with the identifier of Joe.NotableAncestor. You should expect the second one to retrieve Joe. Assume that what you actually want is for your calling client to be able to retrieve the entire set of nodes and their relationships between the root Person and the final context of your URI. You could do that with the same URI (with some additional decoration) and setting an Accept of text/vnd.graphviz on your request, which is the IANA registered media type for the .dot graph representation. With that, change the URI to /Persons/Joe/Parents;generations=147?id={Joe.NotableAncestor.Id}#directed with an HTTP Request Header Accept: text/vnd.graphviz and you can have clients fairly clearly communicate that they want the directed graph of the generational hierarchy between Joe and 147 generations prior where that 147th ancestral generation contains a person identified as Joe's "Notable Ancestor." I'm unsure if text/vnd.graphviz has any pre-defined semantics for its fragment;I could find none in a search for instruction. If that media type actually does have pre-defined fragment information, then its semantics should be followed to create a conforming URI. But, if those semantics are not pre-defined, the URI specification states that the semantics of the fragment identifier are unconstrained and instead defined by the server, making this usage valid. What are query strings really used for, besides "filtering" on your resource? If you go with the first approach, the filter parameter is embedded in the URI itself as a path parameter instead of a query string parameter. I believe I have already thoroughly beaten this to death, but query strings are not for "filtering" resources. They are for identifying your resource from non-hierarchical data. If you have drilled down your hierarchy with your path by going /person/{id}/children/ and you are wishing to identify a specific child or a specific set of children, you would use some attribute that applies to the set you are identifying and include it inside the query.
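As a small, hedged illustration of "hierarchy in the path, non-hierarchical criteria in the query" (sketched with ASP.NET Core attribute routing; the entity and route names are invented for the example):

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("persons")]
public class PersonsController : ControllerBase
{
    // Hierarchical relationship lives in the path...
    [HttpGet("{id}/parents/parents")]
    public ActionResult<IEnumerable<Person>> GetGrandparents(
        string id,
        // ...while non-hierarchical criteria live in the query string.
        [FromQuery] string firstName,
        [FromQuery] string lastName)
    {
        // Lookup logic omitted; this only sketches the URI shape:
        // GET /persons/joe/parents/parents?firstName=Bob&lastName=Doe
        return Ok(new List<Person>());
    }
}

public record Person(string Id, string FirstName, string LastName);
```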
{ "source": [ "https://softwareengineering.stackexchange.com/questions/270898", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/93933/" ] }
271,139
Some languages (such as C++ and early versions of PHP) don't support the finally part of a try ... catch ... finally construct. Is finally ever necessary? Because the code in it always runs, why wouldn't/shouldn't I just place that code after a try ... catch block without a finally clause? Why use one? (I'm looking for a reason/motivation for using/not using finally , not a reason to do away with 'catch' or why it's legal to do so.)
As others have mentioned, there's no guarantee that code after a try statement will execute unless you catch every possible exception. That said, this:

```java
try {
    mightThrowSpecificException();
} catch (SpecificException e) {
    handleError();
} finally {
    cleanUp();
}
```

can be rewritten[1] as:

```java
try {
    mightThrowSpecificException();
} catch (SpecificException e) {
    try {
        handleError();
    } catch (Throwable e2) {
        cleanUp();
        throw e2;
    }
} catch (Throwable e) {
    cleanUp();
    throw e;
}
cleanUp();
```

But the latter requires you to catch all unhandled exceptions, duplicate the cleanup code, and remember to re-throw. So finally isn't necessary, but it's useful.

C++ doesn't have finally because Bjarne Stroustrup believes RAII is better, or at least suffices for most cases:

> Why doesn't C++ provide a "finally" construct? Because C++ supports an alternative that is almost always better: The "resource acquisition is initialization" technique (TC++PL3 section 14.4). The basic idea is to represent a resource by a local object, so that the local object's destructor will release the resource. That way, the programmer cannot forget to release the resource.

[1] The specific code to catch all exceptions and rethrow without losing stack trace information varies by language. I have used Java, where the stack trace is captured when the exception is created. In C# you'd just use throw;.
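Connecting the two halves of the answer: in C#, the using statement over an IDisposable gives the same cleanup guarantee as finally, and is the closest analogue to the RAII approach quoted above. A minimal sketch (the file reading is just an example resource):

```csharp
using System.IO;

class Example
{
    static string ReadFirstLine(string path)
    {
        // 'using' guarantees Dispose() runs on every exit path, the same
        // guarantee a finally block (or a C++ destructor under RAII) gives.
        using (var reader = new StreamReader(path))
        {
            return reader.ReadLine();
        }
    }
}
```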
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271139", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/126538/" ] }
271,395
We don't do this at our firm, but one of my friends says that his project manager asked every developer to add intentional bugs just before the product goes to QA. This is how it works: Just before the product goes to QA, the development team adds some intentional bugs at random places in the code. They properly back up the original, working code to make sure that those bugs aren't shipped with the end product. Testers are also informed about this. So they will test hard, because they know there are bugs present and that not finding them might be considered as a sign of incompetence. If a bug (intentional or otherwise) has been found, they will be reported for the development team to fix. The development team then adds another intentional bug in a related section of the code just before the product goes to the second-level QA. The project manager says a tester should think like a developer and he/she should expect new bugs in sections where changes were made. Well, this is how it goes. They say that this approach has following advantages. Testers will be always on their toes and they will test like crazy. That helps them to also find hidden (unintentional) bugs so developers can fix them. Testers feed on bugs. Not finding any bugs will affect their morale. So giving them an easy one to find will help their morale. If you ignore the scenario where one of these intentional bugs gets shipped with the final product, what are the other drawbacks we should consider before even thinking of adopting this approach? Some clarifications: They properly backup the original code in source control. When a tester finds the intentional bug, the development team just ignores it. If tester finds out an unintentional (original) bug, the development team first checks whether it is caused by any of the intentional bugs. That is, the development team first tries to reproduce that on the original working code and tries to fix it if they can. Just ignore the relationship issues between QA and development team. I specifically asked this question on Programmers , not on The Workplace . Consider that there is good rapport between QA and the development team, and they party together after working hours. The project manager is a nice, old gentleman who is always ready to support both teams (Godsend).
This sounds absolutely ridiculous. It is expending a great deal of effort for very questionable benefit, and the practice seems based on some faulty premises:

- That QA won't work hard unless they know they are being tested every day (which cannot be good for morale)
- That there are not enough unintentionally introduced bugs in the software for QA to find
- That QA's job is to find bugs - it isn't; it is to ensure the software is production quality
- That this kind of battle of wits between development and QA is in some way healthy for the company - it isn't; all employees should be working together against the company's competitors instead of each other.

It's a terrible idea, and the project manager in question is a jerk/idiot who understands nothing about people and motivation. And it's bad for business.

To expand on my description of "QA's job": QA definitely should be finding bugs - both in the code and in their test suites - as an artifact of doing their jobs, but the role shouldn't be defined as "you have to find bugs." It should be "you have to keep the test suites up to date to account for new features and ensure high coverage of testing." If this does not result in finding bugs, then the testing procedures are not sufficiently sophisticated for the product.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271395", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24032/" ] }
271,483
I have been a programmer for almost 1 year. As an adult with ADHD, I naturally don't have the same strength of attention for ordinary things as my colleagues do, and I find the catastrophes I cause are usually due to trivial negligence. Today, for example, I found that the cron job on the server had collapsed in the morning. After half an hour of debugging, I found I had written in the crontab

    * 4 * * * sh daily_task.sh

instead of

    0 4 * * * sh daily_task.sh

which runs the huge shell script every minute of that hour instead of the intended 1 time. Is there some kind of cultivatable behaviour, or some tool, or anything else that can help me at least reduce this kind of mistake? How do you avoid such mistakes?
> Is there some kind of cultivatable behaviour [...] that can help me at least reduce such kind of mistake?

Absolutely, it is called the four-eyes principle. If you had shown your crontab entry to a second person (a person who knows cron, of course), chances are high the mistake would have been avoided. In programming, when it comes to this, people mostly think of code reviews, but that is actually the same thing.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271483", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/132579/" ] }
271,594
I have found the following loop notation in a big project I am working on (pseudocode):

```javascript
var someOtherArray = [];
for (var i = 0, n = array.length; i < n; i++) {
    someOtherArray[i] = modifyObjectFromArray(array[i]);
}
```

What caught my attention is this extra "n" variable. I have never seen a for loop written this way before. Obviously, in this scenario there is no reason why the code couldn't be written in the following way (which I'm very much used to):

```javascript
var someOtherArray = [];
for (var i = 0; i < array.length; i++) {
    someOtherArray[i] = modifyObjectFromArray(array[i]);
}
```

But it got me thinking. Is there a scenario where writing such a for loop would make sense? The idea comes to mind that the length of "array" may change during loop execution and we don't want to loop further than the original size, but I can't imagine such a scenario. Shrinking the array inside the loop does not make much sense either, because we are likely to get an OutOfBoundsException. Is there a known design pattern where this notation is useful?

Edit: As noted by @Jerry101, the reason is performance. Here is a link to the performance test I have created: http://jsperf.com/ninforloop. In my opinion the difference is not big enough unless you are iterating over a very huge array. The code I copied this from only had up to 20 elements, so I think readability in this case outweighs the performance consideration.
The variable n ensures the generated code doesn't fetch the array length for every iteration. It's an optimization that might make a difference in run time depending on the language used, whether the array is actually a collection object or a JavaScript "array", and other optimization details.
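For illustration, a hedged C# analogue of the same idiom (the names are made up; whether hoisting the bound actually helps depends on the runtime and on what the bound expression is, so measure before relying on it):

```csharp
using System.Collections.Generic;

static class Example
{
    static List<int> Transform(IReadOnlyList<int> source)
    {
        var result = new List<int>(source.Count);

        // Hoist the bound once instead of re-reading source.Count each iteration.
        for (int i = 0, n = source.Count; i < n; i++)
        {
            result.Add(source[i] * 2);
        }
        return result;
    }
}
```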
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271594", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76933/" ] }
271,676
We are using MySQL at the company I work for, and we build both client-facing and internal applications using Ruby on Rails. When I started working here, I ran into a problem that I had never encountered before: the database on the production server is set to Latin-1, meaning that the MySQL gem throws an exception whenever there is user input where the user copies & pastes UTF-8 characters. My boss calls these "bad characters", since most of them are non-printable characters, and says that we need to strip them out. I've found a few ways to do this, but eventually we ended up in a circumstance where a UTF-8 character was needed. Plus it's a bit of a hassle, especially since it seems like the only solution I ever read about for this issue is to just set the database to UTF-8 (which makes sense to me). The only argument I've heard for sticking with Latin-1 is that allowing non-printable UTF-8 characters can mess up text/full-text searches in MySQL. Is this really true? Are there other reasons one should use Latin-1 over UTF-8? It's my understanding that UTF-8 is superior and becoming more ubiquitous.
Unicode is certainly difficult, and the UTF-8 encoding has a couple of inconvenient properties. However, UTF-8 has become the de-facto standard encoding on the web, surpassing ASCII, Latin-1, UCS-2 and UTF-16. Just use UTF-8 everywhere . The most important reason why you should support Unicode is that you shouldn't make unnecessary assumptions about user input. I have no idea what your domain is, but things like Hebrew usernames, a blog post about China, a comment with Emoji, or simply well styled text – like “this” – should be possible… Oh, those were typographically correct quotation marks ( “” rather than "" ), en-wide dashes, and an ellipsis, which are characters that are common in English text, but not supported by ASCII or Latin-1. So not supporting other scripts isn't just a big f*ck you to other cultures, but sticking to Latin-1 doesn't even allow you to write proper English. The notion that Unicode only allows “bad characters” is wrong. Yes, text is really complicated, and Unicode won't hide that from you. Your boss may be thinking about composed characters, where one base codepoint such as a is modified by subsequent codepoints that e.g. represent diacritics to form one visual character such as á . This doesn't really get into your way when trying to do searches if you do some kind of normalization. For example, you could store all text in the NFC form which collapses such compositions into their precomposed form if one is available. When doing searching, you could also strip all composing characters from the text, but this may substantially change their meaning in some languages. Unicode also adds a lot of unprintable characters – but even ASCII has loads of them. Will you handle a NUL in the middle of a string? How about 0x1C, a “File Separator”? I've never seen half of those . Latin-1 adds a soft hyphen that indicates word break opportunities, but is otherwise invisible. Does that also break your full-text search? In other words, even ASCII and Latin-1 allow you to completely break your input if you assume it's all just printable text!
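A small, hedged C# sketch of the normalization idea mentioned above; whether stripping combining marks is appropriate depends on the languages you must support, as the answer notes:

```csharp
using System.Globalization;
using System.Linq;
using System.Text;

static class SearchNormalizer
{
    // Store text in NFC; for accent-insensitive matching, decompose (NFD)
    // and drop combining marks. This changes meaning in some languages,
    // so treat it as an option, not a universal rule.
    public static string ToNfc(string text) => text.Normalize(NormalizationForm.FormC);

    public static string StripCombiningMarks(string text)
    {
        var decomposed = text.Normalize(NormalizationForm.FormD);
        var kept = decomposed.Where(c =>
            CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark);
        return new string(kept.ToArray()).Normalize(NormalizationForm.FormC);
    }
}

// Example: SearchNormalizer.StripCombiningMarks("Ångström") returns "Angstrom".
```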
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271676", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/161281/" ] }
271,680
I'm in the early stages of working on an application that is using the Repository Pattern to provide a data access abstraction. This application will have some form of a simple REST API but I'm not sure how to approach this using repositories. To illustrate this, a typical (very simplified) example of a repository is something like this: <?php $repo = new PostRepository(); // this would be resolved from an IoC container $post = $repo->getByEmail('[email protected]'); or maybe a little less rigid, like this: $post = $repo->getBy('first_name', 'Bob'); But the pattern doesn't seem suited to anything more complicated like, say, the query from this url: api.example.com/posts?firstName=Bob&lastName=Doe&[email protected]&with=comments It seems like to handle stuff like that (let alone any much more complicated queries that many APIs support) you would pretty much end up having to reimplement the underlying ORM, or create a very leaky abstraction, neither of which seems ideal. Am I going about this wrong?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271680", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/166338/" ] }
271,881
I just used ~1 billion as the count for a z-index in CSS, and was thinking about the comparisons that must go on. Is there a difference in performance on the ALU level in comparisons between very large numbers vs very small ones? For example, would one of these two snippets be more expensive than the other? snippet 1 for (int i = 0; i < 10000000; i++){ if (i < 10000000000000) { //do nothing } } snippet 2 for (int i = 0; i < 10000000; i++){ if (i < 1000) { //do nothing } }
Every processor I've worked on does comparison by subtracting one of the operands from the other, discarding the result and setting the processor's flags (zero, negative, etc.) accordingly. Because subtraction is done as a single operation, the contents of the operands don't matter. The best way to answer the question for sure is to compile your code into assembly and consult the target processor's documentation for the instructions generated. For current Intel CPUs, that would be the Intel 64 and IA-32 Architectures Software Developer’s Manual. The description of the CMP ("compare") instruction is in volume 2A, page 3-126, or page 618 of the PDF, and describes its operation as: temp ← SRC1 − SignExtend(SRC2); ModifyStatusFlags; (* Modify status flags in the same manner as the SUB instruction*) This means the second operand is sign-extended if necessary, subtracted from the first operand and the result placed in a temporary area in the processor. Then the status flags are set the same way as they would be for the SUB ("subtract") instruction (page 1492 of the PDF). There's no mention in the CMP or SUB documentation that the values of the operands have any bearing on latency, so any value you use is safe.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271881", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100972/" ] }
271,923
It seems that all new programming languages, or at least the ones that became popular, use type inference. Even Javascript got types and type inference through various implementations (Acscript, typescript etc). It looks great to me, but I'm wondering if there are any trade-offs, or why, let's say, Java or the good old languages don't have type inference. When declaring a variable in Go without specifying its type (using var without a type or the := syntax), the variable's type is inferred from the value on the right hand side. D allows writing large code fragments without redundantly specifying types, like dynamic languages do. On the other hand, static inference deduces types and other code properties, giving the best of both the static and the dynamic worlds. The type inference engine in Rust is pretty smart. It does more than looking at the type of the r-value during an initialization. It also looks at how the variable is used afterwards to infer its type. Swift uses type inference to work out the appropriate type. Type inference enables a compiler to deduce the type of a particular expression automatically when it compiles your code, simply by examining the values you provide.
Haskell's type system is fully inferrable (leaving aside polymorphic recursion, certain language extensions, and the dreaded monomorphism restriction), yet programmers still frequently provide type annotations in the source code even when they don't need to. Why? Type annotations serve as documentation. This is especially true with types as expressive as Haskell's. Given a function's name and its type you can usually have a pretty good guess at what the function does, especially when the function is parametrically polymorphic. Type annotations can drive development. Writing a type signature before you write the body of a function feels kind of like test-driven development. In my experience, once you make a Haskell function compile it often works first time. (Of course, this does not obviate the need for automated tests!) Explicit types can help you check your assumptions. When I'm trying to understand some code that already works, I frequently pepper it with what I believe to be the correct type annotations. If the code still compiles I know I've understood it. If it doesn't, I read the error message. Type signatures let you specialise polymorphic functions. Very occasionally, an API is more expressive or useful if certain functions are not polymorphic. The compiler won't complain if you give a function a less general type than would be inferred. The classic example is map :: (a -> b) -> [a] -> [b]. Its more general form (fmap :: Functor f => (a -> b) -> f a -> f b) applies to all Functors, not just lists. But it was felt that map would be easier to understand for beginners, so it lives on alongside its bigger brother. On the whole, the downsides of a statically-typed-but-inferrable system are much the same as the downsides of static typing in general, a well-worn discussion on this site and others (Googling "static typing disadvantages" will get you hundreds of pages of flame-wars). Of course, some of said disadvantages are ameliorated by the smaller quantity of type annotations in an inferrable system. Plus, type inference has its own advantages: hole-driven development wouldn't be possible without type inference. Java* proves that a language requiring too many type annotations gets annoying, but with too few you lose out on the advantages I described above. Languages with opt-out type inference strike an agreeable balance between the two extremes. *Even Java, that great scapegoat, performs a certain amount of local type inference. In a statement like Map<String, Integer> map = new HashMap<>(); you don't have to specify the generic type of the constructor. On the other hand, ML-style languages are typically globally inferrable.
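As a small illustration of the local inference mentioned in the footnote, here is a hedged Java sketch (it assumes a Java 10+ compiler for var; the variable names are made up):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class InferenceExample {
        public static void main(String[] args) {
            // Diamond operator: the constructor's type arguments are inferred
            // from the declared type on the left-hand side.
            Map<String, Integer> counts = new HashMap<>();

            // Since Java 10, 'var' infers a local variable's type from its
            // initializer; 'names' is still statically typed as List<String>.
            var names = List.of("Ada", "Grace");

            counts.put("total", names.size());
            System.out.println(counts); // {total=2}
        }
    }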
{ "source": [ "https://softwareengineering.stackexchange.com/questions/271923", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/123524/" ] }
272,061
From what I have read: The reason is because it is not easy to determine which method will actually be called as we have inheritance. However, why doesn't Java at least have tail-recursion optimization for static methods and enforce proper way to call static methods with the compiler? Why doesn't Java have any support at all for tail-recursion? I am not sure if there is any difficulty here at all. Regarding the suggested duplicate , as explained by Jörg W Mittag 1 : The other question asks about TCO, this one about TRE. TRE is much simpler than TCO. Also, the other question asks about what limitations the JVM imposes on language implementations that wish to compile to the JVM, this question asks about Java, which is the one language that is not restricted by the JVM, since the JVM spec can be changed by the same people who design Java. And lastly, there isn't even a restriction in the JVM about TRE, because the JVM does have intra-method GOTO, which is all that's needed for TRE 1 Formatting added to call out points made.
As explained by Brian Goetz (Java Language Architect at Oracle) in this video : in jdk classes [...] there are a number of security sensitive methods that rely on counting stack frames between jdk library code and calling code to figure out who's calling them. Anything that changed the number of frames on the stack would break this and would cause an error. He admits this was a stupid reason, and so the JDK developers have since replaced this mechanism. He further then mentions that it's not a priority, but that tail recursion will eventually get done. N.B. This applies to HotSpot and the OpenJDK, other VMs may vary.
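For illustration, here is a hedged sketch of what tail-call elimination would buy: a tail-recursive Java method and the equivalent loop a TCO-capable compiler could rewrite it into (current HotSpot does not do this rewrite, so the recursive form still consumes stack):

    public class TailRecursion {
        // Tail-recursive form: the recursive call is the last action. With
        // tail-call elimination this would run in constant stack space; on
        // today's JVMs it grows the stack and can overflow for large n.
        static long factorial(long n, long acc) {
            if (n <= 1) {
                return acc;
            }
            return factorial(n - 1, n * acc);
        }

        // The transformation a tail-call-eliminating compiler would perform:
        // reuse the current frame by turning the call into a jump, i.e. a loop.
        static long factorialLoop(long n) {
            long acc = 1;
            while (n > 1) {
                acc *= n;
                n--;
            }
            return acc;
        }

        public static void main(String[] args) {
            System.out.println(factorial(10, 1));  // 3628800
            System.out.println(factorialLoop(10)); // 3628800
        }
    }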
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272061", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134583/" ] }
272,085
I just came across the following in a lab manual at university: You do need to study the interfaces of the classes by generating the javadoc for them so you know what operations are provided (feel free to look at the code, but when using somebody else’s code, as here, you should work from the javadoc rather than the code whenever possible). I don't understand why this is the case, since the javadoc could be out of date, or could describe the function of the code badly. Surely looking at the source code and reading the javadoc comments is best? Is there a reason why, or a case when, reading only the javadoc is the best thing to do?
The recommendation is probably about programming to an interface rather than the implementation . Sure, if you have access to the code then there's nothing stopping you from looking at the implementation to understand how it works. But you should always make sure that the how doesn't influence your consumption of the API. When you're consuming an API you're working against an abstraction. Try to concern yourself only with what the API offers (the contract) and not the how (the implementation). This is because there is no guarantee that an API's implementation won't change drastically from one version to the next, even if the contract has remained unchanged.
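A hedged sketch of what "program to the contract" can look like in practice (all names here are hypothetical, not taken from the lab manual's code):

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;

    public class ContractExample {
        // The caller depends only on the List interface, i.e. the documented contract...
        static String joinNames(List<String> names) {
            return String.join(", ", names);
        }

        public static void main(String[] args) {
            // ...so the concrete implementation can change without touching the caller.
            List<String> arrayBacked = new ArrayList<>(List.of("Alice", "Bob"));
            List<String> linked = new LinkedList<>(List.of("Carol", "Dave"));
            System.out.println(joinNames(arrayBacked)); // Alice, Bob
            System.out.println(joinNames(linked));      // Carol, Dave
        }
    }

Reading the javadoc of List tells you everything joinNames is allowed to rely on; peeking at ArrayList's internals would only tempt you to depend on details that may change.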
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272085", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74618/" ] }
272,226
I have always worked on projects where caching was done in the DAL; basically, just when you are about to make the call to the database, it checks whether the data is already there in the cache, and if it is, it just doesn't make the call and instead returns that data. I just recently read about caching at the business layer, so basically caching entire business objects. One advantage I can see straight away is much better response times. When would you prefer one over the other? And is caching at the business layer a common practice?
This is probably too broad for a definitive answer. Personally, I feel that a data access layer is the better place for caching, simply because it is supposed to be very simple - records go in and out and that's it. A business layer implements many additional rules of higher complexity, so it's better if it doesn't also have to manage per-object availability concerns in addition to multiple-object consistency concerns in the same class (or even the same method) - that would be a blatant violation of the SRP. (Of course, I only reached that insight after my service classes had grown to unmanageable complexity when they tried to do both caching and configuration simultaneously. There is no better teacher than experience, but the price sure is steep.)
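A hedged sketch of keeping the cache inside the data access layer with a decorator, so the business layer never sees it (the repository interface and all names are invented for illustration):

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical data-access interface.
    interface CustomerRepository {
        Optional<String> findNameById(long id);
    }

    // The caching concern stays in the DAL: a decorator wraps the real
    // repository, and callers in the business layer are none the wiser.
    class CachingCustomerRepository implements CustomerRepository {
        private final CustomerRepository delegate;
        private final Map<Long, Optional<String>> cache = new ConcurrentHashMap<>();

        CachingCustomerRepository(CustomerRepository delegate) {
            this.delegate = delegate;
        }

        @Override
        public Optional<String> findNameById(long id) {
            // Only hit the underlying store on a cache miss.
            return cache.computeIfAbsent(id, delegate::findNameById);
        }
    }

Invalidation policy (a TTL, or explicit eviction on writes) would live in the same decorator, which keeps the single-responsibility split described above.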
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272226", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/166270/" ] }
272,433
I've learned C# over the course of the past six months or so and am now delving into Java. My question is about instance creation (in either language, really) and it's more of: I wonder why they did it that way. Take this example Person Bob = new Person(); Is there a reason that the object is specified twice? Would there ever be a something_else Bob = new Person() ? It would seem if I were following on from convention it would be more like: int XIsAnInt; Person BobIsAPerson; Or perhaps one of these: Person() Bob; new Person Bob; new Person() Bob; Bob = new Person(); I suppose I'm curious if there's a better answer than "that's just the way it is done".
Would there ever be a something_else Bob = new Person()? Yes, because of inheritance. If: public class StackExchangeMember : Person {} Then: Person bob = new StackExchangeMember(); Person sam = new Person(); Bob is a person too, and by golly, he doesn't want to be treated differently than anyone else. Further, we could endow Bob with super powers: public interface IModerator { } public class StackOverFlowModerator : StackExchangeMember, IModerator {} IModerator bob = new StackOverFlowModerator(); And so, by golly, he won't stand for being treated any differently than any other moderator. And he likes to sneak around the forum to keep everyone in line while incognito: StackExchangeMember bob = new StackOverFlowModerator(); Then when he finds some poor 1st poster, he throws off his cloak of invisibility and pounces. ((StackOverFlowModerator) bob).Smite(sam); And then he can act all innocent and stuff afterwards: ((Person) bob).ImNotMeanIWasJustInstantiatedThatWay();
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272433", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/167235/" ] }
272,580
I am being taught by my boss (I just finished school and he wanted someone with a little programming experience, so he chose me to train me on what that company specializes in) and started working with ASP.NET MVC applications, some HTML and CSS. I'm fine with the web design stuff he gives me (it is pretty simple to understand without clarification). But, for instance, when he gives me a task to do with ASP.NET MVC, he explains it really well. However, he doesn't explain anything in the code he has just given me. (We use source control in Visual Studio 2013), so it's literally hundreds of lines of code, without any background on what it is supposed to do. The kind of code that I'm seeing is code I've never seen before, so it is really difficult to try and figure out. I would try and ask him more questions, but he is always working (it's his own business), and I feel as though he might get annoyed with all these questions I have on my hands. So, just as something to help me out until I get a grip on things, how can I ask my boss to put comments into his code that he gives me, but politely?
You're in the 'deep end' and, in my opinion, that's the best way to learn. Not because you're looking at stuff you don't have a clue about, but because it forces you to be more resourceful and find out what components play which role in a system you're new to. It doesn't help that your boss is too busy to handle somebody who is inquisitive (and you're totally within your rights to be inquisitive; you're keen to learn, which is good). But, unfortunately, asking your senior to change their style and approach for the sake of your learning may not go down too well, especially since you're dealing with somebody you say is busy. Being sat in front of thousands of lines of code you're not familiar with is the norm. You can't always have it explained in black and white with comments. However for the sake of learning while you're new to it, if you feel you definitely have to ask him for comments - maybe explain why. Explain it's because of the fact you don't want to bother him with questions as he's often busy. Not only will this come across a lot less like you're telling him to do something, but it also opens the floor to discussions on how he might, instead, prefer to put question asking time aside.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272580", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/166919/" ] }
272,610
I've been told "User Stories are not requirements, it is just a reminder of what customer wants, you cannot put requirements within a story". But let's take for an example that a customer wants a different processing for different credit cards. There are strict requirements that must be implemented and known so that test cases can be written. Where should requirements go if not in the user story? How can developers develop from a story if there are no lower requirements? How testers can write test cases (detailed ones) based on a user story? Where to requirements like DB constraints, fields validation etc. live outside of the user story?
This answer will focus on how to work with User Stories and lower level requirements. I won't be discussing the virtues, or lack thereof, of Scrum or Agile. I won't be talking about gurus either. This answer assumes you're on board with Scrum but haven't yet found a way to make it work for you. As others have mentioned, User Stories are meant to cover how the Users would like the software to be. Because the Users don't care about low level implementation stuff like database tables, constraints, architectural patterns, etc, you won't find such details in a User Story. However, that doesn't mean these details should not be recorded anywhere. When developers implement User Stories they need to be aware of lower level details typical Users won't know. This information can come from SMEs, BAs, the Product Owner, your architect, or any other expert or technically minded person. Does this mean low level details should be recorded in User Stories? No (and yes). At some point between the time the story is created and implemented somebody will need to work out how to implement it. This usually takes the form of conversations with the people involved in the Story (User, architect, developer, etc). These conversations should result in unambiguous Acceptance Criteria which clearly delineate the scope of the User Story's implementation. These details will need to be recorded somewhere and where that is is really up to you. The key here is that these details are elicited after the User Story has been created. I think this is what your guru is trying to emphasise. As a developer it is clear that you'll need a way to associate more specific requirements with your User Story. Just how you do that is entirely up to your team. If people on your team want to keep these details out of the User Stories then you may need to respect that. There are benefits to keeping your high level User Stories free of implementation details. It keeps them lean and your backlog can be read as a history of what your Users and Product Owner wanted. Just make your needs as a developer known as well. You should be able to work out a compromise where simply linking to the User Story keeps everyone happy.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272610", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/144171/" ] }
272,669
There is a problem with the way I code. Regardless of how much of a plan I write beforehand, the code becomes overcomplicated quickly. Reading books on good practice and attempting to adhere to their principles isn't working. Is there some surefire/proven way of auditing code to simplify it, other than being more thorough and dedicated in the planning stage?
Practice. No, really. You will never fully appreciate a technique in a book or blog post until you've written your first plate of spaghetti or big ball of mud, and realize that many of these techniques are just effective ways to tame that complexity. Then you will understand them at a deeper level, much deeper than the cargo cultist who is just stringing patterns together to look cool, but doesn't fully understand the reasons behind them. Check out the patterns. Are any of them easy? Well, they're definitely easier than writing a plate of spaghetti or a big ball of mud, and having to maintain it later. But you still have to make a study of these patterns. Even the best painter uses more than one brush.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272669", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/126538/" ] }
272,869
In an event-driven architecture each component only acts when an event is sent through the system. Imagine a hypothetical car with a brake pedal and a brake light. The brake light turns on when it receives a brake_on event, and off when it receives a brake_off event. The brake pedal sends a brake_on event when it is pressed down, and a brake_off event when it is released. This is all well and good, until you have the situation where the car is turned on with the brake pedal already pressed down . Since the brake light never received a brake_on event, it will stay off - clearly an undesirable situation. Turning the brake light on by default only reverses the situation. What could be done to resolve this 'initial state problem'? EDIT: Thank you for all the responses. My question was not about an actual car. In cars they solved this problem by continuously sending the state - therefore there is no startup issue in that domain. In my software domain, that solution would use many unnecessary CPU cycles. EDIT 2: In addition to @gbjbaanb's answer , I'm going for a system in which: the hypothetical brake pedal, after initialization, sends an event with its state, and the hypothetical brake light, after initialization, sends an event requesting a state event from the brake pedal. With this solution, there are no dependencies between components, no race conditions, no message queues to go stale, and no 'master' components.
There are many ways to do this, but I prefer to keep a message-based system as decoupled as possible. This means the overall system cannot read the state of any component, nor any component read the state of any other (as that way lies a spaghetti tangle of dependencies). So, while the running system will look after itself, we need a way to tell each component to start itself up, and we already have such a thing in the component registration, i.e. at startup the core system has to inform each component that it is now registered (or will ask each component to return its details so it can be registered). This is the stage at which the component can perform its startup tasks, and can send messages as it would do in normal operation. So the brake pedal, when the ignition is started, would receive a registration/check message from the car management and it would return not only an "I'm here and working" message, but it would then check its own state and send the messages for that state (e.g. a pedal-depressed message). The problem then becomes one of startup dependencies, as if the brake light is not yet registered then it will not receive the message, but this is easily resolved by queuing all of these messages until the core system has completed its startup, registration and check routine. The biggest benefit is that there is no special code required to handle initialisation except what you already have to write (ok, if your message-sending for brake pedal events is in a brake-pedal handler you will have to call that in your initialisation too, but that's usually not a problem unless you've written that code tied heavily to the handler logic) and no interaction between components except those that they already send to each other as normal. Message passing architectures are very good because of this!
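A hedged, minimal sketch of the register-then-report-state handshake described above (the bus, the event names and the classes are all hypothetical):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // A tiny message bus: components only ever talk through events.
    class EventBus {
        private final List<Consumer<String>> subscribers = new ArrayList<>();
        private final List<String> startupQueue = new ArrayList<>();
        private boolean started = false;

        void subscribe(Consumer<String> subscriber) {
            subscribers.add(subscriber);
        }

        void publish(String event) {
            if (!started) {
                // Queue events until registration is complete, so nobody
                // misses a state message sent during startup.
                startupQueue.add(event);
                return;
            }
            subscribers.forEach(s -> s.accept(event));
        }

        void startupComplete() {
            started = true;
            startupQueue.forEach(e -> subscribers.forEach(s -> s.accept(e)));
            startupQueue.clear();
        }
    }

    public class StartupExample {
        public static void main(String[] args) {
            EventBus bus = new EventBus();
            boolean pedalPressedAtIgnition = true;

            // Brake light: reacts only to events, never reads another component's state.
            bus.subscribe(event -> {
                if (event.equals("brake_on"))  System.out.println("light ON");
                if (event.equals("brake_off")) System.out.println("light OFF");
            });

            // Brake pedal: when registered at startup, it reports its current state.
            bus.publish(pedalPressedAtIgnition ? "brake_on" : "brake_off");

            // Core system finishes registration; queued state events are delivered.
            bus.startupComplete(); // prints "light ON"
        }
    }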
{ "source": [ "https://softwareengineering.stackexchange.com/questions/272869", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/92543/" ] }
273,029
I just had a discussion over a design choice after a code review. I wonder what your opinions are. There's this Preferences class, which is a bucket for key-value pairs. Null values are legal (that's important). We expect that certain values may not be saved yet, and we want to handle these cases automatically by initializing them with predefined default value when requested. The discussed solution used following pattern (NOTE: this is not the actual code, obviously - it's simplified for illustrative purposes): public class Preferences { // null values are legal private Map<String, String> valuesFromDatabase; private static Map<String, String> defaultValues; class KeyNotFoundException extends Exception { } public String getByKey(String key) { try { return getValueByKey(key); } catch (KeyNotFoundException e) { String defaultValue = defaultValues.get(key); valuesFromDatabase.put(key, defaultvalue); return defaultValue; } } private String getValueByKey(String key) throws KeyNotFoundException { if (valuesFromDatabase.containsKey(key)) { return valuesFromDatabase.get(key); } else { throw new KeyNotFoundException(); } } } It was criticized as an anti-pattern - abusing exceptions to control the flow . KeyNotFoundException - brought to life for that one use case only - is never going to be seen out of the scope of this class. It's essentially two methods playing fetch with it merely to communicate something to each other. The key not being present in the database isn't something alarming, or exceptional - we expect this to occur whenever a new preference setting is added, hence the mechanism that gracefully initializes it with a default value if needed. The counterargument was that getValueByKey - the private method - as defined right now has no natural way of informing the public method about both the value, and whether the key was there. (If it wasn't, it has to be added so that the value can be updated). Returning null would be ambiguous, since null is a perfectly legal value, so there's no telling whether it meant that the key wasn't there, or if there was a null . getValueByKey would have to return some sort of a Tuple<Boolean, String> , the bool set to true if the key is already there, so that we can distinguish between (true, null) and (false, null) . (An out parameter could be used in C#, but that's Java). Is this a nicer alternative? Yes, you'd have to define some single-use class to the effect of Tuple<Boolean, String> , but then we're getting rid of KeyNotFoundException , so that kind of balances out. We're also avoiding the overhead of handling an exception, although it's not significant in practical terms - there are no performance considerations to speak of, it's a client app and it's not like user preferences will be retrieved millions of times per second. A variation of this approach could use Guava's Optional<String> (Guava is already used throughout the project) instead of some custom Tuple<Boolean, String> , and then we can differentiate between Optional.<String>absent() and "proper" null . It still feels hackish, though, for reasons that are easy to see - introducing two levels of "nullness" seem to abuse the concept that stood behind creating Optional s in the first place. Another option would be to explicitly check whether the key exists (add a boolean containsKey(String key) method and call getValueByKey only if we have already asserted that it exists). 
Finally, one could also inline the private method, but the actual getByKey is somewhat more complex than my code sample, thus inlining would make it look quite nasty. I may be splitting hairs here, but I'm curious what you would bet on to be closest to best practice in this case. I didn't find an answer in Oracle's or Google's style guides. Is using exceptions like in the code sample an anti-pattern, or is it acceptable given that alternatives aren't very clean, either? If it is, under what circumstances would it be fine? And vice versa?
Yes, your colleague is right: that is bad code. If an error can be handled locally, then it should be handled immediately. An exception should not be thrown and then handled immediately. This is much cleaner than your version (the getValueByKey() method is removed): public String getByKey(String key) { if (valuesFromDatabase.containsKey(key)) { return valuesFromDatabase.get(key); } else { String defaultValue = defaultValues.get(key); valuesFromDatabase.put(key, defaultValue); return defaultValue; } } An exception should be thrown only if you do not know how to resolve the error locally.
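As a side note, when a stored null does not need to be distinguished from "missing", Java 8's Map API can fold the whole lookup/default/write-back into one call; a hedged sketch (class name hypothetical, field names taken from the question):

    import java.util.HashMap;
    import java.util.Map;

    public class PreferencesSketch {
        private final Map<String, String> valuesFromDatabase = new HashMap<>();
        private final Map<String, String> defaultValues = new HashMap<>();

        public String getByKey(String key) {
            // computeIfAbsent does the lookup, the default fetch and the
            // write-back in one step. Caveat: it treats a stored null as
            // absent, so for the class in the question (where null is a
            // legal stored value) the explicit containsKey check above is
            // still the right tool.
            return valuesFromDatabase.computeIfAbsent(key, defaultValues::get);
        }
    }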
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273029", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29029/" ] }
273,156
I've recently been trying to get into open source collaboration in GitHub and have run into a situation for which I am curious what is the preferred way to proceed. About a month ago, I found a project on GitHub for a library that I had already been using for a while and in which I had found (and fixed) a few bugs. As an initial foray into GitHub collaboration, I found the repo that seemed to have the highest volume of recent activity, fixed one bug, added unit tests, pushed up to GitHub, and made a pull request. Within a few hours, the maintainer of the repo that I forked had accepted the PR and merged in a few other PRs from other people that had been waiting as well. Spurred on by this, I fixed three more bugs that I had found, each in a separate branch of my own repo, and filed an issue and pull request for each one separately. That was just over a month ago, and the pull requests have been sitting there, untouched, ever since. The user whose repo I had forked doesn't seem to be very active, having only made 7 total contributions on GitHub in the past year, and that repo hasn't had any commits since that first pull request I made. So my question: How does one proceed in this situation? Ideally, I would like to avoid creating fragmentation of the library by going off and making a whole bunch of changes in my own repo that are not merged into the parent repo. Nonetheless, I would like to continue making bug fixes and adding features, but if I merge everything into my master branch and base all new fixes off of that branch, then if the maintainer of the repo that I forked ever does come back, I won't be able to split all of the changes into separate pull requests for each feature/bug fix (I've read that pull requests should generally be one pull request per feature or bug fix). Should I keep one branch that is in step with the original repo, base all of my new branches off of that one, and then keep all the commits merged in my master branch? It seems like that would leave me with a whole ton of branches and an increasingly burdensome task every time I need to merge new changes into my master branch. What is the typical way that one would approach a situation like this? It seems to be fairly common that a project will just become abandoned with the original contributors not around to review new pull requests. Is this a situation where somebody should just take up the helm and run with it? It seems like it would create fragmentation if the original contributors ever come back and want to work on the project again.
I haven't had this situation yet, but here's what I would try: Try contacting the owner Maybe they really lost interest, but are willing to transfer the project to somebody else, in particular someone who has already shown considerable commitment. But perhaps they are just occupied with something else (work, vacations, illness, other projects) and didn't have time to handle your PR, but plan to do so later. Or maybe they have really stopped work on the project permanently for whatever reason. Without asking, you won't find out. Get in touch with the community Surely there are other people who have contributed to, or at least used, the project. Check who has forked the project (even if they haven't made any change, they might still be interested in seeing this project thriving); check who has reported issues, or commented upon them. Maybe there's also a community outside GitHub, e.g. a mailing list, forum, or StackOverflow members. If you eventually really take over the project, you might want their support. And they need to know where the new master repository is. Continue to make good pull requests This shows both the owner and the community that you are serious about it, and lets them judge your contributions.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273156", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/78081/" ] }
273,302
In many resources I found "scope" and "namespaces" are used interchangeably, which seems a bit confusing since they mean different things. Scope defines the region of the code where a name is available. The LEGB rule defines the way names are looked up. Namespace is a place where you look up names. Then I read: "names are bind to a namespace according to where they are assigned..." (which I believe is the deal with scopes in lexical scoping). "functions add an extra namespace layer to your programs" [ ref. ] (don't they add a extra local scope?) "all the names assigned inside a function definition are put in the local scope (the namespace associated with the function call)." "global scope—that is, a namespace in which variables created (assigned) at the top level of the module file live." *all of the quotes are from learning python 5th edition ch17 Are namespaces in Python the way scopes are implemented? Are they the same thing? Can anyone enlighten me?
A namespace is a dictionary, mapping names (as strings) to values. When you do an assignment, like a = 1 , you're mutating a namespace. When you make a reference, like print(a) , Python looks through a list of namespaces to try and find one with the name as a key. A scope defines which namespaces will be looked in and in what order. The scope of any reference always starts in the local namespace, and moves outwards until it reaches the module's global namespace, before moving on to the builtins (the namespace that references Python's predefined functions and constants, like range and getattr ), which is the end of the line. Imagine you have a function named inner , nested within a global function named outer , and inner contains a reference to a name. Python first looks in the inner namespace. If the name's not there, Python then looks in the outer namespace. If that fails, Python tries the module's global namespace, then the builtin namespace, eventually throwing a NameError if the name isn't found. When we say x is in a function's namespace, we mean it is defined there, locally within the function. When we say x is in the function's scope, we mean x is either in the function's namespace or in any of the outer namespaces that the function's namespace is currently nested within. Whenever you define a function, you create a new namespace and a new scope. The namespace is the new, local hash of names. The scope is the implied chain of namespaces that starts at the new namespace, then works its way through any outer namespaces (outer scopes), up to the global namespace (the global scope), and on to the builtins. The terms can be used almost interchangeably, but that's not because they mean the same thing; it's because they overlap a lot in what they imply.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273302", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/154761/" ] }
273,305
Stroustrup says "Don’t immediately invent a unique base for all of your classes (an Object class). Typically, you can do better without it for many/most classes." (The C++ Programming Language Fourth Edition, Sec 1.3.4) Why is a base-class-for-everything generally a bad idea, and when does it make sense to create one?
Because what would that object have for functionality? In Java, all the base class has is a toString, a hashCode & equality, and a monitor + condition variable. toString is only useful for debugging. hashCode is only useful if you want to store it in a hash-based collection (the preference in C++ is to pass a hashing function to the container as a template param, or to avoid std::unordered_* altogether and instead use std::vector and plain unordered lists). Equality without a base object can be checked at compile time: if two objects don't have the same type then they cannot be equal, and in C++ this is a compile-time error. The monitor and condition variable are better included explicitly on a case-by-case basis. However, when there is more that the base class needs to do, then there is a use case. For example, in Qt there is the root QObject class which forms the basis of thread affinity, the parent-child ownership hierarchy and the signals-and-slots mechanism. It also forces use by pointer for QObjects. However, many classes in Qt don't inherit QObject because they have no need for signals and slots (particularly the value types of some description).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273305", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/130255/" ] }
273,581
I have been programming in c/c++ for a while now, and I have never once used a makefile. I know that it is supposed to be useful when you are dealing with a large project, but I have always been able to compile, build and execute my program using only gcc main.c -o run and if I am working with an external library (gtk+, for example) I make a simple bash script that looks like: #!/bin/bash echo compiling.... gcc main.c `pkg-config --cflags --libs gtk+-3.0` -o run So what I'm asking is do I need to learn how to make a makefile?
First, a Makefile for make is really useful when you build a program from several translation units (i.e. several *.c or *.cc files which are #include -ing some other header files) which are linked together (it is not very useful for a single source file tiny program). It organizes the various compilation steps of the translation units (and avoid running useless compilations again). See also this answer to a related question. During the debugging phase, compile with all warnings and debug info (e.g. gcc -Wall -Wextra -g for C code compiled by GCC ...) and learn how to use your debugger , e.g. GDB and memory leak detectors like valgrind .... Be scared of undefined behavior . With a recent GCC 5 compiler (at end of 2015), you may also want to use (occasionally) various -fsanitize= debugging options . Have some good testing procedures. For production and benchmarking, ask for compiler optimizations (e.g. add -march=native -O2 to your GCC compiler flags). For C++11 code, replace gcc with g++ -std=c++11 ; in your GNU Makefile , use CC & CFLAGS variables for C and CXX & CXXFLAGS for C++. As soon as you are writing some not-tiny program in C or C++ (e.g. more than ten thousand lines of source code), you'll want to organize it in several translation units (at least to avoiding very long build times while working on it, and preferably to group together related code or features). Notice that with C++ the usual included header files are themselves quite big, so having tiny translation units slows the overall build process. The size, organization, name, and purpose of a translation unit (i.e. of a foo.c or bar.cc file) is a matter of habits and conventional. Some people prefer having many tiny files (of a few dozen lines). I like having source files of several thousand lines. The recent version of GCC compiler, and the recent Linux kernel have dozens of human-written C or C++ source files bigger than ten thousand lines. Both are quite large software (many millions of source lines) Then, you could build such a program using some other builder (like omake or scons , or ninja etc...), or even using a shell script (BTW, the GNU make distribution contains a shell script to build it on systems without any make yet!). But yes, I believe you should learn GNU make (and you may even want to take advantage of recent GNU make features, e.g. Guile scriptability ). Here is an example of Makefile , and another one. Notice that for historical reasons the tab character is significant for make , so you need a specific mode in your editor. In many cases, e.g. for configuration reasons, the Makefile is generated (e.g. with autoconf or cmake ). In several cases some *.c files or some header *.h included by them are generated by other programs (like SWIG , GNU bison , etc...) You should look into the source code of several free software programs (e.g. GNU make itself ! See also sourceforge , github , etc... to find some) and try building them. That would teach you a lot. So you don't need make (or some other builder) yet, just because your programs are very tiny. As soon as they will grow, you'll need some building process. 
Notice that large programs (a web browser, an optimizing compiler, an OS kernel) often have many millions of lines of source code organized in at least hundreds of translation units, and often have some generated C/C++ code (by some script in awk, python, guile, etc., or a specialized program itself coded in C++, or an external generator like ANTLR or gperf) for application-specific metaprogramming or aspect-oriented programming purposes. PS. Some other programming languages (Ocaml, Haskell, Go, SML, ...) know about modules and have very different builders.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273581", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/168415/" ] }
273,698
C is one of the most widely-used languages in the world. It accounts for a huge proportion of existing code and continues to be used for a vast amount of new code. It's beloved by its users, it's so widely ported that being able to run C is to many the informal definition of a platform , and is praised by its fans for being a "small" language with a relatively clean set of features. So where are all the compilers? On the desktop, there are (realistically) two : GCC and Clang. Thinking about it for a few seconds you'll probably remember Intel exists as well. There are a handful of others, far too obscure for the average person to name and almost universally not bothering to support a recent language version (or often even a well-defined language subset, just "a subset"). Half of the members of this list are historical footnotes; most of the rest are very specialized and still don't actually implement the full language. Very few actually seem to be open-source. Scheme and Forth - other small languages that are beloved by their fans for it - probably have more compilers than actual users. Even something like SML has more "serious" implementations to choose between than C. Whereas the announcement of a new (unfinished) C compiler aiming at verification actually sees some pretty negative responses, and veteran implementations struggle to get enough contributors to even catch up to C99. Why? Is implementing C so hard? It isn't C++. Do users simply have a very skewed idea about what complexity group it falls in (i.e. that it actually is closer to C++ than Scheme)?
Today, you need a real C compiler to be an optimizing compiler , notably because C is no longer a language close to the hardware, because current processors are incredibly complex ( out-of-order , pipelined , superscalar , with complex caches & TLB , hence needing instruction scheduling , etc...). Today's x86 processors are not like i386 processors of the previous century, even if both are able to run the same machine code. See the C is not a low level language (Your computer is not a fast PDP-11) paper by David Chisnall. Few people are using naive non-optimizing C compilers like tinycc or nwcc , since they produce code which is several times slower than what optimizing compilers can give. Coding an optimizing compiler is difficult. Notice that both GCC and Clang are optimizing some "source language-neutral" code representation (Gimple for GCC, LLVM for Clang). The complexity of a good C compiler is not in the parsing phase! In particular, making a C++ compiler is not much harder than making a C compiler: parsing C++ and transforming it into some internal code representation is complex (because the C++ specification is complex), but is well understood, but the optimization parts are even more complex (inside GCC: the middle-end optimizations, source-language and target-processor neutral, form the majority of the compiler, with the rest being balanced between front-ends for several languages and back-ends for several processors). Hence most optimizing C compilers are also able to compile some other languages, like C++, Fortran, D, ... The C++ specific parts of GCC are about 20% of the compiler... Also, C (or C++) is so widely used that people expect their code to be compilable even when it does not exactly follow the official standards, which do not define precisely enough the semantics of the language (so each compiler may have its own interpretation of it). Look also into the CompCert proved C compiler, and the Frama-C static analyzer, which care about more formal semantics of C. And optimizations are a long-tail phenomenon: implementing a few simple optimizations is easy, but they won't make a compiler competitive! You need to implement a lot of different optimizations, and to organize and combine them cleverly, to get a real-world compiler that is competitive. In other words, a real-world optimizing compiler has to be a complex piece of software. BTW, both GCC and Clang/LLVM have several internal specialized C/C++ code generators. And both are huge beasts (several millions of source lines of code, with a growth rate of several percent each year) with a large developer community (a few hundred persons, working mostly full-time, or at least half-time). Notice that there is no (to the best of my knowledge) multi-threaded C compiler, even if some parts of a compiler could be run in parallel (e.g. intra-procedural optimization, register allocation, instruction scheduling... ). And parallel build with make -j is not always enough (especially with LTO ). Also, it is difficult to get funded on coding a C compiler from scratch, and such an effort needs to last several years. Finally, most C or C++ compilers are free software today (there is no longer a market for new proprietary compilers sold by startups) or at least are monopolistic commodities (like Microsoft Visual C++ ), and being a free software is nearly required for compilers (because they need contributions from many different organizations). 
I'd be delighted to get funding to work on a C compiler from scratch as free software, but I am not naive enough to believe that is possible today!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273698", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
273,706
From How Linux Works, by Brian Ward, I saw "SCSI Host Adapter", "SATA Host Driver", "Disk Driver (sd)", "CD/DVD Driver (sr)", "USB Host Driver", and "USB Storage Driver". What does "host" mean in these terms? Why isn't "guest" used? What is the difference between when "host" appears before "driver" and when it doesn't? What is the difference between when "host" appears before "adapter" and when it doesn't? Thanks.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273706", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/699/" ] }
273,962
I just received a quite rude bug report. The user basically says that we're doing it all wrong, using capital letters here and there, although he's in fact just pointing at one bug. On the one hand, I care a lot about our users and want to maintain a good relationship and a good rating for our app. On the other, I'd feel like a complete sellout if I replied overly politely. What's a decent way of responding? What should I keep in mind? What mindset should I have? It should be added that the user seems to be a 24-year-old CS student, and our product is an Android app that we're giving away free of charge.
What's a decent way of responding? Thank them for the report. Reassure them you are listening to their feedback. What should I keep in mind? That you can't please everyone and that some people don't seem capable of not being rude. What mindset should I have? You don't have to follow through on all the points that were brought up. It is your app and you decide where it goes. You will not be able to please everyone - so don't try. Make sure the group of people your app is for is catered for, but not everyone that uses it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/273962", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26416/" ] }
274,101
I'm learning Scheme from the SICP and I'm getting the impression that a big part of what makes Scheme and, even more so, LISP special is the macro system. But, since macros are expanded at compile-time, why don't people make equivalent macro systems for C/Python/Java/whatever? For example, one could bind the python command to expand-macros | python or whatever. The code would still be portable to people who don't use the macro system, one would just expand the macros before publishing code. But I don't know of anything like that except templates in C++/Haskell, which I gather aren't really the same. What about LISP, if anything, makes it easier to implement macro systems?
Many Lispers will tell you that what makes Lisp special is homoiconicity , which means that the code's syntax is represented using the same data structures as other data. For example, here's a simple function (using Scheme syntax) for calculating the hypotenuse of a right-angled triangle with the given side lengths: (define (hypot x y) (sqrt (+ (square x) (square y)))) Now, homoiconicity says that the code above is actually representable as a data structure (specifically, lists of lists) in Lisp code. Thus, consider the following lists and see how they "glue" together: (define #2# #3#) (hypot x y) (sqrt #4#) (+ #5# #6#) (square x) (square y) Macros allow you to treat the source code as just that: lists of stuff. Each of those 6 "sublists" contain either pointers to other lists, or to symbols (in this example: define , hypot , x , y , sqrt , + , square ). So, how can we use homoiconicity to "pick apart" syntax and make macros? Here's a simple example. Let's reimplement the let macro, which we'll call my-let . As a reminder, (my-let ((foo 1) (bar 2)) (+ foo bar)) should expand into ((lambda (foo bar) (+ foo bar)) 1 2) Here's an implementation using Scheme "explicit renaming" macros † : (define-syntax my-let (er-macro-transformer (lambda (form rename compare) (define bindings (cadr form)) (define body (cddr form)) `((,(rename 'lambda) ,(map car bindings) ,@body) ,@(map cadr bindings))))) The form parameter is bound to the actual form, so for our example, it would be (my-let ((foo 1) (bar 2)) (+ foo bar)) . So, let's work through the example: First, we retrieve the bindings from the form. cadr grabs the ((foo 1) (bar 2)) portion of the form. Then, we retrieve the body from the form. cddr grabs the ((+ foo bar)) portion of the form. (Note that this is intended to grab all the subforms after the binding; so if the form were (my-let ((foo 1) (bar 2)) (debug foo) (debug bar) (+ foo bar)) then the body would be ((debug foo) (debug bar) (+ foo bar)) .) Now, we actually build the resultant lambda expression and call using the bindings and body we have collected. The backtick is called a "quasiquote", which means to treat everything inside the quasiquote as literal datums, except the bits after the commas ("unquote"). The (rename 'lambda) means to use the lambda binding in force when this macro is defined , rather than whatever lambda binding might be around when this macro is used . (This is known as hygiene .) (map car bindings) returns (foo bar) : the first datum in each of the bindings. (map cadr bindings) returns (1 2) : the second datum in each of the bindings. ,@ does "splicing", which is used for expressions that return a list: it causes the list's elements to be pasted into the result, rather than the list itself. Putting all that together, we get, as a result, the list (($lambda (foo bar) (+ foo bar)) 1 2) , where $lambda here refers to the renamed lambda . Straightforward, right? ;-) (If it's not straightforward for you, just imagine how difficult it would be to implement a macro system for other languages.) So, you can have macro systems for other languages, if you have a way to be able to "pick apart" source code in a non-clunky way. There are some attempts at this. For example, sweet.js does this for JavaScript. † For seasoned Schemers reading this, I intentionally chose to use explicit renaming macros as a middle compromise between defmacro s used by other Lisp dialects, and syntax-rules (which would be the standard way to implement such a macro in Scheme). 
I don't want to write in other Lisp dialects, but I don't want to alienate non-Schemers who aren't used to syntax-rules . For reference, here's the my-let macro that uses syntax-rules : (define-syntax my-let (syntax-rules () ((my-let ((id val) ...) body ...) ((lambda (id ...) body ...) val ...)))) The corresponding syntax-case version looks very similar: (define-syntax my-let (lambda (stx) (syntax-case stx () ((_ ((id val) ...) body ...) #'((lambda (id ...) body ...) val ...))))) The difference between the two is that everything in syntax-rules has an implicit #' applied, so you can only have pattern/template pairs in syntax-rules , hence it's fully declarative. In contrast, in syntax-case , the bit after the pattern is actual code that, in the end, has to return a syntax object ( #'(...) ), but can contain other code too.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274101", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/168958/" ] }
274,203
For clarity, a deadline is: A time limit or deadline is a narrow field of time, or particular point in time, by which an objective or task must be accomplished. From wikipedia My whole software development career I have been doing "Agile" which everywhere has seemed to mean at least the following practices were adhered to: Weekly or Bi-Weekly sprints Retrospectives Sprint planning A product owner Scrum User Stories However, every project I have ever been on has insisted on setting a deadline. Given that Agile attempts to focus on adaptive planning, flexibility and change, are deadlines Agile? My own opinion is that they are not, as I see deadlines leading to a lack of flexibility and lack of quality. Instead, I think it provides more value to focus on Sprints and early deliveries. It seems, however, that in every circle I have been in, this is not the case, and deadlines are viewed as going hand in hand with Agile development.
Deadlines are a reality. Most times you have to have something by a certain date. It's unavoidable. Without deadlines, even agile projects can succumb to Parkinson's Law : Work expands so as to fill the time available for its completion. In other words, if your project can go on forever, it will. In relation to deadlines, Agile tries to do a few things: Ensure that everybody can always see how much work will get done by the deadline Ensure that the most important features are completed first Ensure that the completed features are usable, in the sense that they don't depend on features that haven't yet been completed Ensure that development continues at a sustainable pace That way, when the inevitable day comes, you don't have a useless pile of code, but a working, tested product with, hopefully, only the least important stuff unfinished. And nobody is surprised by the finished product. So yes. "Agile" and "deadlines" can be perfectly compatible.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274203", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13364/" ] }
274,231
The original binary search algorithm in the JDK used 32-bit integers and had an overflow bug if (low + high) > INT_MAX ( http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html ). If we rewrote the same binary search algorithm using (signed) 64-bit integers, can we assume that low + high will never exceed INT64_MAX because it's physically impossible to have 10^18 bytes of memory? When using (signed) 64-bit integers to represent physical quantities , is it reasonable to assume underflow and overflow can't happen?
The short answer is no. However, for some applications your assumption might be correct. Assuming a signed int, 2^63, with commas added for some clarity, = 9,223,372,036,854,775,808. So it's roughly 9 * 10^18. 10^18 is an "Exa". Wikipedia says "As of 2013, the World Wide Web is estimated to have reached 4 zettabytes.[12]", which is 4000 Exabytes. Therefore, the WWW is roughly 400 times larger than 2^63 bytes. Therefore, there is at least one physical quantity that is much larger than a signed (or unsigned) 64 bit integer. Assuming that your units are bytes . If your units were something much larger, like GigaBytes, then you'd be o.k., but your precision of measurement would be low. For another example, consider far away galaxies. The Andromeda Galaxy is actually one of the close ones, and it is 2.5 * 10^6 light years away. If your units were miles , that would be 14.5 * 10^18, more than a 64 bit signed integer. Now, obviously it depends on the units you use for your measurements, but some galaxies are way further away than Andromeda. ( The furthest known one is 13 * 10^9 L.Y. away. ) Depending on the precision you want for your measurement, it could overflow a 64 bit integer. ( Added ) Yes, miles are a lousy unit for astronomical distance. A more normal unit might be an Astronomical Unit , roughly 93 million miles. Using that unit of measurement, the furthest known galaxy is roughly 10^15 A.U. (if my math is right), which would fit into a 64 bit int. However, if you wanted to also measure the distance to the Moon, to nearby orbiting satellites, that unit is too large. One more example from electronics: the Farad (F), a unit of capacitance . Large capacitors range up to 5kF. And this number will likely increase over time as Hybrid cars, "smart grids", etc. improve. One can measure capacitance as small as 10^-18 F. So the overall range in "real" capacitance that we can measure today is 5*10^21, larger than a 64 bit integer.
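In any case, the midpoint overflow from the original JDK bug can be avoided without assuming anything about physical limits. A minimal sketch (written here in C#; the method names are mine, and the same fix applies in Java or C):

    // Naive midpoint: low + high can wrap past int.MaxValue for large indexes.
    static int MidUnsafe(int low, int high) => (low + high) / 2;

    // Standard fix: never forms a value larger than 'high', so it cannot
    // overflow as long as 0 <= low <= high.
    static int MidSafe(int low, int high) => low + (high - low) / 2;

    // Alternative: widen the intermediate sum to 64 bits, which is the
    // question's premise; the result always fits back into an int.
    static int MidWide(int low, int high) => (int)(((long)low + high) / 2);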
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274231", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/142355/" ] }
274,278
A colleague of mine today suggested that we go through all of the queries in our application and add indices accordingly. I feel this is premature optimisation because our application is not even released yet. I suggested monitoring for slow queries once we go live and then adding indices accordingly. What is the general consensus when designing your database: should you add a matching index every time you write a new query? Or is it better to just monitor and see how it goes?
Premature optimization is "optimizing" something because of a vague, intuitive sense that, y'know, this will probably be slow, especially to the detriment of code readability and maintainability . It doesn't mean willfully not following well-established good practices regarding performance. Sometimes that's a difficult line to draw, but I'd definitely say that not adding any indices before you go live is too-late optimization ; this will punish early adopters--your most eager and most important users--and give them a negative view of your product, which they will then spread around in reviews, discussions, etc. Monitoring queries to find pain points that need indexing is a good idea, but I'd make sure to do that no later than the beta.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274278", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/146960/" ] }
274,342
Babel's guide to ES6 says: let is the new var . Apparently the only difference is that var gets scoped to the current function , while let gets scoped to the current block . There are some good examples in this answer . I can't see any reason to use var in ES6 code. Even if you want to scope a given variable to the whole function, you can do so with let by putting the declaration at the top of the function block, which is what you should be doing with var anyway to indicate the actual scope. And if you want to scope something more finely in an for block or something, then you can do that too. So my instinct is to stop using var altogether when writing ES6 code. My question is, am I wrong about this? Is there any legitimate case where var would be preferable over let ?
Doug Crockford discusses let at this point in his talk, " The Better Parts ". The point is, let avoids a source of misunderstanding, esp. for programmers with expectations set by languages with block-scope. A var has function scope (it declares a variable that's visible throughout the function) even though it looks like it has block scope . var might possibly still be useful in an extreme case like machine-generated code, but I'm stretching hard there. ( const is also new and has block scope. After let x = {'hi': 'SE'} you can reassign to x , while after const y = x you cannot reassign to y . That's often preferable since it keeps something from accidentally changing out from under you. But to be clear, you can still modify the object y.hi = 'SO' unless you freeze it.) Realistically, your impression is right on for ES6: Adopt let and const . Stop using var . (In another performance of "The Better Parts" , Doug says why === was added rather than fixing the problems of == . == produces some "surprising" results, so just adopt === .) A Revealing Example Mozilla Developer Network gives an example where var does not work as intended. Their example is a realistic one that sets onclick handlers in a web page. Here's a smaller test case: var a = []; (function () { 'use strict'; for (let i = 0; i < 5; ++i) { // *** `let` works as expected *** a.push( function() {return i;} ); } } ()); console.log(a.map( function(f) {return f();} )); // prints [0, 1, 2, 3, 4] // Start over, but change `let` to `var`. // prints [5, 5, 5, 5, 5] var trips us up because all loop iterations share the same function-scoped i variable, which has the value 5 after the loop finishes. Another Telling Example function f(x) { let y = 1; if (x > 0) { let y = 2; // `let` declares a variable in this block } return y; } [f(1), f(-1)] // --> [1, 1] // Start over, but change `let` to `var`. // --> [2, 1] let declares block-scoped variables. var confuses us by referring to the same variable throughout the function.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274342", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30385/" ] }
274,346
I am working at a medium sized internet company, and I was handed the task of implementing the framework for our site's rebranding. It's quite an old (and ugly) site, and we want to update its look and feel (UI/UX) as fast as possible, but also with a good foundation for the future. I started to check out Zurb's Foundation and Twitter's Bootstrap, and I was impressed with the features they hold (like responsiveness). My only reservations were that our designers have made a design that will be impossible to create without massive customization, and the fact that we have some important (and ancient) HTML and JS that will be prohibitively difficult to remake with a ready-made framework. So I was wondering if I should make a basic lean framework myself, one that over time we can add features to, without starting to override designs and code structures that were made for similar, but different things.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274346", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/169215/" ] }
274,459
I am currently trying to figure out SOLID. So the Dependency Inversion Principle means that any two classes should communicate via interfaces, not directly. Example: If class A has a method that expects a pointer to an object of type class B , then this method should actually expect an object of type abstract base class of B . This helps with Open/Closed as well. Provided that I understood that correctly, my question would be is it a good practice to apply this to all class interactions or should I try to think in terms of layers ? The reason I am skeptical is because we are paying some price for following this principle. Say, I need to implement feature Z . After analysis, I conclude that feature Z consists of functionality A , B and C . I create a facade class Z , that, through interfaces, uses classes A , B and C . I begin coding the implementation and at some point I realize that task Z actually consists of functionality A , B and D . Now I need to scrap the C interface, the C class prototype and write a separate D interface and class. Without interfaces, only the class would've needed to be replaced. In other words, to change something, I need to change 1. the caller 2. the interface 3. the declaration 4. the implementation. In a Python directly coupled implementation, I would need to change only the implementation.
In many cartoons or other media, the forces of good and evil are often illustrated by an angel and a demon sitting on the character's shoulders. In our story here, instead of good and evil, we have SOLID on one shoulder, and YAGNI (You ain't gonna need it!) sitting on the other. SOLID principles taken to the max are best suited for huge, complex, ultra-configurable enterprisey systems. For smaller, or more specific systems, it is not appropriate to make everything ridiculously flexible, as the time you spend abstracting things will not prove to be a benefit. Passing interfaces instead of concrete classes sometimes means for example that you can easily swap reading from a file for a network stream. However, for a great number of software projects, that kind of flexibility is just not ever going to be needed, and you might as well just pass concrete file classes and call it a day and spare your brain cells. Part of the art of software development is having a good sense of what is likely to change as time goes on, and what isn't. For the stuff that is likely to change, use the interfaces and other SOLID concepts. For the stuff that won't, use YAGNI and just pass concrete types, forget the factory classes, forget all the runtime hooking up and configuration, etc, and forget a lot of the SOLID abstractions. In my experience, the YAGNI approach has proven to be correct far more often than it is not.
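To make the file-versus-network-stream trade-off concrete, here is a minimal C# sketch; the class and method names are made up for the example, not taken from the question:

    using System.IO;

    static class ReportReader
    {
        // Flexible version: depends on the abstraction, so a FileStream, a
        // network stream or an in-memory stream in a unit test all work unchanged.
        public static int CountLines(Stream input)
        {
            using (var reader = new StreamReader(input))
            {
                int count = 0;
                while (reader.ReadLine() != null) count++;
                return count;
            }
        }

        // YAGNI version: if this will only ever read local files, a plain path
        // is simpler to call and there is nothing to configure or inject.
        public static int CountLines(string path)
        {
            using (var stream = File.OpenRead(path))
                return CountLines(stream);
        }
    }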
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274459", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54268/" ] }
274,473
In one of the projects I'm working on the following pattern is seen on a fairly regular basis: var guid = Guid.NewGuid().ToString(); while (guid == Guid.Empty.ToString()) { guid = Guid.NewGuid().ToString(); } While I understand that a GUID is not guaranteed to be unique and, as per the MSDN documentation, a generated GUID may be zero , is this a practical consideration actually worth spending cycles testing for, both in the computational sense and in terms of developer time thinking about it?
I would suggest it's not worth checking for Guid.Empty. The docs for Guid.NewGuid for some reason mention that The chance that the value of the new Guid will be all zeros or equal to any other Guid is very low. Guid.NewGuid is a wrapper for the Win32 API CoCreateGuid , which makes no mention of returning all zeroes. Raymond Chen goes further , suggesting that no valid implementation of CoCreateGuid can generate GUID_NULL . So, no, I wouldn't worry about it. I won't guess as to why the Guid.NewGuid docs even mention it.
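In practical terms, the defensive loop from the question can simply be dropped (C#):

    // The retry loop adds nothing: a freshly generated GUID is, for all
    // practical purposes, never Guid.Empty.
    string guid = Guid.NewGuid().ToString();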
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274473", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2471/" ] }
274,562
I am trying to understand BDD. I've read some articles and as I understood BDD is "the next step" from TDD. I say that because I find both to be very similar, and as I could read in this article , BDD was born as an improvement from TDD. Great, I really like the idea. There is one practical point that I do not get, though: there is a .feature file in which the BA will write all the expected behavior that the system should have. As a BA, he has no idea how the system is built, so we will write something like this: +Scenario 1: Account is in credit+ Given the account is in credit And the card is valid And the dispenser contains cash When the customer requests cash Then ensure the account is debited And ensure cash is dispensed And ensure the card is returned Ok, this is great, but there are many parts of the system that will collaborate so that it can happen (think of Account obj, Dispenser obj, Customer obj and so on). To me this looks like an integration test. I would like to have Unit Tests. How do I test the code that checks if the dispenser has money? Or that the cash is dispensed? Or that the account is debited when required? How can I mix unit tests with "BA Created" tests?
Behavior Driven Development and Test Driven Development are complementary, but not a replacement for each other. How the application "behaves" is described in Acceptance Tests, which according to BDD would be the Features and Scenarios written in Cucumber. The nitty gritty details of how each small component works are described in Unit Tests. The outcomes of the Unit Tests support the Scenarios you write in Cucumber. Imagine the process for building a car. First, the product team comes up with their ideas, and eventually boil them down to usage scenarios: Scenario: Starting the car Given I am standing in front of the drivers door When I open the door Then the door should lift up DeLorean-style (yeah, baby!) When I get into the car And turn the key Then the engine roars to life I know this scenario sounds a bit silly, but it is a very high level, product and end user focused requirement. Just opening the door, turning the key and starting the engine involves a LOT of different components working together. This one test is not enough to make sure the vehicle works properly. You need to test the starter, the battery, the alternator, key, ignition switch --- and the list goes on --- just to get into the car and start it. Each of those components needs its own tests. The scenario above is a "Big Picture" test. Each component of the vehicle needs "Small Picture" tests to make sure they function properly within the whole. Building and testing software is the same in many respects. You design from top down, then build from bottom up. Why have a door that lifts up if you can't even start the engine? Why have a starter if you have no battery? Your product team will come up with the Acceptance Tests and flesh them out in Cucumber. This gives you the "Big Picture". Now it's up to the engineering team to design the proper components, and how they interact, then test each one separately --- these are your Unit Tests. Once the Unit Tests are passing, start implementing the Cucumber scenarios. Once those are passing, you have delivered what the product team has asked for.
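To make the "Small Picture" side concrete, here is what one such unit test might look like, sketched in C# with xUnit; the Battery class, its members and its numbers are invented for illustration, they are not part of the original answer:

    using Xunit;

    // Hypothetical component from the car analogy above.
    public class Battery
    {
        public double Volts { get; private set; } = 12.6;
        public bool CanCrankEngine() => Volts >= 9.6;
        public void Drain(double volts) => Volts -= volts;
    }

    public class BatteryTests
    {
        [Fact]
        public void FullyChargedBatteryCanCrankTheEngine()
        {
            var battery = new Battery();
            Assert.True(battery.CanCrankEngine());
        }

        [Fact]
        public void DeeplyDischargedBatteryCannotCrankTheEngine()
        {
            var battery = new Battery();
            battery.Drain(5.0);
            Assert.False(battery.CanCrankEngine());
        }
    }

Dozens of small tests like these back up the single Cucumber scenario: when the scenario fails, the failing unit test points at the broken component.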
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274562", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7764/" ] }
274,587
I was having a discussion with a co-worker, and we ended up having conflicting intuitions about the purpose of subclassing. My intuition is that if a primary function of a subclass is to express a limited range of possible values of its parent, then it probably shouldn't be a subclass. He argued for the opposite intuition: that subclassing represents an object's being more "specific", and therefore a subclass relationship is more appropriate. To put my intuition more concretely, I think that if I have a subclass that extends a parent class but the only code that subclass overrides is a constructor (yes, I know constructors don't generally "override", bear with me), then what was really needed was a helper method. For example, consider this somewhat real-life class: public class DataHelperBuilder { public string DatabaseEngine { get; set; } public string ConnectionString { get; set; } public DataHelperBuilder(string databaseEngine, string connectionString) { DatabaseEngine = databaseEngine; ConnectionString = connectionString; } // Other optional "DataHelper" configuration settings omitted public DataHelper CreateDataHelper() { Type dataHelperType = DatabaseEngineTypeHelper.GetType(DatabaseEngine); DataHelper dh = (DataHelper)Activator.CreateInstance(dataHelperType); dh.SetConnectionString(ConnectionString); // Omitted some code that applies decorators to the returned object // based on omitted configuration settings return dh; } } His claim is that it would be entirely appropriate to have a subclass like this: public class SystemDataHelperBuilder { public SystemDataHelperBuilder() : base(Configuration.GetSystemDatabaseEngine(), Configuration.GetSystemConnectionString()) { } } So, the question: Among people who talk about design patterns, which of these intuitions is correct? Is subclassing as described above an anti-pattern? If it is an anti-pattern, what is its name? I apologize if this turns out to have been an easily googleable answer; my searches on google mostly returned information about the telescoping constructor anti-pattern and not really what I was looking for.
If all you want to do is create class X with certain arguments, subclassing is an odd way of expressing that intent, because you aren't using any of the features that classes and inheritance give you. It's not really an anti-pattern, it's just strange and a bit pointless (unless you have some other reasons for it). A more natural way of expressing this intent would be a Factory Method , which in this case is a fancy name for your "helper method." Regarding the general intuition, both "more specific" and "a limited range" are potentially harmful ways of thinking about subclasses, because they both imply that making Square a subclass of Rectangle is a good idea. Without relying on something formal like LSP, I would say a better intuition is that a subclass either provides an implementation of the base interface, or extends the interface to add some new functionality.
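For instance, reusing the names from the question, the subclass could become a factory method on the builder itself; the method name ForSystemDatabase is just one possible choice, everything else comes from the question's own code:

    public class DataHelperBuilder
    {
        // ... constructor and CreateDataHelper() as in the question ...

        // Factory method: expresses "a builder configured for the system
        // database" without introducing a new type.
        public static DataHelperBuilder ForSystemDatabase()
        {
            return new DataHelperBuilder(
                Configuration.GetSystemDatabaseEngine(),
                Configuration.GetSystemConnectionString());
        }
    }

    // Usage: var helper = DataHelperBuilder.ForSystemDatabase().CreateDataHelper();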
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274587", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/129508/" ] }
274,593
While programming in C#, I stumbled upon a strange language design decision that I just can't understand. So, C# (and the CLR) has two aggregate data types: struct (value-type, stored on the stack, no inheritance) and class (reference-type, stored on the heap, has inheritance). This setup sounds nice at first, but then you stumble upon a method taking a parameter of an aggregate type, and to figure out if it is actually of a value type or of a reference type, you have to find its type's declaration. It can get really confusing at times. The generally accepted solution to the problem seems to be declaring all struct s as "immutable" (setting their fields to readonly ) to prevent possible mistakes, limiting struct s' usefulness. C++, for example, employs a much more usable model: it allows you to create an object instance either on the stack or on the heap and pass it by value or by reference (or by pointer). I keep hearing that C# was inspired by C++, and I just can't understand why didn't it take on this one technique. Combining class and struct into one construct with two different allocation options (heap and stack) and passing them around as values or (explicitly) as references via the ref and out keywords seems like a nice thing. The question is, why did class and struct become separate concepts in C# and the CLR instead of one aggregate type with two allocation options?
The reason C# (and Java and essentially every other OO language developed after C++) did not copy C++'s model in this aspect is because the way C++ does it is a horrendous mess. You correctly identified the relevant points above: struct : value type, no inheritance. class : reference type, has inheritance. Inheritance and value types (or more specifically, polymorphism and pass-by-value) don't mix; if you pass an object of type Derived to a method argument of type Base , and then call a virtual method on it, the only way to get proper behavior is to ensure that what got passed was a reference. Between that and all the other messes that you run into in C++ by having inheritable objects as value types (copy constructors and object slicing come to mind!) the best solution is to Just Say No. Good language design isn't just implementing features, it's also knowing what features not to implement, and one of the best ways to do this is by learning from the mistakes of those who came before you.
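To see the two kinds of semantics side by side, here is a minimal C# sketch (the types are invented for illustration). The copy-on-assignment behaviour of structs is exactly why mixing them with inheritance would reintroduce slicing-style surprises:

    struct PointStruct { public int X; }
    class PointClass { public int X; }

    class Demo
    {
        static void Main()
        {
            var s1 = new PointStruct { X = 1 };
            var s2 = s1;        // copies the value
            s2.X = 2;           // s1.X is still 1

            var c1 = new PointClass { X = 1 };
            var c2 = c1;        // copies the reference
            c2.X = 2;           // c1.X is now 2 -- both names refer to one object

            System.Console.WriteLine($"{s1.X} {c1.X}"); // prints "1 2"
        }
    }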
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274593", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/168946/" ] }
274,732
If a coder writes code that nobody other than he can understand, and code reviews always end with the reviewer scratching his head or holding their head in their hands, is this a clear sign that the coder is simply not cut out for professional programming? Would this be enough to warrant a career change? How important is comprehensible code in this industry? Consider these C examples and compare: if (b) return 42; else return 7; versus: return (b * 42) | (~(b - 1) * 7); Is there ever an excuse to employ the latter example? If so, when and why? EDIT: Leaving the original latter snippet for consideration, adding a correction: return (b * 42) | ((b - 1) & 7); I think the other interesting observation is that it requires that b is 1 for true and 0 for false; any other values would render strange results.
The first rule of any professional software engineer is to write code that is comprehensible. The second example looks like an optimized example for an older, non-optimizing compiler or just someone who happens to want to express themselves with bitwise operators. It's pretty clear what's going on if you are familiar with bitwise operations, but unless you're in a domain where that's the norm, avoid the second example. I should also point out that the curly braces are missing in the first example. The coder may insist that his code is efficient, and that may be the case, but if it can't be maintained, it might as well not be written at all, since it will generate horrendous technical debt down the road.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274732", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/153817/" ] }
274,757
Am I correct to assume that most end users are using an older version than Java 8? Since I do not want to force people to upgrade in order to use my application, should I plan it to use Java 7 or even 6 from the start, even if that means that I can't apply the benefits of the newer versions for myself as a developer?
Relying on an installed JRE to be correct doesn't make sense outside of a controlled corporate environment where all the desktops are locked to a specific version. In which case, you should ask this question of the person who controls that environment. For a mass-market Java desktop application, you should use an installer or launcher that bundles the JRE you want them to be using, or set up Java Web Start (JAWS). Note that if you actually physically distribute a bundled JRE, you have to comply with the license terms . I am not a lawyer, but those shouldn't be problematic for most purposes. If you are in a situation where you have a legal team, you should of course run it past them. For a developer or other technically-oriented tool, it is generally preferable to publish the jars on Maven Central , so distribution and download is entirely automated. This is one case where sticking to older Java versions is something of an advantage, as it enables their use in corporations locked down to an older version. But I wouldn't worry about that too much for a project started today. Finally, if all the above is too much work, you can just publish the source on github or bitbucket and let the user build it themselves.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274757", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/169664/" ] }
274,810
I have some raw data I need to do many things to (shift it, rotate it, scale it along certain axis, rotate it to a final position) and I am not sure what the best way to do this to maintain code readability. On one hand, I can make a single method with many parameters (10+) to do what I need, but this is a code reading nightmare. On the other hand, I could make multiple methods with 1-3 parameters each, but these methods would need to be called in a very specific order to get the correct result. I have read that it is best for methods to do one thing and do it well, but it seems like having many methods that need to be called in order opens up code for hard-to-find bugs. Is there a programming paradigm I could use that would minimize bugs and make code easier to read?
Beware of temporal coupling . However, this is not always an issue. If you must perform steps in order, it follows that step 1 produces some object required for step 2 (e.g. a file stream or other data structure). This alone requires that the second function must be called after the first; it is not even possible to call them in the wrong order accidentally. By splitting your functionality up into bite-sized pieces, each part is easier to understand, and definitely easier to test in isolation. If you have a huge 100 line function and something in the middle breaks, how does your failed test tell you what is wrong? If one of your five line methods breaks, your failed unit test directs you immediately toward the one piece of code that needs attention. This is how complex code should look: public List<Widget> process(File file) throws IOException { try (BufferedReader in = new BufferedReader(new FileReader(file))) { List<Widget> widgets = new LinkedList<>(); String line; while ((line = in.readLine()) != null) { if (isApplicable(line)) { // Filter blank lines, comments, etc. Ore o = preprocess(line); Ingot i = smelt(o); Alloy a = combine(i, new Nonmetal('C')); Widget w = smith(a); widgets.add(w); } } return widgets; } } At any point during the process of converting raw data into a finished widget, each function returns something required by the next step in the process. One cannot form an alloy from slag, one must smelt (purify) it first. One may not create a widget without the proper alloy (e.g. steel) as input. The specific details of each step are contained in individual functions that can be tested: rather than unit testing the entire process of mining rocks and creating widgets, test each specific step. Now you have an easy way of ensuring that if your "create widget" process fails, you can narrow down the specific reason. Aside from the benefits of testing and proving correctness, writing code this way is far easier to read. Nobody can understand a huge parameter list . Break it down into small pieces, and show what each little piece means: that is grokkable .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274810", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/169709/" ] }
274,988
I was reading the popular answer about Branch Prediction from https://stackoverflow.com/q/11227809/555690 , and there is something confusing me: If you guessed right, it continues on. If you guessed wrong, the captain will stop, back up, and yell at you to flip the switch. Then it can restart down the other path. If you guess right every time, the train will never have to stop. If you guess wrong too often, the train will spend a lot of time stopping, backing up, and restarting. But this is what I don't get: to know whether your guess was right or wrong, you have to make a condition check anyway . So how does branch prediction even work, if either way you are still doing the same conditional check? What I am trying to say is, isn't branch prediction exactly the same as having no branch prediction at all because you are doing the same conditional checks anyway? (obviously I'm wrong, but I don't get it)
Of course the condition is checked every single time. You cannot avoid this. Branch prediction and many other tricks that modern CPUs do are all about achieving as much processing as possible in parallel. One of the main features that accomplishes things in parallel is the CPU pipeline. When the pipeline is full, you get the most benefits. When the pipeline is not full, performance suffers. Usually, a condition is immediately followed by a conditional branch instruction, which will either branch or not branch, depending on the result of the condition. This means that there are two different streams of instructions that the CPU might need to follow. Unfortunately, immediately after loading the condition instruction and the branch instruction, the CPU does not know yet what the condition will evaluate to, but it still has to keep loading stuff into the pipeline. So it picks one of the two sets of instructions based on a guess ("prediction") as to what the condition will evaluate to. Later on, the CPU finds out whether its guess was right or wrong. If the guess turns out to be right, then the branch went to the correct place, and the right instructions were loaded into the pipeline, so all is good. If it turns out that the guess was wrong, then all the instructions that were loaded into the pipeline after the conditional branch instruction were wrong, they need to be discarded, and fetching of instructions must commence again from the right place. Amendment In response to StarWeaver's comment, to give an idea of what the CPU has to do in order to execute a single instruction: Consider something as simple as MOV AX,[SI+10] which we humans naïvely think of as "load AX with the word at SI plus 10". Roughly, the CPU has to: emit the contents of PC (the "program counter register") to the address bus; read the instruction opcode from the data bus; increment PC; decode the opcode to discover that it is supposed to be followed by an operand; emit the contents of PC to the address bus; read the operand (in this case 10) from the data bus; increment PC; feed the operand and SI to the adder; emit the result of the adder to the address bus; read AX from the data bus. This is a whopping 10 steps. Some of these steps will be optimized away even in non-pipelined CPUs, for example the CPU will almost always increment PC in parallel with the next step, which is an easy thing to do because the PC is a very, very special register which is never used for any other job, so there is no possibility of contention between different parts of the CPU for access to this particular register. But still, we are left with 8 steps for such a simple instruction, and note that I am already assuming some degree of sophistication on behalf of the CPU, for example I am assuming that there will be no need for a whole extra step for the adder to actually carry out the addition before the result can be read from it, and I am assuming that the output of the adder can be sent directly to the address bus without having to be stored in some intermediate internal addressing register. Now, consider that there exist more complicated addressing modes, like MOV AX, [DX+SI*4+10] , and even far more complicated instructions, like MUL AX, operand which actually perform loops inside the CPU to calculate their result. So, my point here is that the "atomic level" metaphor is far from suitable for the CPU instruction level. It might be suitable for the pipeline step level, if you don't want to go too far down to the actual logic gate level.
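To connect this to code: the cost shows up with a data-dependent branch in a hot loop. A rough C# sketch of the classic experiment behind the linked question (the array size, threshold and method names are illustrative, and exact timings depend on the CPU), where the same work typically runs noticeably faster once the data is sorted, because the branch becomes predictable:

    using System;
    using System.Diagnostics;

    class BranchDemo
    {
        static void Main()
        {
            var rng = new Random(42);
            int[] data = new int[1_000_000];
            for (int i = 0; i < data.Length; i++) data[i] = rng.Next(256);

            Console.WriteLine($"unsorted: {Time(data)} ms");
            Array.Sort(data);                   // same data, same branch -- but now predictable
            Console.WriteLine($"sorted:   {Time(data)} ms");
        }

        static long Time(int[] data)
        {
            var sw = Stopwatch.StartNew();
            long sum = 0;
            for (int pass = 0; pass < 100; pass++)
                foreach (int x in data)
                    if (x >= 128)               // data-dependent branch
                        sum += x;
            sw.Stop();
            GC.KeepAlive(sum);                  // discourage the work from being optimized away
            return sw.ElapsedMilliseconds;
        }
    }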
{ "source": [ "https://softwareengineering.stackexchange.com/questions/274988", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13833/" ] }
275,142
What is the moral responsibility of releasing open source software too soon? For instance, a close-to-complete product that hasn't been fully tested. What is the expectation of the programmer? Wait until it is fully tested, or release to open source and then continue further development, testing, and advancements? The fear is that software is open sourced and could potentially lead to issues for consumers. Is this an unfounded fear?
I believe on the contrary that you should release an open source software as soon as possible. There is no "too soon" for that (but it should compile). Or at least publish the source code very early and continuously (e.g. by frequent pushes on github ), without making formal releases. However, it is very important to flag it as alpha or beta stage, and if possible to say (e.g. in a README or TODO file, and on some blog, etc...) what is missing, not tested, or in bad shape. You should also use the version number to convey such information. With free software , the best that should happen is that someone glances into the source code and propose you a small patch improving it. This is why you make your software free! Hence, you need to make visible your daily work on your free software! External contributors would be pissed off if their patch won't work with, or is a duplicate of, your recent software source code. What you should be afraid of is nobody getting interested by your software (and contributing to it). Attracting outside interest to a free software (in particular, attracting external contributors) is a long journey.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275142", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/123656/" ] }
275,161
Background Last year, I was asked to create a tool to be used for business planning for around 10 users. This was done on behalf of another IT team who "sub-contracted" the work to me, and due to the project deadlines being a little unplanned on their side, I had to implement it in a bit of a rush. At the time, we decided that the quickest way would be to create an Excel workbook with VBA and then have the users download this VBA-enhanced workbook from an Intranet to use on their PCs. Excel was a constraint in this case because the planning system (i.e. database) we use can only interact via an Excel add-in which must be loaded at the same time the planning workbook is open. However, VBA was not a constraint at that time. The workbook I created contains around 4,000 lines of VBA code, and whilst I tried to separate data and presentation layers, I couldn't in all cases due to the project deadlines. To be honest, whilst I am proud of creating this workbook, I am at the same time a little disappointed in that it could've been done better, in terms of both coding and deployment to the users. Today Back to today and the IT team has again come to me to request a similar workbook (so I could reuse parts of the other workbook above), but this time it is a lot more complicated and will be used by a greater number of users (around 200). However, this time, it is a little better planned and I can see that we have a bit more time to plan things. Based on this, I thought about the solution and infrastructure as programming for 200 users has more of an impact than for 10 users. Therefore, I suggested to the team that perhaps we should consider migrating the existing code to a C# solution so that we could manage the code in a more refined way. I'm still considering it as an add-in written using VSTO / Excel-DNA which can then be deployed to the users. I discussed this with the IT team two weeks ago and everything seemed to be fine, until yesterday I received an email from one of the team (who does not know VBA or C#) questioning why we should start this new project in C# versus using the same approach as before. Some of their concerns were: It is a fairly important project so it has to work - a C# solution would not be as stable or work as well as the existing VBA-based solution. We would have to throw away what we [I] had done in the VBA solution and recreate it from scratch in C#. Someone will have to support two separate solutions, one in VBA and one in C#. [actually, they currently do not have anyone for support, I usually step in]. Now, I can understand some of their concerns to some degree, but I need to come to a decision on next steps and what to go back to them with. Personally, I would like to implement in C# because I feel it would lend itself better to building an "Enterprise" solution like this. Furthermore, I would like to take this opportunity to brush up on my C# skills as I am currently not as competent in C# as I am in VBA and I'd like a project like this to take me to the "next level". I prepared a list of points that I could use to try and convince them that a C# solution would be better for this project; this is what I have so far: Unit testing. Source control. Code documentation - for knowledge transfer to other support persons. Better coding conventions - can use things like ReSharper to enforce better naming and structure. Better IDE - fewer mistakes due to error highlighting. More modularity through assemblies - can promote re-use in future tools.
Managed deployment - can control who this tool is used by. Question: What other points could I add to convince them? Or am I trying to bite off more than I can chew with this project? Should I just keep quiet and do it in VBA anyway? I am aware that just moving to a new language because it's "newer" or seen to be "cooler" should not be a basis for a decision and as such I have resisted including it as a decision point - this is about facts. Also, I am not asking for a literal comparison between C# and VBA as languages, as there are plenty of comparisons on SO.
The three points you listed seem fair: It is a fairly important project so it has to work - a C# solution would not be as stable or work as well as the existing VBA-based solution. Indeed, later, you say: "I would like to take this opportunity to brush up on my C# skills as I am currently not as competent in C# as I am in VBA " (emphasis mine). In other words, you have a solution which works and went through intensive user testing. You want to throw all this away and rewrite everything in a language you don't know well. See the problem? We would have to throw away what we [I] had done in the VBA solution and recreate it from scratch in C#. Things You Should Never Do comes to mind. You are throwing away the code, as well as the user testing. Not a good thing. Someone will have to support two separate solutions, one in VBA and one in C#. [actually, they currently do not have anyone for support, I usually step in]. If the VBA version is still going to be used, the rewrite is indeed even more problematic. Why would you have two disparate systems which require your attention, when you may have only one which already works and which you can refactor and add features to? Some of your points, on the other hand, can be criticized: Unit testing. You can unit test your current project as well. If there is no convenient framework for that, create one. Source control. Source control deals with text. Your current code is text, therefore you can use source control for it. The language, the operating system, the framework or the ecosystem are completely irrelevant. You can (and should) use source control for any code you write: code in Visual Studio, or a piece of code you draft in a few minutes in LINQPad, or PowerShell scripts which automate a task, or database schema , or Excel macros. Code documentation - for knowledge transfer to other support persons. Agreed. Better coding conventions - can use things like ReSharper to enforce better naming and structure. Define "better". Are there coding conventions for Excel's macros? If yes, use them: they are not better or worse than any other. If not, create ones and publish them so that other people can use them too. The answers to a question posted in 2010 seem rather disappointing, but there may be new tools available since then. Note that the important part of coding conventions is that they should be enforced on commit. Better IDE - fewer mistakes due to error highlighting. Agreed. The fact that we can't write macros in Visual Studio is very unfortunate. More modularity through assemblies - can promote re-use in future tools. I'm pretty sure your current product can use some degree of modularity as well. Managed deployment - can control who this tool is used by. Agreed. Instead of a complete rewrite, you might search for a way to progressively move code from the macro to an ordinary assembly written in VB.NET. Why in VB.NET? Three reasons: There is less difference between VBA and VB.NET than there is between VBA and C#. You know VBA better, and this alone is a good reason to use VB.NET instead of C#. If you want to "brush up on your C# skills", do it with your personal projects, not business critical stuff. Any rewrite from one language to another leads to potential bugs. You don't need that for this project. Moving to a .NET assembly can give you the convenient environment of Visual Studio, with the convenient unit testing, TFS and error highlighting you currently use in other projects.
At the same time, if you move your code step by step, you don't take the risk of a complete rewrite (i.e. spending four months creating something nobody wants to use because of the high number of new bugs). For instance, you need to work on a specific feature? Think how you can move this particular feature to .NET first. This is quite similar to refactoring. Instead of rewriting the whole thing because you learnt some new design patterns and language features, you simply make small changes to the code you are working on right now.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275161", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/98841/" ] }
275,231
I am in the process of writing a requirements spec, and I have a dilemma in phrasing a piece of the requirements. Scenario: We download files from a website and the downloaded files need to be attached to an item in the CM tool we have. The downloaded files contain names which can be ASCII, ISO-8859-1, Japanese, etc. In the phrasing below, does "non-ASCII" cover all situations? The downloaded file name may contain non-ASCII characters and processing of this shall not crash the application
The requirement, as stated, is fuzzy to me. The first question I would have is: how many character encodings need to be supported? Possible interpretations include: Every encoding ever devised, including single-byte (e.g. ISO-8859-15 ), multibyte (e.g. Big5 , Shift-JIS , HZ ), and rare/weird ones (e.g. UTF-7 , Punycode , EBCDIC ). That's obviously extreme. How about just the minimum support, namely ISO-8859-1? Just ISO-8859-1 seems weaselly. How about just supporting modern best practices, namely Unicode as UTF-8 ? If you don't specify which encodings you mean, then when an encoding-specific bug occurs, you and the implementor could have a fight and you'd both be right. That is, by definition, the consequence of a fuzzy spec. Going further, what does the software need to do with the filename, besides not crashing? Should it… Preserve the filename in its original encoding, byte-for-byte? Normalize everything to Unicode? If so, does it need to auto-detect the source encoding? By what mechanism? Store both the Unicode form and the original, just in case the normalization fails? A better version of your requirement would be The downloader must support filenames in various encodings, including at least ASCII, ISO-8859-1, ISO-8859-15, KOI8-R, UTF-8, Shift-JIS, EUC-JP, GB2312, and Big5. If the web server response specifies an encoding, it must be respected. (If the encoding is unspecified, ISO-8859-1 may be assumed, or a better guess may be made.) Filenames shall be normalized to a Unicode representation in the content management system. The specific examples of required encodings are essential for devising acceptance criteria. The added sentences state what the software needs to do, beyond not crashing.
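If it helps to see what that requirement implies for the implementer, here is a minimal C# sketch of the "respect the declared charset, fall back to ISO-8859-1" rule; the class and method names are made up for the example:

    using System.Text;

    static class FilenameDecoder
    {
        // Honour the charset the server declared; fall back to ISO-8859-1
        // when it is missing or unknown.
        public static string Decode(byte[] raw, string declaredCharset)
        {
            Encoding encoding;
            try
            {
                encoding = string.IsNullOrEmpty(declaredCharset)
                    ? Encoding.GetEncoding("ISO-8859-1")
                    : Encoding.GetEncoding(declaredCharset);
            }
            catch (ArgumentException)   // unknown charset name
            {
                encoding = Encoding.GetEncoding("ISO-8859-1");
            }
            return encoding.GetString(raw);
        }
    }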
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275231", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43125/" ] }
275,680
Why hasn't the dream of declarative programming been realized? What are some concrete obstacles getting in the way? For a simple example why can't I say sort(A) is defined by sort(A) in perm(A) && asc(sort(A)) and automatically get a sorting algorithm out of it. perm means permutations and asc means ascending.
Logical languages already do this. You can define sort much as you did above. The main problem is performance. Computers might be great at computing lots of stuff, but they are inherently dumb. Every "clever" decision a computer might make was programmed into it by a programmer. And this decision is usually described not by what the end result looks like, but by how to achieve, step by step, this end result. Imagine the story of a Golem . If you try to give him an abstract command, then at best, he will do it inefficiently and at worst, will hurt himself, you or someone else. But if you describe what you want in the greatest detail possible, you are guaranteed that the task will be completed effectively and efficiently. It is the programmer's job to decide on what level of abstraction to use. For the application you are making, are you going to go high-level, describe it in an abstract way and take the performance hit, or go low and dirty, spend 10x more time on it, but get an algorithm that is 1000x more performant?
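To see why the performance problem is fundamental, here is the asker's definition executed literally — a generate-and-test sketch in C#, with names of my own choosing. It is a correct sort, but it does O(n!) work, which is exactly the gap a hand-written, step-by-step algorithm exists to close:

    using System.Collections.Generic;
    using System.Linq;

    static class DeclarativeSort
    {
        // "sort(A) is the permutation of A that is ascending", taken literally.
        public static int[] Sort(int[] a) => Permutations(a).First(IsAscending);

        static bool IsAscending(int[] p)
        {
            for (int i = 1; i < p.Length; i++)
                if (p[i - 1] > p[i]) return false;
            return true;
        }

        static IEnumerable<int[]> Permutations(int[] items)
        {
            if (items.Length <= 1)
            {
                yield return items;
                yield break;
            }
            for (int i = 0; i < items.Length; i++)
            {
                int[] rest = items.Take(i).Concat(items.Skip(i + 1)).ToArray();
                foreach (int[] tail in Permutations(rest))
                    yield return new[] { items[i] }.Concat(tail).ToArray();
            }
        }
    }

    // DeclarativeSort.Sort(new[] { 3, 1, 2 }) returns { 1, 2, 3 } -- but try it
    // on a dozen elements and you will be waiting a very long time.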
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275680", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
275,712
In C, there is no need to cast a void * to any other pointer type, it is always safely promoted. However, in C++, this is not the case. E.g., int *a = malloc(sizeof(int)); works in C, but not in C++. (Note: I know that you shouldn't use malloc in C++, or for that matter new , and should instead prefer smart pointers and/or the STL; this is asked purely out of curiosity) Why does the C++ standard not allow this implicit cast, while the C standard does?
Because implicit type conversions are usually unsafe, and C++ takes a safer stance on typing than C does. C will usually allow implicit conversions, even when the conversion is most likely an error. That's because C assumes that the programmer knows exactly what they are doing, and if not, it is the programmer's problem, not the compiler's problem. C++ will usually disallow things that could potentially be errors, and require you to explicitly state your intention with a type cast. That's because C++ is trying to be programmer-friendly. You might ask how come it is friendly when it is actually requiring you to type more. Well, you see, any given line of code, in any program, in any programming language, will generally be read many more times than it will be written (*). So, ease of reading is much more important than ease of writing. And when reading, having any potentially unsafe conversions stand out by means of explicit type casts helps to understand what is going on and to have a certain level of certainty that what is happening is in fact what was intended to happen. Besides, the inconvenience of having to type the explicit cast is trivial compared to the inconvenience of hours upon hours of troubleshooting to find a bug which was caused by a mistaken assignment that you could have been warned about, but never were. (*) Ideally it will be written only once, but it will be read every time someone needs to review it to determine its suitability for reuse, and every time there is troubleshooting going on, and every time someone needs to add code near it, and then every time there is troubleshooting of nearby code, and so on. This is true in all cases except for "write-once, run, then throw away" scripts, and so it is no wonder that most scripting languages have a syntax which facilitates ease of writing with complete disregard for ease of reading. Ever thought that perl is completely incomprehensible? You are not alone. Think of such languages as "write-only" languages.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275712", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/136554/" ] }
275,734
I've been reading a lot about microservice architectures for server applications, and have been wondering how the internal network usage is not a bottleneck or a significant disadvantage compared to a monolith architecture. For the sake of precision, here are my interpretations of the two terms: Monolith architecture: One application in a single language that handles all of the functionality, data, etc. A load balancer distributes requests from the end user across multiple machines, each running one instance of our application. Microservice architecture: Many applications (microservices) handling a small portion of the functionality and data. Each microservice exposes a common API that is accessed through the network (as opposed to inter-process communication or shared memory on the same machine). API calls are stitched together mostly on the server to produce a page, although perhaps some of this work is done by the client querying individual microservices. To my naive imagination, it seems like a microservices architecture uses slow network traffic as opposed to faster resources on the same machine (the memory and the disk). How does one ensure that API querying through the internal network will not slow down the overall response time?
Internal networks often use 1 Gbps connections, or faster. Optical fiber connections or bonding allow much higher bandwidths between the servers. Now imagine the average size of a JSON response from an API. How many such responses can be transmitted over a 1 Gbps connection in one second? Let's actually do the math. 1 Gbps is 131 072 KB per second. If an average JSON response is 5 KB (which is quite a lot!), you can send 26 214 responses per second through the wire with just one pair of machines . Not so bad, is it? This is why the network connection is usually not the bottleneck. Another aspect of microservices is that you can scale easily. Imagine two servers, one hosting the API, another one consuming it. If ever the connection becomes the bottleneck, just add two other servers and you can double the performance. This is when our earlier 26 214 responses per second becomes too small for the scale of the app. You add nine other pairs, and you are now able to serve 262 140 responses. But let's get back to our pair of servers and do some comparisons. If an average non-cached query to a database takes 10 ms., you're limited to 100 queries per second. 100 queries. 26 214 responses. Achieving the speed of 26 214 responses per second requires a great amount of caching and optimization (if the response actually needs to do something useful, like querying a database; "Hello World"-style responses don't qualify). On my computer, right now, DOMContentLoaded for Google's home page happened 394 ms. after the request was sent. That's less than 3 requests per second. For the Programmers.SE home page, it happened 603 ms. after the request was sent. That's not even 2 requests per second. By the way, I have a 100 Mbps internet connection and a fast computer: many users will wait longer. If the bottleneck were the network speed between the servers, those two sites could literally do thousands of calls to different APIs while serving the page. Those two cases show that the network probably won't be your bottleneck in theory (in practice, you should do the actual benchmarks and profiling to determine the exact location of the bottleneck of your particular system hosted on particular hardware). The time spent doing the actual work (be it SQL queries, compression, whatever) and sending the result to the end user is much more important. Think about databases Usually, databases are hosted separately from the web application using them. This can raise a concern: what about the connection speed between the server hosting the application and the server hosting the database? It appears that there are cases where, indeed, the connection speed becomes problematic, that is when you store huge amounts of data which don't need to be processed by the database itself and should be available right now (that is, large binary files). But such situations are rare: in most cases, the transfer speed is not that big compared to the speed of processing the query itself. Where the transfer speed actually matters is when a company is hosting large data sets on a NAS, and the NAS is accessed by multiple clients at the same time. This is where a SAN can be a solution. This being said, this is not the only solution. Cat 6 cables can support speeds up to 10 Gbps; bonding can also be used to increase the speed without changing the cables or network adapters. Other solutions exist, involving data replication across multiple NAS. Forget about speed; think about scalability An important point of a web app is to be able to scale.
While actual performance matters (because nobody wants to pay for more powerful servers), scalability is much more important, because it lets you throw additional hardware at the problem when needed. If you have an app that is not particularly fast, you'll lose money because you will need more powerful servers. If you have a fast app which can't scale, you'll lose customers because you won't be able to respond to an increasing demand. In the same way, virtual machines were perceived a decade ago as a huge performance issue. Indeed, hosting an application on a server vs. hosting it on a virtual machine had an important performance impact. While the gap is much smaller today, it still exists. Despite this performance loss, virtual environments became very popular because of the flexibility they give. As with the network speed, you may find that the VM is the actual bottleneck and that, given your actual scale, you will save billions of dollars by hosting your app directly, without the VMs. But this is not what happens for 99.9% of apps: their bottleneck is somewhere else, and the drawback of losing a few microseconds to the VM is easily compensated by the benefits of hardware abstraction and scalability.
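To make the back-of-envelope arithmetic above easy to check or adapt, here is a small sketch. The figures are assumptions carried over from the text (1 Gbps counted in the binary sense, a 5 KB average response, a 10 ms database query), not measurements, so plug in your own numbers.

link_bits_per_second = 2 ** 30            # "1 Gbps" counted as 2^30 bits, as in the text
response_bits = 5 * 1024 * 8              # assumed 5 KB average JSON response

responses_per_second = link_bits_per_second / response_bits
print(f"~{responses_per_second:,.0f} responses/second over one link")     # ~26,214

db_query_seconds = 0.010                  # assumed 10 ms non-cached query
print(f"~{1 / db_query_seconds:,.0f} sequential queries/second on one connection")  # 100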
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275734", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/165347/" ] }
275,816
I suspect a major code review cover-up in my team. Too many code reviews are merged without any comment. It seems to me like there's no such thing as a code review without a single comment. How can I, as a team lead, properly monitor that my team is doing a proper code review process, and how can I help them to maximize the process' benefits? Update Thought people might want to know about any update. I tried a lot of suggestions that were given here. Most were already in use. Some helped a bit. However, the problem remained - some people continuously got bad code in when I was not looking. I found that code review monitoring is not as helpful as giving my team tools to make their code better to begin with. So I added a library named "jscpd" to detect copy-pastes. The build failed on copy-pastes. That eliminated one problem immediately. Next we are going to try Code Climate. I am also doing a manual review of old code reviews once a sprint for half a day. I am converting TODOs into issues/tickets - as I found out people are writing them, but they are never handled at a later point. I am also holding meetings with the entire team to review code when it is appropriate. In general it feels like we are moving in the right direction.
I'm going to offer a different take from my fellow answerers. They are right - be involved if you want to see how things go. If you want more traceability, there are tools for that. But in my experience, I suspect that there's something else going on. Have you considered that your team may feel that the process is broken/stupid/ineffective for most commits? Remember, process is documenting what works well, not rules to obey. And as the team lead, you're there to help them be their best, not enforce rules. So in your retrospectives (if agile) or one-on-ones (if you're a manager) or in random impromptu hallway meetings (if you're a non-agile team lead and there's another manager doing one-on-ones), bring it up. Ask what people think of the code review process. How is it working? How is it not? Say you think it's maybe not benefiting the team as much as it could. Make sure you listen. You can do some advocacy for code reviews in these meetings, but it's better to listen to the feedback. Most likely, you'll find that either your team thinks that the "proper" process needs adjusting, or that there is some root cause (time pressure, lack of reviewers, Bob just commits his code so why can't we) to address. Forcing a tool on top of a broken process won't make the process any better.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275816", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/95463/" ] }
275,891
I have recently been reading a book entitled Functional Programming in C# and it occurs to me that the immutable and stateless nature of functional programming accomplishes similar outcomes to dependency injection patterns and is possibly even a better approach, especially in regards to unit testing. I would be appreciative if anyone who has experience with both approaches could share their thoughts and experiences in order to answer the primary question: is Functional Programming a viable alternative to dependency injection patterns?
Dependency management is a big problem in OOP for the following two reasons: The tight coupling of data and code. Ubiquitous use of side effects. Most OO programmers consider the tight coupling of data and code to be wholly beneficial, but it comes with a cost. Managing the flow of data through the layers is an unavoidable part of programming in any paradigm. Coupling your data and code adds the additional problem that if you want to use a function at a certain point, you have to find a way to get its object to that point. Use of side effects creates similar difficulties. If you use a side effect for some functionality, but want to be able to swap out its implementation, you pretty much have no other choice but to inject that dependency. Consider as an example a spammer program that scrapes web pages for email addresses then emails them. If you have a DI mindset, right now you're thinking of the services you will encapsulate behind interfaces, and which services will get injected where. I'll leave that design as an exercise for the reader. If you have an FP mindset, right now you're thinking of the inputs and outputs for the lowest layer of functions, like: Input a web page address, output the text of that page. Input a page's text, output a list of links from that page. Input a page's text, output a list of email addresses on that page. Input a list of email addresses, output a list of email addresses with duplicates removed. Input an email address, output a spam email for that address. Input a spam email, output the SMTP commands to send that email. When you think in terms of inputs and outputs, there are no function dependencies, only data dependencies. That's what makes them so easy to unit test. Your next layer up arranges for the output of one function to be fed into the input of the next, and can easily swap out the various implementations as needed. In a very real sense, functional programming naturally prods you to always invert your function dependencies, and therefore you usually don't have to take any special measures to do so after the fact. When you do, tools like higher-order functions, closures, and partial application make it easier to accomplish with less boilerplate. Note that it's not dependencies themselves that are problematic. It's dependencies that point the wrong way. The next layer up may have a function like: processText = spamToSMTP . emailAddressToSpam . removeEmailDups . textToEmailAddresses It's perfectly okay for this layer to have dependencies hard-coded like this, because its sole purpose is to glue the lower-layer functions together. Swapping an implementation is as simple as creating a different composition: processTextFancy = spamToSMTP . emailAddressToFancySpam . removeEmailDups . textToEmailAddresses This easy recomposition is made possible by a lack of side effects. The lower-layer functions are completely independent of each other. The next layer up may choose which processText is actually used based on some user config: actuallyUsedProcessText = if (config == "Fancy") then processTextFancy else processText Again, not an issue because all the dependencies point one way. We don't need to invert some dependencies in order to get them all pointing the same way, because pure functions already forced us to do so. Note that you could make this a lot more coupled by passing config down through to the lowest layer instead of checking it at the top. FP doesn't prevent you from doing this, but it does tend to make it a lot more annoying if you try.
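To make the glue layer concrete outside of Haskell notation, here is a rough Python sketch of the same idea. The function bodies are invented placeholders; the point is only that each step is a pure value-in, value-out function and that swapping an implementation means building a different composition, not injecting anything.

def text_to_email_addresses(text):
    # placeholder implementation: anything containing "@" counts as an address
    return [word for word in text.split() if "@" in word]

def remove_email_dups(addresses):
    return list(dict.fromkeys(addresses))      # keeps order, drops duplicates

def email_address_to_spam(address):
    return f"Dear {address}, you may already have won..."

def compose(*steps):
    def pipeline(value):
        for step in steps:
            value = step(value)
        return value
    return pipeline

process_text = compose(
    text_to_email_addresses,
    remove_email_dups,
    lambda addresses: [email_address_to_spam(a) for a in addresses],
)

# A "fancy" variant is just another composition with one step swapped out;
# the lower-layer functions never know which arrangement they are part of.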
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275891", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/44497/" ] }
275,901
Most of the assignments in my school for the initial programming classes required me to use arrays. I work full time now, and I have never used an array for any project that I have worked on. Even in the existing projects I have never seen the use of arrays anywhere. In my opinion, List is easier to use and is a standard. Why do professors tell students to use arrays in their assignments? Is it just so that students understand the basics? Since most universities teach Java, this question is specific to Java.
Because arrays teach concepts like indexing and bounds, two fundamentally important concepts in computer programming. Lists are not a "standard." There is a wide variety of problem spaces for which arrays are a perfect fit.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275901", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/170706/" ] }
275,957
When thinking of agile software development and all the principles (SRP, OCP, ...) I ask myself how to treat logging. Is logging next to an implementation an SRP violation? I would say yes, because the implementation should also be able to run without logging. So how can I implement logging in a better way? I've checked some patterns and came to the conclusion that, rather than violating the principles in some ad-hoc way of my own, it is better to rely on a well-known pattern, and the best fit here seems to be the decorator pattern. Let's say we have a bunch of components completely without SRP violations and then we want to add logging. component A component B uses A We want logging for A, so we create another component D that decorates A, both implementing an interface I. interface I component L (logging component of the system) component A implements I component D implements I, decorates/uses A, uses L for logging component B uses an I Advantages: - I can use A without logging - testing A means I don't need any logging mocks - tests are simpler Disadvantage: - more components and more tests I know this seems to be another open discussion question, but I actually want to know if someone uses better logging strategies than a decorator or an SRP violation. What about a static singleton logger which defaults to a NullLogger, where, if syslog logging is wanted, one changes the implementation object at runtime?
I would say you're taking SRP far too seriously. If your code is tidy enough that logging is the only "violation" of SRP then you are doing better than 99% of all other programmers, and you should pat yourself on the back. The point of SRP is to avoid horrific spaghetti code where code that does different things is all mixed up together. Mixing logging with functional code doesn't ring any alarm bells for me.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/275957", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/167735/" ] }
276,465
Consider the example below. Any change to the ColorChoice enum affects all IWindowColor subclasses. Do enums tend to cause brittle interfaces? Is there something better than an enum to allow for more polymorphic flexibility? enum class ColorChoice { Blue = 0, Red = 1 }; class IWindowColor { public: virtual ColorChoice getColor() const = 0; virtual void setColor( const ColorChoice value ) = 0; }; Edit: sorry for using color as my example, that's not what the question is about. Here is a different example that avoids the red herring and provides more info about what I mean by flexibility. enum class CharacterType { orc = 0, elf = 1 }; class ISomethingThatNeedsToKnowWhatTypeOfCharacter { public: CharacterType getCharacterType() const; void setCharacterType( const CharacterType value ); }; Further, imagine that handles to the appropriate ISomethingThatNeedsToKnowWhatTypeOfCharacter subclass are handed out by a factory design pattern. Now I have an API that cannot be extended in the future for a different application where the allowable character types are { human, dwarf }. Edit: Just to be more concrete about what I'm working on. I am designing a strong binding of this ( MusicXML ) specification and I am using enum classes to represent those types in the specification which are declared with xs:enumeration. I am trying to think about what happens when the next version (4.0) comes out. Could my class library work in a 3.0 mode and in a 4.0 mode? If the next version is 100% backward compatible, then maybe. But if enumeration values were removed from the specification then I'm dead in the water.
When used properly, enums are far more readable and robust than the "magic numbers" they replace. I don't normally see them making code more brittle. For instance: setColor() doesn't have to waste time checking if value is a valid color value or not. The compiler has already done that. You can write setColor(Color::Red) instead of setColor(0). I believe the enum class feature in modern C++ even lets you force people to always write the former instead of the latter. Usually not important, but most enums can be implemented with any size integral type, so the compiler can choose whatever size is most convenient without forcing you to think about such things. However, using an enum for color is questionable because in many (most?) situations there's no reason to limit the user to such a small set of colors; you might as well let them pass in any arbitrary RGB values. On the projects I work with, a small list of colors like this would only ever come up as part of a set of "themes" or "styles" that's supposed to act as a thin abstraction over concrete colors. I'm not sure what your "polymorphic flexibility" question is getting at. Enums don't have any executable code, so there's nothing to make polymorphic. Perhaps you're looking for the command pattern ? Edit: Post-edit, I'm still not clear on what kind of extendability you're looking for, but I still think the command pattern is the closest thing you'll get to a "polymorphic enum".
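If the "polymorphic enum" being asked about is really a set of values that each carry behaviour, the direction the command pattern points in can be sketched in a few lines. This is an illustrative Python sketch rather than C++, and the names are invented; the idea is that adding a new character type means adding a new class, not editing an existing enum that every interface depends on.

class CharacterType:                 # the role the enum used to play
    name = "generic"
    def greeting(self):
        return "..."

class Orc(CharacterType):
    name = "orc"
    def greeting(self):
        return "Waaagh!"

class Elf(CharacterType):
    name = "elf"
    def greeting(self):
        return "Mae govannen."

def describe(character_type):        # callers depend only on the base interface
    return f"{character_type.name}: {character_type.greeting()}"

print(describe(Orc()))               # a later Human or Dwarf class needs no changes here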
{ "source": [ "https://softwareengineering.stackexchange.com/questions/276465", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/130255/" ] }
276,554
I just implemented a caching layer in my web application, and now I'm wondering how QA is supposed to test it, since caching is transparent to the user. One idea I have is to put logging in the methods that invoke the code that populates the cache, and record when an object is pulled from the cache and when it requires recreation from the database, and then the testers could view the logs to see if, for example, a certain object is reloaded from the db every 10 minutes, instead of every page view. But can anyone suggest some better practices for this situation?
One question is whether the cache itself is really a requirement that should be tested by QA. Caching improves performance, so they could test the difference in performance to ensure it meets some requirement. But it is a good idea to have some testing around caching, whoever is responsible for it. We used performance counters. If your cache system takes advantage of these, they are straightforward. If there is any way to get a count from the cache itself, that is another option. Using your approach is nice too. If any of these are wrapped in automated tests that check the results, then no one has to look through logs to find answers.
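One way to make "get a count from the cache itself" testable is to wrap the cache with hit and miss counters and assert on them in an automated test. A minimal sketch (the class and method names are invented for illustration, not taken from any particular caching library):

class CountingCache:
    """Wraps a dict-like store and records hits and misses."""
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get_or_load(self, key, loader):
        if key in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[key] = loader(key)
        return self._store[key]

def test_second_read_is_served_from_cache():
    loads = []
    cache = CountingCache()
    def loader(key):
        loads.append(key)            # stands in for the expensive database call
        return f"row-{key}"
    cache.get_or_load(42, loader)
    cache.get_or_load(42, loader)
    assert (cache.hits, cache.misses) == (1, 1)
    assert loads == [42]             # the loader ran exactly once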
{ "source": [ "https://softwareengineering.stackexchange.com/questions/276554", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59124/" ] }
276,585
So I was reading a question about forcing the C# garbage collector to run where almost every single answer is the same: you can do it, but you shouldn't - except for some very rare cases. Sadly, nobody there elaborates on what such cases are. Can you tell me in what sort of scenario it is actually a good or reasonable idea to force garbage collection? I'm not asking for C#-specific cases but rather for all programming languages that have a garbage collector. I know that you can't force GC on all languages, like Java, but let's suppose you can.
You really can't make blanket statements about the appropriate way to use all GC implementations. They vary wildly. So I'll speak to the .NET one which you originally referred to. You must know the behaviour of the GC pretty intimately to do this with any logic or reason. The only advice on collection I can give is: Never do it. If you truly know the intricate details of the GC, you'll not need my advice so it won't matter. If you don't already know with 100% confidence it will help, and have to look online and find an answer like this: You should not be calling GC.Collect , or alternatively: You should go learn the details of how the GC works inside and out, and only then will you know the answer . There is one safe place it makes some sense to use GC.Collect : GC.Collect is an API available that you can use for profiling timings of things. You could profile one algorithm, collect, and profile another algorithm immediately afterwards knowing GC of the first algo wasn't occurring during your second one skewing the results. This sort of profiling is the single time I would ever suggest manually collecting to anyone. Contrived Example Anyway One possible use case is if you load really large things, they'll end up in the Large Object Heap which will go straight to Gen 2, though again Gen 2 is for long lived objects because it collects less frequently. If you know that you are loading short lived objects into Gen 2 for any reason, you could clear them out more quickly to keep your Gen 2 smaller and its collections faster. This is the best example I could come up with, and it's not good - the LOH pressure you're building here would cause more frequent collections, and collections are so frequent as it is - chances are it would be clearing out the LOH just as fast as you were blowing it out with temporary objects. I simply don't trust myself to presume a better collection frequency than the GC itself - tuned by people far, far smarter than I. So let's talk about some of the semantics and mechanisms in the .NET GC... or... Everything I think I know about the .NET GC Please, anyone who finds errors here - do correct me. Much of the GC is well known to be black magic and while I tried to leave out details I was uncertain of, I probably still got some things wrong. Below is purposely missing numerous details I'm not certain about, as well as a far larger body of information I'm simply unaware of. Use this information at your own risk. GC Concepts The .NET GC occurs at inconsistent times, which is why it's called "non-deterministic"; this means you can't rely on it to occur at specific times. It's also a generational garbage collector, which means it partitions your objects into how many GC passes they've lived through. Objects in the Gen 0 heap have lived through 0 collections; these have been newly made, so recently that no collection has occurred since their instantiation. Objects in your Gen 1 heap have lived through one collection pass, and likewise objects in your Gen 2 heap have lived through 2 collection passes. Now it's worth noting the reason it qualifies these specific generations and partitions accordingly. The .NET GC only recognizes these three generations, because the collection passes that go over these three heaps are all slightly different. Some objects may survive collection passes thousands of times.
The GC merely leaves these on the other side of the Gen 2 heap partition; there's no point in partitioning them anywhere further because they're actually Gen 44; the collection pass on them is the same as everything in the Gen 2 heap. There are semantic purposes to these specific generations, as well as implemented mechanisms that honor these, and I'll get to those in a moment. What's in a collection The basic concept of a GC collection pass is that it checks each object in a heap space to see if there are still live references (GC roots) to these objects. If a GC root is found for an object, it means currently executing code can still possibly reach and use that object, so it cannot be deleted. However if a GC root is not found for an object, it means the running process no longer needs the object, so it can remove it to free up memory for new objects. Now after it's finished cleaning up a bunch of objects and leaving some alone, there will be an unfortunate side effect: free space gaps between live objects where the dead ones were removed. This memory fragmentation, if left alone, would simply waste memory, so collections will typically do what's called "compaction", where they take all the live objects left and squeeze them together in the heap so the free memory is contiguous on one side of the heap for Gen 0. Now given the idea of 3 heaps of memory, all partitioned by the number of collection passes they've lived through, let's talk about why these partitions exist. Gen 0 Collection Gen 0, being the absolute newest objects, tends to be very small - so you can safely collect it very frequently. The frequency ensures the heap stays small and the collections are very fast because they are collecting over such a small heap. This is based more or less on a heuristic that claims: a large majority of the temporary objects which you create are very temporary, so temporary that they'll no longer be used or referenced almost immediately after use, and thus can be collected. Gen 1 Collection Gen 1, being objects that didn't fall into this very temporary category of objects, may still be rather short lived, because, still, a vast portion of the objects created are not used for long. Therefore Gen 1 collects rather frequently as well, again keeping its heap small so its collections are fast. However the assumption is that fewer of its objects are temporary than Gen 0's, so it collects less frequently than Gen 0. I will say I frankly don't know the technical mechanisms that differ between Gen 0's collection pass and Gen 1's, if there are any at all other than the frequency they collect. Gen 2 Collection Gen 2 now must be the mother of all heaps right? Well, yes, that's more or less right. It's where all your permanent objects live - the object your Main() lives in for instance, and everything that Main() references because those will be rooted until your Main() returns at the end of your process. Given that Gen 2 is a bucket for basically everything the other generations couldn't collect, its objects are largely permanent, or long lived at the least. So recognizing that very little of what's in Gen 2 will actually be something that can be collected, it doesn't need to collect frequently. This allows its collection to also be slower, since it executes so much less frequently. So this is basically where they've tacked on all the extra behaviours for odd scenarios, because they have the time to execute them.
Large Object Heap One example of the extra behaviours of Gen 2 is that it also does the collection on the Large Object Heap. Up until now I've been talking entirely about the Small Object Heap, but the .NET runtime allocates things of certain sizes to a separate heap because of what I referred to as compaction above. Compaction requires moving objects around when collections finish on the Small Object Heap. If there's a living 10 MB object in Gen 1, it's going to take far longer for it to complete the compaction after collection, thus slowing down Gen 1's collection. So that 10 MB object is allocated to the Large Object Heap, and collected during Gen 2 which runs so infrequently. Finalization Another example is objects with finalizers. You put a finalizer on an object that references resources beyond the scope of .NET's GC (unmanaged resources). The finalizer is the only way the GC gets to demand that an unmanaged resource is collected - you implement your finalizer to do the manual collection/removal/release of the unmanaged resource to ensure it doesn't leak from your process. When the GC gets to executing your object's finalizer, then your implementation will clear the unmanaged resource, making the GC capable of removing your object without risking a resource leak. The mechanism with which finalizers do this is by being referenced directly in a finalization queue. When the runtime allocates an object with a finalizer, it adds a pointer to that object to the finalization queue, and locks your object in place (called pinning) so compaction won't move it, which would break the finalization queue reference. As collection passes occur, eventually your object will be found to no longer have a GC root, but the finalization must be executed before it can be collected. So when the object is dead, the collection will move its reference from the finalization queue and place a reference to it on what's known as the "FReachable" queue. Then the collection continues on. At another "non-deterministic" time in the future, a separate thread known as the Finalizer thread will go through the FReachable queue, executing the finalizers for each of the objects referenced. After it's finished, the FReachable queue is empty, and it has flipped a bit on the header of each object that says they don't need finalization (this bit can also be flipped manually with GC.SuppressFinalize , which is common in Dispose() methods). I also suspect it has unpinned the objects, but don't quote me on that. The next collection that comes around on whatever heap this object is in will finally collect it. Gen 0 collections don't even pay attention to objects with that finalization-needed bit on; it automatically promotes them, without even checking for their root. An unrooted object needing finalization in Gen 1 will get tossed on the FReachable queue, but the collection doesn't do anything else with it, so it lives into Gen 2. In this way, all objects which have a finalizer and don't GC.SuppressFinalize will be collected in Gen 2.
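To tie this back to the one use I called safe, here is the shape of the profiling pattern, sketched in Python purely because its gc.collect() makes the example runnable as-is; the C# version is the same idea with GC.Collect() between runs.

import gc
import time

def profile(label, fn, *args):
    gc.collect()                           # flush the previous run's garbage first
    start = time.perf_counter()
    fn(*args)
    print(f"{label}: {time.perf_counter() - start:.4f}s")

# profile("algorithm A", algorithm_a, data)
# profile("algorithm B", algorithm_b, data)   # B is not charged for collecting A's leftovers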
{ "source": [ "https://softwareengineering.stackexchange.com/questions/276585", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13833/" ] }
276,859
In Java 8's java.util.function package, we have: Function : Takes one argument, produces one result. Consumer : Takes one argument, produces nothing. Supplier : Takes no argument, produces one result. ... : Other cases handling primitives, 2 arguments, etc... But I need to handle the " takes no argument, produces nothing " case. There is nothing for this in java.util.function . So, the question is: What is the name of ' a function that takes no argument and returns nothing '? In Java 8, its definition would be: @FunctionalInterface public interface InsertANameHere { void execute(); } Executor already exists and has another purpose : " An object that executes submitted Runnable tasks ". The signature doesn't match ( execute(Runnable):void ) and is not even a functional interface . Runnable exists, but it is strongly linked to the threading context: The package is java.lang , not java.util.function . The Javadoc states : " The Runnable interface should be implemented by any class whose instances are intended to be executed by a thread ". The name "Runnable" suggests some running code inside a thread.
Java's choice to do it that way with a separate name for every arity was not exactly worth emulating. However, if you must for the sake of consistency, or if you're writing very generic library code, Konrad's suggestions are good. I might throw Procedure into the ring. Using a pseudo-functional paradigm doesn't mean normal naming principles should go out the window. Interfaces should almost always be named after what they do , not after some generic syntactic idea. If the functions are placed into an undo stack, they should be named UndoFunction . If they are called from GUI events, they should be named GUIEventHandler .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/276859", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/104741/" ] }
276,867
Suppose I want to write a library that deals with vectors and matrices. Is it possible to bake the dimensions into the types, so that operations of incompatible dimensions generate an error at compile time? For example I would like the signature of dot product to be something like dotprod :: Num a, VecDim d => Vector a d -> Vector a d -> a where the d type contains a single integer value (representing the dimension of these Vectors). I suppose this could be done by defining (by hand) a separate type for each integer, and group them in a type class called VecDim . Is there some mechanism to "generate" such types? Or perhaps some better/simpler way of achieving the same thing?
To expand on @KarlBielefeldt's answer, here's a full example of how to implement Vectors - lists with a statically-known number of elements - in Haskell. Hold on to your hat... {-# LANGUAGE DataKinds #-} {-# LANGUAGE ExistentialQuantification #-} {-# LANGUAGE DeriveFoldable #-} {-# LANGUAGE DeriveFunctor #-} {-# LANGUAGE DeriveTraversable #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE KindSignatures #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE TypeFamilies #-} import Prelude hiding (foldr, zipWith) import qualified Prelude import Data.Type.Equality import Data.Foldable import Data.Traversable As you can see from the long list of LANGUAGE directives, this'll only work with a recent version of GHC. We need a way of representing lengths within the type system. By definition, a natural number is either zero ( Z ) or it's the successor of some other natural number ( S n ). So, for example, the number 3 would be written S (S (S Z)) . data Nat = Z | S Nat With the DataKinds extension , this data declaration introduces a kind called Nat and two type constructors called S and Z - in other words we have type-level natural numbers. Note that the types S and Z don't have any member values - only types of kind * are inhabited by values. Now we introduce a GADT representing vectors with a known length. Note the kind signature: Vec requires a type of kind Nat (ie a Z or an S type) to represent its length. data Vec :: Nat -> * -> * where VNil :: Vec Z a VCons :: a -> Vec n a -> Vec (S n) a deriving instance (Show a) => Show (Vec n a) deriving instance Functor (Vec n) deriving instance Foldable (Vec n) deriving instance Traversable (Vec n) The definition of vectors is similar to that of linked lists, with some extra type-level information about its length. A vector is either VNil , in which case it has a length of Z (ero), or it's a VCons cell adding an item to another vector, in which case its length is one more than the other vector ( S n ). Note that there's no constructor argument of type n . It's just used at compile time to track lengths, and will be erased before the compiler generates machine code. We've defined a vector type which carries around static knowledge of its length. Let's query the type of a few Vec s to get a feel for how they work: ghci> :t (VCons 'a' (VCons 'b' VNil)) (VCons 'a' (VCons 'b' VNil)) :: Vec ('S ('S 'Z)) Char -- (S (S Z)) means 2 ghci> :t (VCons 13 (VCons 11 (VCons 3 VNil))) (VCons 13 (VCons 11 (VCons 3 VNil))) :: Num a => Vec ('S ('S ('S 'Z))) a -- (S (S (S Z))) means 3 The dot product proceeds just as it would for a list: -- note that the two Vec arguments are declared to have the same length vap :: Vec n (a -> b) -> Vec n a -> Vec n b vap VNil VNil = VNil vap (VCons f fs) (VCons x xs) = VCons (f x) (vap fs xs) zipWith :: (a -> b -> c) -> Vec n a -> Vec n b -> Vec n c zipWith f xs ys = fmap f xs `vap` ys dot :: Num a => Vec n a -> Vec n a -> a dot xs ys = foldr (+) 0 $ zipWith (*) xs ys vap , which 'zippily' applies a vector of functions to a vector of arguments, is Vec 's applicative <*> ; I didn't put it in an Applicative instance because it gets messy . Note also that I'm using the foldr from the compiler-generated instance of Foldable . 
Let's try it out: ghci> let v1 = VCons 2 (VCons 1 VNil) ghci> let v2 = VCons 4 (VCons 5 VNil) ghci> v1 `dot` v2 13 ghci> let v3 = VCons 8 (VCons 6 (VCons 1 VNil)) ghci> v1 `dot` v3 <interactive>:20:10: Couldn't match type ‘'S 'Z’ with ‘'Z’ Expected type: Vec ('S ('S 'Z)) a Actual type: Vec ('S ('S ('S 'Z))) a In the second argument of ‘dot’, namely ‘v3’ In the expression: v1 `dot` v3 Great! You get a compile-time error when you try to dot vectors whose lengths don't match. Here's an attempt at a function to concatenate vectors together: -- This won't compile because the type checker can't deduce the length of the returned vector -- VNil +++ ys = ys -- (VCons x xs) +++ ys = VCons x (xs +++ ys) The length of the output vector would be the sum of the lengths of the two input vectors. We need to teach the type checker how to add Nat s together. For this we use a type-level function : type family (n :: Nat) :+: (m :: Nat) :: Nat where Z :+: m = m (S n) :+: m = S (n :+: m) This type family declaration introduces a function on types called :+: - in other words, it's a recipe for the type checker to calculate the sum of two natural numbers. It's defined recursively - whenever the left operand is greater than Z ero we add one to the output and reduce it by one in the recursive call. (It's a good exercise to write a type function which multiplies two Nat s.) Now we can make +++ compile: infixr 5 +++ (+++) :: Vec n a -> Vec m a -> Vec (n :+: m) a VNil +++ ys = ys (VCons x xs) +++ ys = VCons x (xs +++ ys) Here's how you use it: ghci> VCons 1 (VCons 2 VNil) +++ VCons 3 (VCons 4 VNil) VCons 1 (VCons 2 (VCons 3 (VCons 4 VNil))) So far so simple. What about when we want to do the opposite of concatenation and split a vector in two? The lengths of the output vectors depend on the runtime value of the arguments. We'd like to write something like this: -- this won't work because there aren't any values of type `S` and `Z` -- split :: (n :: Nat) -> Vec (n :+: m) a -> (Vec n a, Vec m a) but unfortunately Haskell won't let us do that. Allowing the value of the n argument to appear in the return type (this is commonly called a dependent function or pi type ) would require "full-spectrum" dependent types, whereas DataKinds only gives us promoted type constructors. To put it another way, the type constructors S and Z don't appear at the value level. We'll have to settle for singleton values for a run-time representation of a certain Nat .* data Natty (n :: Nat) where Zy :: Natty Z -- pronounced 'zed-y' Sy :: Natty n -> Natty (S n) -- pronounced 'ess-y' deriving instance Show (Natty n) For a given type n (with kind Nat ), there is precisely one term of type Natty n . We can use the singleton value as a run-time witness for n : learning about a Natty teaches us about its n and vice versa.
split :: Natty n -> Vec (n :+: m) a -> -- the input Vec has to be at least as long as the input Natty (Vec n a, Vec m a) split Zy xs = (VNil, xs) split (Sy n) (VCons x xs) = let (ys, zs) = split n xs in (VCons x ys, zs) Let's take it for a spin: ghci> split (Sy (Sy Zy)) (VCons 1 (VCons 2 (VCons 3 VNil))) (VCons 1 (VCons 2 VNil), VCons 3 VNil) ghci> split (Sy (Sy Zy)) (VCons 3 VNil) <interactive>:116:21: Couldn't match type ‘'S ('Z :+: m)’ with ‘'Z’ Expected type: Vec ('S ('S 'Z) :+: m) a Actual type: Vec ('S 'Z) a Relevant bindings include it :: (Vec ('S ('S 'Z)) a, Vec m a) (bound at <interactive>:116:1) In the second argument of ‘split’, namely ‘(VCons 3 VNil)’ In the expression: split (Sy (Sy Zy)) (VCons 3 VNil) In the first example, we successfully split a three-element vector at position 2; then we got a type error when we tried to split a vector at a position past the end. Singletons are the standard technique for making a type depend on a value in Haskell. * The singletons library contains some Template Haskell helpers to generate singleton values like Natty for you. Last example. What about when you don't know the dimensionality of your vector statically? For example, what if we're trying to build a vector from run-time data in the form of a list? You need the type of the vector to depend on the length of the input list. To put it another way, we can't use foldr VCons VNil to build a vector because the type of the output vector changes with each iteration of the fold. We need to keep the length of the vector a secret from the compiler. data AVec a = forall n. AVec (Natty n) (Vec n a) deriving instance (Show a) => Show (AVec a) fromList :: [a] -> AVec a fromList = Prelude.foldr cons nil where cons x (AVec n xs) = AVec (Sy n) (VCons x xs) nil = AVec Zy VNil AVec is an existential type : the type variable n does not appear in the return type of the AVec data constructor. We're using it to simulate a dependent pair : fromList can't tell you the length of the vector statically, but it can return something you can pattern-match on to learn the length of the vector - the Natty n in the first element of the tuple. As Conor McBride puts it in a related answer , "You look at one thing, and in doing so, learn about another". This is a common technique for existentially quantified types. Because you can't actually do anything with data for which you don't know the type - try writing a function of data Something = forall a. Sth a - existentials often come bundled up with GADT evidence which allows you to recover the original type by performing pattern-matching tests. Other common patterns for existentials include packaging up functions to process your type ( data AWayToGetTo b = forall a. HeresHow a (a -> b) ) which is a neat way of doing first-class modules, or building-in a type class dictionary ( data AnOrd = forall a. Ord a => AnOrd a ) which can help emulate subtype polymorphism. ghci> fromList [1,2,3] AVec (Sy (Sy (Sy Zy))) (VCons 1 (VCons 2 (VCons 3 VNil))) Dependent pairs are useful whenever the static properties of data depend on dynamic information not available at compile time. Here's filter for vectors: filter :: (a -> Bool) -> Vec n a -> AVec a filter f = foldr (\x (AVec n xs) -> if f x then AVec (Sy n) (VCons x xs) else AVec n xs) (AVec Zy VNil) To dot two AVec s, we need to prove to GHC that their lengths are equal.
Data.Type.Equality defines a GADT which can only be constructed when its type arguments are the same: data (a :: k) :~: (b :: k) where Refl :: a :~: a -- short for 'reflexivity' When you pattern-match on Refl , GHC knows that a ~ b . There are also a few functions to help you work with this type: we'll be using gcastWith to convert between equivalent types, and TestEquality to determine whether two Natty s are equal. To test the equality of two Natty s, we're going to need to make use of the fact that if two numbers are equal, then their successors are also equal ( :~: is congruent over S ): congSuc :: (n :~: m) -> (S n :~: S m) congSuc Refl = Refl Pattern matching on Refl on the left-hand side lets GHC know that n ~ m . With that knowledge, it's trivial that S n ~ S m , so GHC lets us return a new Refl right away. Now we can write an instance of TestEquality by straightforward recursion. If both numbers are zero, they are equal. If both numbers have predecessors, they are equal iff the predecessors are equal. (If they're not equal, just return Nothing .) instance TestEquality Natty where -- testEquality :: Natty n -> Natty m -> Maybe (n :~: m) testEquality Zy Zy = Just Refl testEquality (Sy n) (Sy m) = fmap congSuc (testEquality n m) -- check whether the predecessors are equal, then make use of congruence testEquality Zy _ = Nothing testEquality _ Zy = Nothing Now we can put the pieces together to dot a pair of AVec s of unknown length. dot' :: Num a => AVec a -> AVec a -> Maybe a dot' (AVec n u) (AVec m v) = fmap (\proof -> gcastWith proof (dot u v)) (testEquality n m) First, pattern match on the AVec constructor to pull out a runtime representation of the vectors' lengths. Now use testEquality to determine whether those lengths are equal. If they are, we'll have Just Refl ; gcastWith will use that equality proof to ensure that dot u v is well-typed by discharging its implicit n ~ m assumption. ghci> let v1 = fromList [1,2,3] ghci> let v2 = fromList [4,5,6] ghci> let v3 = fromList [7,8] ghci> dot' v1 v2 Just 32 ghci> dot' v1 v3 Nothing -- they weren't the same length Note that, since a vector without static knowledge of its length is basically a list, we've effectively re-implemented the list version of dot :: Num a => [a] -> [a] -> Maybe a . The difference is that this version is implemented in terms of the vectors' dot . Here's the point: before the type checker will allow you to call dot , you must have tested whether the input lists have the same length using testEquality . I am prone to getting if -statements the wrong way round, but not in a dependently-typed setting! You can't avoid using existential wrappers at the edges of your system, when you're dealing with runtime data, but you can use dependent types everywhere inside your system and keep the existential wrappers at the edges, when you perform input validation. Since Nothing is not very informative, you could further refine the type of dot' to return a proof that the lengths aren't equal (in the form of evidence that their difference is not 0) in the failure case. This is pretty similar to the standard Haskell technique of using Either String a to possibly return an error message, although a proof term is far more computationally useful than a string! Thus ends this whistle-stop tour of some of the techniques that are common in dependently-typed Haskell programming. Programming with types like this in Haskell is really cool, but really awkward at the same time. 
Breaking up all your dependent data into lots of representations which mean the same thing - Nat the type, Nat the kind, Natty n the singleton - is really quite cumbersome, despite the existence of code-generators to help with the boilerplate. There are also presently limitations on what can be promoted to the type level. It's tantalising though! The mind boggles at the possibilities - in the literature there are examples in Haskell of strongly typed printf , database interfaces, UI layout engines... If you want some further reading, there's a growing body of literature about dependently typed Haskell, both published and on sites like Stack Overflow. A good starting point is the Hasochism paper - the paper goes through this very example (among others), discussing the painful parts in some detail. The Singletons paper demonstrates the technique of singleton values (such as Natty ). For more information about dependent typing in general, the Agda tutorial is a good place to start; also, Idris is a language in development that's (roughly) designed to be "Haskell with dependent types".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/276867", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100284/" ] }
277,011
I am researching architectural patterns for an application I'm developing and a microservice approach seems like it would be a good choice but I am not sure how to handle interactions between the services. The application primarily deals with users, profiles owned by users, photos, and tags that represent one to many profiles in a photo. There would conceivably be methods to return photos uploaded by a user, return photos that contain a certain tagged profile, etc. This is my first stab at designing a microservice-based architecture and I come from a monolithic-esque domain model inspired history. In that world, controllers would stitch these domain objects together but I am having trouble wrapping my head around how this would work in a microservicey way.
Usually, services call other services when they need to access their data. Each piece of data should belong to a particular service which will be the only entry point to accessing this data and modifying it. Some services will be simple and usually correspond closely to your domain model (e.g. a service for handling users) while others will be high-level and use data from other services (e.g. displaying a list of photos together with information about the users who uploaded them). In your use case, you should start from the outside and think of what operations you want to make available to your user via an API (if it's a backend service) or what operations should be available in the GUI if it's a web application. Note that the GUI part is often a regular application with its own controllers: operations may be called via REST (like in AngularJS), but these endpoints are designed only for the GUI application's use and are not microservices in the common sense. Suppose you want to display photos together with information about uploaders. You could have a user service that returns information about a user given the user's ID and a photo service which can list photos (e.g. by searching by some criteria). The list of photos would contain for each photo the ID of the uploading user. This way these two services are not coupled - the photo service only knows about user IDs but nothing about the user data themselves. On top of these two services you could create a third service with an operation such as "list photos with information about uploaders" which would call the two other services and combine the data they return. Alternatively, this operation could be performed by your web application instead of a service.
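A minimal sketch of that third, composing service (the URLs and field names here are hypothetical, just to show the shape): it asks the photo service for photos, collects the uploader IDs, asks the user service about those users, and merges the two results.

import requests   # assuming plain HTTP/JSON between the services

PHOTO_SERVICE = "http://photo-service/photos"   # hypothetical endpoints
USER_SERVICE = "http://user-service/users"

def photos_with_uploaders(query):
    photos = requests.get(PHOTO_SERVICE, params={"q": query}).json()
    uploader_ids = {p["uploader_id"] for p in photos}
    users = {uid: requests.get(f"{USER_SERVICE}/{uid}").json() for uid in uploader_ids}
    return [dict(photo, uploader=users[photo["uploader_id"]]) for photo in photos]

# The photo service still knows nothing about user data, and the user service
# knows nothing about photos; only this thin composition layer knows about both.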
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277011", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29845/" ] }
277,115
What would you call an API that is HTTP-based, uses URIs to name resources and HTTP verbs (PUT, POST, DELETE, GET...) to manipulate those resources? According to Roy Fielding's complaints it is not REST, because there is no hypermedia. Internally, in my team, everyone calls it a "REST API". I call it "REST-like" but it is not descriptive and its meaning is fuzzy. I'm quite confused about it, since there is huge disagreement about REST. I don't want to take part in flame wars, but just use correct terms.
Call it an HTTP API . It conforms to HTTP standards, and doesn't have anything else layered on top (e.g. SOAP). The HTTP standards define resources, verbs, headers, content negotiation, etc. REST (REpresentational State Transfer) is an architecture with requirements that happen to be amenable to existing HTTP standards, but HTTP works all on its own. In my experience, 90% of "REST HTTP APIs" should call themselves "just" an HTTP API. Don't be ashamed to leave off the REST label. As with microservices and non-relational databases, you don't have to have a RESTful API to be cool. Roy set out to create the longest-lived, most backwards compatible, networked application architecture that he could. He did a good job. But not everything needs 40+ years of compatibility.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277115", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/167529/" ] }
277,343
I'm working on a set of web services for a mobile client, and the requirements call for a unique device id to be included with all requests, to be stored in certain requests, and used to filter results in others. A suggestion was made that it be put in a custom HTTP header since it will be included with all requests, so I began to wonder what criteria might be used to determine if a given piece of data belongs in a header or along with other data in the request body. Are there any such criteria?
When the information is important, you should put it into the body. Why? Proxy servers are allowed to modify headers. Many are configured to strip any headers they don't know. This, however, only applies when you use unencrypted HTTP. When you use HTTPS, the proxy can't change the headers because they are encrypted. When you use a webservice, you usually do so for interoperability with other devices, services and tools. Most APIs and tools which work with webservices can easily change requests, but many make it difficult or even impossible to add custom headers. This, of course, only applies when interoperability is a concern. But when you don't care, you might want to ask yourself why you are using webservices in the first place instead of just building your own protocol on raw TCP.
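As a small illustration of the recommendation (the field names are made up), important data such as a device identifier travels inside the JSON body, where no intermediary will strip it and any HTTP client can send it without custom-header support:

import requests

def send_reading(api_url, device_id, reading):
    body = dict(reading, device_id=device_id)     # the important data goes in the body
    return requests.post(api_url, json=body, timeout=10)

# The fragile alternative would be:
# requests.post(api_url, json=reading, headers={"X-Device-Id": device_id})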
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277343", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34183/" ] }
277,462
I am trying to create a flexible ACL framework in Java for my application. Many ACL frameworks are built on a whitelist of rules, where a rule is in the form of owner:action:resource . For example, "JOHN can VIEW resource FOOBAR-1" "MARY can VIEW resource FOOBAR-1" "MARY can EDIT resource FOOBAR-1" This is attractive because the rules can easily be serialized/persisted to a database. But my application has complex business logic. For example, "All users in department 1 with over 5 years of seniority can VIEW resource FOOBAR-1, else not authorized" "All users in department 2, if the date is after 03/15/2016, can VIEW resource FOOBAR-2, else not authorized" Upon first thought, it would be a nightmare to devise a database schema that could handle infinitely complex rules such as these. Therefore, it seems as though I would need to "bake" them into the compiled application, evaluate them for each user, and then produce owner:action:resource rules as a result of the evaluation. I want to avoid baking the logic into the compiled application. So, I was thinking of representing a rule in the form of predicate :action:resource , where the predicate is a boolean expression that determines whether a user is allowed. The predicate would be a string of a JavaScript expression that could be evaluated by Java's Rhino engine. For example, return user.getDept() == 1 && user.seniority > 5; In doing so, the predicates could easily be persisted to the database. Is this clever ? Is this sloppy ? Is this gimmicky ? Is this over-engineered ? Is this safe (apparently, Java can sandbox the Rhino engine).
Piping dynamic data into an interpreter of your implementation language is usually a bad idea, since it escalates the potential for data corruption into a potential for malicious application takeover. In other words, you are going out of your way to create a code injection vulnerability. Your problem can be better solved by a rules engine or maybe a domain-specific language (DSL) . Look those concepts up, there is no need to reinvent the wheel.
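One hedged sketch of the rules-engine direction (the schema here is invented, not a real library): keep each rule as structured data that a tiny interpreter evaluates against the user, so the rules stay easy to persist in a database but nothing user-supplied is ever executed as code.

RULES = [
    {"conditions": [("dept", "==", 1), ("seniority", ">", 5)],
     "action": "VIEW", "resource": "FOOBAR-1"},
]

OPS = {
    "==": lambda a, b: a == b,
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
}

def allowed(user, action, resource):
    for rule in RULES:
        if rule["action"] == action and rule["resource"] == resource:
            if all(OPS[op](user.get(field), value)
                   for field, op, value in rule["conditions"]):
                return True
    return False

print(allowed({"dept": 1, "seniority": 7}, "VIEW", "FOOBAR-1"))   # True
print(allowed({"dept": 2, "seniority": 7}, "VIEW", "FOOBAR-1"))   # False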
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277462", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/172568/" ] }
277,594
I have recently learned about several cool open source add-on tools and libraries for Microsoft Visual Studio. The tools just help you be more productive; the libraries would get linked into the corporate code base. I have listed all these cool tools and libraries on a spreadsheet and I'm going to run down the type of license each one is under. So far, amongst my cool libraries, I see MIT , BSD , Apache License Version 2.0 . However, there could be more in the future. How can I find out (or better yet, can you just list) the popular licenses which are NOT compatible with internal corporate use (not to be distributed outside the company)? And for an extra thank you, can you say or point to an explanation. I'm not a lawyer and reading the fine print of these licenses gives me a headache. I want to be prepared to explain why using the allowable licenses is OK.
In general, the legalities in licensing that can occur as a result of the use of open-source software boil down to two factors: Commercial use, and Distribution. Distribution means "conferring" software to a third party outside the organization. Since you say you only use the software internally, legal mechanisms like "copyleft" (the term used for the viral portion of the GPL license) probably don't apply to your organization. Commercial use (or other arbitrary restrictions) are a different matter. Simply read the license carefully, and determine if any of those restrictions apply to your organization. In particular, permissive licenses such as Apache, MIT and BSD have few, if any, restrictive conditions; these licenses are ideal for "internal use." It sounds like your company is reluctant to use open-source software. Many companies believe that they must completely own their software and other intellectual property, and so they have policies that state that their own developers must write every line of code. Clarifying the meaning of open-source licenses will not necessarily change their minds.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277594", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/79695/" ] }
277,620
Why do they say that XML provides type safety and how is it expressed in the XML itself? How is it different from JSON (for example) which (as I understand) is not type safe?
Because of the XML Schema Definition (XSD). With XML, you can have an additional file which describes the schema. It indicates, for example, that the element /a/b is an array and contains from 1 to 10 elements, or that the element /a/c is an integer. You can find an example of an XSD here . Validation of a given XML file through an XSD is supported by many languages . For example, a .NET application may request an XML file from an untrusted source and check that it matches the XSD ; then, it can save it to a Microsoft SQL Server database, which can in turn contain an XSD and do the check again (to ensure that any client which has access to the database complies). XSD is not the only language. If you've done web development, you have certainly heard about Document Type Definition (DTD), a markup language which defines the structure of XML and is used especially in validation of HTML-related content. While it cannot do all the things XSD can, such as ensure that an element or an attribute contains an integer number, it can still perform a bunch of structure checks. RELAX NG has the benefit of being relatively simple compared to other languages and can be written in a more compact form than XML. Schematron is another “rule-based validation language for making assertions about the presence or absence of patterns in XML trees” ( Wikipedia ) and presents a slightly different approach, based on XPath assertions. Similar initiatives for JSON are not that popular (especially, I believe, in the Microsoft-centric corporate world). One of the reasons is that JSON is intended for situations where the data structure is rather basic (i.e. can be expressed as a tree, without the need for attributes, for instance) and doesn't necessarily need to be validated. An excellent example is a REST API used by a dynamically-typed language: the client is very easy and fast to implement, the API is trusted not to change, and the client can easily deal with specific leaves where validation is necessary (for instance check that /something/percentage is an actual number and is in the 0..100 range).
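For instance, validating a document against an XSD usually takes only a few lines; here is a sketch using Python's lxml library (the file names are placeholders):

from lxml import etree

schema = etree.XMLSchema(etree.parse("order.xsd"))    # the contract
document = etree.parse("incoming-order.xml")          # the untrusted payload

print(schema.validate(document))    # True or False
schema.assertValid(document)        # or raise DocumentInvalid with the reason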
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277620", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/157760/" ] }
277,693
If every path through a program is tested, does that guarantee finding all bugs? If not, why not? How could you go through every possible combination of program flow and not find the problem if one exists? I hesitate to suggest that "all bugs" can be found, but maybe that is because path coverage isn't practical (as it is combinatorial) so it isn't ever experienced? Note: this article gives a quick summary of coverage types as I think about them.
If every path through a program is tested, does that guarantee finding all bugs? No If not, why not? How could you go through every possible combination of program flow and not find the problem if one exists? Because even if you test all possible paths , you still haven't tested them with all possible values or all possible combinations of values . For example (pseudocode): def Add(x as Int32, y as Int32) as Int32: return x + y Test.Assert(Add(2, 2) == 4) //100% test coverage Add(MAXINT, 5) //Throws an exception, despite 100% test coverage It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications. -- E. W. Dijkstra (Emphasis added. Written in 1988. It's been considerably more than 2 decades now.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277693", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
277,748
This question is not about the difference between SQL and NoSQL. I am looking for some rationale for something that really does not make sense to me at the moment (maybe because of my lack of understanding or appreciation). We have started a new project from scratch using MVC5, Entity framework 6 code first and SQL Server 2008. When the architect reviewed the database schema it was stated that all foreign keys and other such constraints should be removed as this is “business logic” and should be applied within the business layer of the application code. My opinion is that foreign keys form part of data/referential integrity and do not really mimic business logic. I see business logic as more the process and validation which controls what/when/how/why references are applied. I can kind of understand that unique constraints are arguably business processes, but for me this just complements the logic and forms part of the integrity. A second argument is the aim is to adopt a NoSQL approach to the data. I found this really unusual and unorthodox: considering the use of SQL-Server 2008, the need for reporting, the data not scaling to terabytes and the lack of consideration towards technologies such as Mongo, Raven, etc. Has anyone came across such a scenario before? Why would anyone adopt a NoSQL approach in an SQL Server designed for referential data and not want foreign keys?
When he reviewed the database schema he stated that all foreign keys and other such constraints should be removed as this is business logic and should be applied within the business layer. Then he's an idiot, and some excerpt from your codebase is likely to end up on The Daily WTF someday. You're absolutely right that his approach doesn't make sense, and frankly neither does his explanation. Try explaining to him that referential integrity constraints are not "business logic"; they're a correctness standard with its own built-in verification. Business logic is about what you do with the data; integrity is about ensuring that the data itself is not corrupt. And if that doesn't work... well, he's in charge. You can either go along with his plan and try to mitigate the damage somewhat, or start looking for a better place to work. (Or both.)
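As a small, hedged illustration of the point that referential integrity is enforced by the database rather than by business logic, here is a Python/sqlite3 sketch with an invented two-table schema (not the poster's actual model):

```python
# Invented schema, for illustration only: with a declared foreign key, the database
# itself rejects a dangling reference -- no business-layer code is involved.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires FKs to be switched on
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customer(id))""")

conn.execute("INSERT INTO customer (id) VALUES (1)")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")        # accepted

try:
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 999)")  # no such customer
except sqlite3.IntegrityError as err:
    print("rejected by the database, not by business logic:", err)
```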
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277748", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/172951/" ] }
277,778
In modern web development I'm coming across this pattern ever more often. It looks like this: <div class="table"> <div class="row"> <div class="cell"></div> <div class="cell"></div> <div class="cell"></div> </div> </div> And in CSS there is something like: .table { display: table; } .row { display: table-row; } .cell { display: table-cell; } * (Class name are illustrative only; in real life those are normal class names reflecting what the element is about) I even recently tried doing this myself because... you know, everyone's doing it. But I still don't get it. Why are we doing this? If you need a table, then just make a blasted <table> and be done with it. Yes, even if it's for layout. That's what tables are for - laying out stuff in tabular fashion. The best explanation that I have is that by now everyone has heard the mantra of "don't use tables for layout", so they follow it blindly. But they still need a table for layout (because nothing else has the expanding capabilities of a table), so they make a <div> (because it's not a table!) and then add CSS that makes it a table anyway. For all the world this looks to me like putting arbitrary unnecessary obstacles in your way and then doing extra work to circumvent them. The original argument for moving away from tables for layout was that it's hard to modify a tabular layout afterwards. But modifying a "faux-table" layout is just as hard, and for the same reasons. In fact, in practice modifying a layout is always hard, and it's almost never enough to just change the CSS, if you want to do something more serious than minor tweaks. You will need to understand and change HTML structure for serious design changes. And tables don't make the job any harder or easier than divs. In fact, the only way I see that tables could make a layout difficult to modify, is if you ab used them and created an ungodly mess. You can do that with divs too. So... in an attempt to change this from a rant into a coherent question: what am I missing? What are the actual benefits of using a "faux-table" over a real one? About the duplicate link: This isn't a suggestion to use another tag or something. This is a question about using a <table> vs display:table .
This is a common pattern for making responsive tables. Tabular data is tricky to display on mobiles since the page will either be zoomed in to read text, meaning tables go off the side of the page and the user has to scroll backwards and forwards to read the table, or the page will be zoomed out, usually meaning that the table is too small to be able to read. Responsive tables change layout on smaller screens - sometimes some columns are hidden or columns are amalgamated, e.g. name and email address might be separate on large screens, but collapse down into one cell on small screens so the information is readable without having to scroll. <div> s are used to create the tables instead of <table> tags for a couple of reasons. If <table> tags are used then you need to override the browser default styles and layout before adding your own code, so in this case <div> tags save on a lot of boilerplate CSS. Additionally, older versions of IE don't allow you to override default table styles, so using <div> s also smooths cross-browser development. There's a pretty good overview of responsive tables on CSS-Tricks . Edit: I should point out that I'm not advocating this pattern - it falls into the divitis trap and isn't semantic - but this is why you'll find tables made from div s.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/277778", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7279/" ] }
278,281
I have a client who would like me to deliver the source code with a developed application binary. They originally said nothing about source code, but they recently said they need it. The contract is not finalized. They agreed to the work, did not sign, and then came back with this clause. The issue is: I have a code base I've created over the years and used as a template for most of the applications I write. It is far larger than the scope of the project. I also intend to use it for a product, so I really don't wish to provide it for a relatively small project. I'm guessing this is not the first time that this has happened in this industry. What is the best way to circumvent this issue? I'm guessing things like shared libraries could help.
The first thing to keep in mind is that source code has value separate from the binaries. It is perfectly reasonable to either refuse to sign a contract that requires source code delivery, or to insist on extra payments for source code delivery. Contracts are two-way documents. Do not let the other part dictate what is required just because they are "big companies" and "do this all the time". First, decide what you are willing to deliver and how you want to be compensated. Then take their contract to a lawyer and work out what needs to change. Then, you negotiate. Don't do what a lot of young people do when they start contracting. Don't just sign because it seems like they have lots of experience and you don't. That's a good way to get ripped off. Look into why they want the source. They may want it so they have the option of using another developer later on. Or they may want it just because they are afraid you might get hit by a bus and suddenly they'll be left with binaries that they cannot improve. If it is this second case, look into a Software Code Escrow Service . These services hold the source code in case you go bankrupt or otherwise are unable to maintain the software. This may satisfy both your desire to keep your code proprietary to service other customers and their desire not to be left holding the bag with an unmaintainable set of binaries if something bad happens.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278281", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/173553/" ] }
278,476
I am a junior developer working on writing an update for software that receives data from a third-party solution, stores it in a database, and then conditions the data for use by another third-party solution. Our software runs as a Windows service. Looking at the code from a previous version, I see this: static Object _workerLocker = new object(); static int _runningWorkers = 0; int MaxSimultaneousThreads = 5; foreach(int SomeObject in ListOfObjects) { lock (_workerLocker) { while (_runningWorkers >= MaxSimultaneousThreads) { Monitor.Wait(_workerLocker); } } // check to see if the service has been stopped. If yes, then exit if (this.IsRunning() == false) { break; } lock (_workerLocker) { _runningWorkers++; } ThreadPool.QueueUserWorkItem(SomeMethod, SomeObject); } The logic seems clear: Wait for room in the thread pool, make sure the service hasn't been stopped, then increment the thread counter and queue the work. _runningWorkers is decremented inside SomeMethod() inside a lock statement that then calls Monitor.Pulse(_workerLocker) . My question is: Is there any benefit in grouping all the code inside a single lock , like this: static Object _workerLocker = new object(); static int _runningWorkers = 0; int MaxSimultaneousThreads = 5; foreach (int SomeObject in ListOfObjects) { // Is doing all the work inside a single lock better? lock (_workerLocker) { // wait for room in ThreadPool while (_runningWorkers >= MaxSimultaneousThreads) { Monitor.Wait(_workerLocker); } // check to see if the service has been stopped. if (this.IsRunning()) { ThreadPool.QueueUserWorkItem(SomeMethod, SomeObject); _runningWorkers++; } else { break; } } } It seems like, it may cause a little bit more waiting for other threads, but then it seems like locking repeatedly in a single logical block would also be somewhat time-consuming. However, I'm new to multi-threading, so I'm assuming that there are other concerns here that I'm unaware of. The only other places where _workerLocker gets locked is in SomeMethod() , and only for the purpose of decrementing _runningWorkers , and then outside the foreach to wait for the number of _runningWorkers to go to zero before logging and returning. Thanks for any help. EDIT 4/8/15 Thanks to @delnan for the recommendation to use a semaphore. The code becomes: static int MaxSimultaneousThreads = 5; static Semaphore WorkerSem = new Semaphore(MaxSimultaneousThreads, MaxSimultaneousThreads); foreach (int SomeObject in ListOfObjects) { // wait for an available thread WorkerSem.WaitOne(); // check if the service has stopped if (this.IsRunning()) { ThreadPool.QueueUserWorkItem(SomeMethod, SomeObject); } else { break; } } WorkerSem.Release() is called inside SomeMethod() .
This is not a question of performance. It is first and foremost a question of correctness. If you have two lock statements, you cannot guarantee atomicity for operations that are spread between them, or partially outside the lock statement. Tailored for the old version of your code, this means: Between the end of the while (_runningWorkers >= MaxSimultaneousThreads) and the _runningWorkers++ , anything at all may happen , because the code surrenders and re-acquires the lock in between. For example, thread A might acquire the lock for the first time, wait until some other thread exits, and then break out of the loop and the lock . It is then preempted, and thread B enters the picture, also waiting for room in the thread pool. Because said other thread quit, there is room so it doesn't wait very long at all. Both thread A and thread B now go on in some order, each incrementing _runningWorkers and starting their work. Now, there are no data races as far as I can see, but logically it's wrong, since there are now more than MaxSimultaneousThreads workers running. The check is (occasionally) ineffective because the task of taking a slot in the thread pool is not atomic. This should concern you more than small optimizations around lock granularity! (Note that conversely, locking too early or for too long can easily lead to deadlocks.) The second snippet fixes this problem, as far as I can see. A less invasive change to fix the problem might be putting the ++_runningWorkers right after the while loop, inside the first lock statement. Now, correctness aside, what about performance? This is hard to tell. Generally locking for a longer time ("coarsely") inhibits concurrency, but as you say, this needs to be balanced against the overhead from the additional synchronization of fine-grained locking. Generally the only solution is benchmarking and being aware that there are more options than "lock everything everywhere" and "lock only the bare minimum". There is a wealth of patterns and concurrency primitives and thread-safe data structures available. For example, this seems like the very application semaphores were invented for, so consider using one of those instead of this hand-rolled hand-locked counter.
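As a hedged Python analogue of the recommended semaphore approach (not the poster's C# code), acquiring a slot and deciding to proceed happen in a single atomic step, so the check-then-act race described above cannot occur:

```python
# Python analogue of the semaphore fix: taking a worker slot is one atomic acquire,
# so no more than MAX_SIMULTANEOUS_THREADS workers can ever run at once.
import threading

MAX_SIMULTANEOUS_THREADS = 5
worker_slots = threading.BoundedSemaphore(MAX_SIMULTANEOUS_THREADS)

def some_method(item):
    try:
        pass  # ... do the real work on `item` here ...
    finally:
        worker_slots.release()   # free the slot even if the work raises

def queue_work(items):
    for item in items:
        worker_slots.acquire()   # blocks while 5 workers are already running
        threading.Thread(target=some_method, args=(item,)).start()
```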
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278476", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/173789/" ] }
278,652
I've been writing a lot of ES6 code for io.js recently. There isn't much code in the wild to learn from, so I feel like I'm defining my own conventions as I go. My question is about when to use const vs let . I've been applying this rule: If possible, use const . Only use let if you know its value needs to change. (You can always go back and change a const to a let if it later turns out you need to change its value.) The main reason for this rule is it's easy to apply consistently. There are no grey areas. The thing is, when I apply this rule, in practice 95% of my declarations are const . And this looks weird to me. I'm only using let for things like i in a for loop, or occasionally for things like accumulated Fibonacci totals (which doesn't come up much in real life). I was surprised by this – it turns out 95% of the 'variables' in my ES5 code to date were for values that do not vary. But seeing const all over my code feels wrong somehow. So my question is: is it OK to be using const this much? Should I really be doing things like const foo = function () {...}; ? Or should I reserve const for those kind of situations where you're hard-coding a literal at the top of a module – the kind you do in full caps, like const MARGIN_WIDTH = 410; ?
My reply here is not JavaScript-specific. As a rule of thumb in any language that lets me do so in a semi-easy way I'd say always use const/final/readonly/whatever it is called in your language whenever possible. The reason is simple: it's much easier to reason about code when it is dead obvious what can change and what cannot change. And in addition to this, in many languages you can get tool support that tells you that you are doing something wrong when you accidentally assign to a variable that you've declared as const. Going back and changing a const to a let is dead simple. And going const by default makes you think twice before doing so. And this is in many cases a good thing. How many bugs have you seen that involved variables changing unexpectedly? I'd guess a lot. I know that the majority of bugs that I see involve unexpected state changes. You won't get rid of all of these bugs by liberally using const, but you will get rid of a lot of them! Also, many functional languages have immutable variables where all variables are const by default. Look at Erlang for example, or F#. Coding without assignment works perfectly in these languages and is one of the many reasons why people love functional programming. There is a lot to learn from these languages about managing state in order to become a better programmer. And it all starts with being extremely liberal with const! ;) It's just two more characters to write compared to let, so go ahead and const all the things!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278652", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30385/" ] }
278,654
When working on relatively small projects, but a software requirement specification document is necessary, is it a standard to include said document on version control systems or is it managed differently. Note I'm talking about standards
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278654", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/136188/" ] }
278,686
I work on small team with other remote developers on a Rails application. We are starting to modify our git workflow. We have thought about a branching structure like below: (dev) -> (qa) -> (stag) -> (master) But some of the developers thought it might be less confusing for new developers who might automatically push to production on master. They thought instead to have everyone work on master and create a separate branch for production. (master) -> (qa) -> (stag) -> (prod) I was taught you want to keep master deployable and not use it as development and from previous places where I've worked master is always meant to be deployable for production. What would be some of the disadvantages of using a branching structure where master is actively used for development and a separate prod branch is what you use for deployments?
There are neither any advantages nor disadvantages to this approach. The reason I say this is simple: to Git, it makes no difference if you develop from master or release from master. You don't even need to release branches; you could tag an arbitrary commit and release that, instead. The real trouble here is one of process and procedure. The more senior devs that are concerned that doing it in one way will confuse the newer devs need to be prepared to invest the time to explain what the release model is and why it's that way. So long as everyone understands that master is for development, and some other arbitrary branch is for releases, and the work to maintain this is done , then there shouldn't be any problems with this approach.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278686", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
278,731
After some research, I understand that: Formal method contribute to the reliability and robustness of a design. (ref : Wikipedia - Formal method ) Unit testing ensures us of the absence of errors introduced by a developer (ref : Wikipedia - Unit testing ) Like I see here , a formal proof is just a mathematical calculation, based on a mathematical expression (boolean expression). A unit test is checked from assertions, so mainly on one or a set of boolean expressions. Moreover, I read (very quickly) a research's paper of unit test generation from formal proof. Is unit testing a form of formal method (here I think to a system like formal proof)? If it's totally separate concept what are the differences? The aim of this question is not to know if after running unit testing there still bugs. His aim is to know if there is a link between unit testing and formal method.
The two are different things, and in fact much more different in practice than they would be in theory. A formal correctness proof proves something about the behaviour of an algorithm . For instance, it might investigate the invariants applying to the data as it is transformed by a sorting algorithm, and prove that if the algorithm terminates, every element is larger than the previous one. This kind of proof can be rigorous, i.e. if it's done correctly the algorithm cannot be wrong in this respect. In practice, algorithms must be embodied in computer code, and it's usually infeasible to prove that a given bit of code accurately represents the algorithm that you want. (That would require formally proving the behaviour of the compiler, the standard library, the virtual machine, etc.) It gets a little easier the more similar the programming language is to the mathematical notation you've used in the formal proof, but not much. (The code running in central systems of the Space Shuttle was said to be almost a perfect mathematical notation itself, but not very pleasant to program in.) It's much more cost-effective to actually run the code on judiciously chosen inputs and verify that it produces the expected outputs. This has the disadvantage that you can never be certain that there isn't an error in it - it might misbehave for those input/output pairs you haven't tested (and there are usually more pairs than you can test, or you wouldn't need a computer program to do the work in the first place), or worse, the code might be subtly non-deterministic or context-dependent in a way that your tests don't expose. But in practice, most errors that affect a computation can be exposed with intelligent checks, and if you keep a record of known errors and test cases verifying they cannot recur, the quality of code can generally be made good enough to be of business value. Certainly it's a better idea to run unit tests, integration tests and user acceptance tests and get something out the door that people will pay for than to conduct a lengthy, expensive formal proof that overlooks a subtle deviation between the written specification and the actual expectations of the customer. So in the real world, the two are almost completely distinct activities.
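A small Python sketch of "judiciously chosen inputs" (my own example, not from the answer): checking a sortedness invariant on a handful of samples demonstrates the expected behaviour for those inputs, but proves nothing about the inputs that were never tried.

```python
# Illustrative only: an invariant check run on chosen inputs. Passing these cases
# shows the expected behaviour for them -- it does not prove the absence of bugs.
def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def my_sort(xs):
    return sorted(xs)   # stand-in for the algorithm under test

for sample in ([], [1], [3, 1, 2], [5, 5, -1]):
    result = my_sort(sample)
    assert is_sorted(result) and result == sorted(sample)
```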
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278731", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124835/" ] }
278,756
I'm currently in a team where it lacks of its resources. The dev't team had only one websphere server for testing and we could not deploy our project in our local desktop because the websphere is only installed in one unit. Hence, every one is waiting after another in testing their fixes and feature. I tried to ask the manager if we could have our own copy of websphere installed in our local desktop but the manager is questioning us why we need it. I said some of the reason, but I guess the manager is still in doubt. The question now is, in a development team, is having your own local application server installed in your desktop part of a best practices? how and why? what are the benefits? or what are the cons and pros?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278756", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/174181/" ] }
278,790
So I have been coming across many comments/posts/etc regarding creating makefiles directly, and how it is a silly thing to do in 2015. I am aware of tools such as CMake, and I actually use CMake quite often. The thing is, CMake is just creating the Makefile for you and helping to remove the tedium of doing it yourself. Of course it adds a lot of other great features... but its still a Makefile in the end. So my question is, is the 'obsolete' talk regarding make referring to the entire Make utility, or just the idea of manually writing your own Makefiles? I do not use an IDE for C/C++ development at all (just emacs), so I have always written Makefiles. If Make is considered outdated, what should a C/C++ dev be using to build small, personal projects?
The big difference is that CMake is a cross-platform meta-build system. A single CMake project can produce the usual Unix/Linux makefile, a Visual Studio project for Windows, an XCode project for Mac, and almost any other non-meta build system you might want to use or support. I wouldn't say using make directly or even manually editing makefiles is "obsolete", but those are things you probably shouldn't do unless you are working on Unix/Linux stuff that won't need porting to Windows, much like how you should not be editing Visual Studio project files directly if you ever want to port them to not-Windows. If you have any interest in portability, it's worth learning a meta-build system like Scons or CMake.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278790", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/174213/" ] }
278,837
I've started working at a new organization and one of the patterns I've been seeing in the database is duplicating fields to make writing queries easier for the business analysts. We're using Django and its ORM. In one case, we keep a MedicalRecordNumber object with a unique string identifying a patient in a certain context. We have Registration objects which track patients and have associated MedicalRecordNumbers , but rather than using a foreign key relationship, they duplicate the string so they can avoid writing a join ( not for performance reasons). This pattern is common throughout the database. For me the importance of a data model being clean is just so I can think about it well. Needless complexity is a waste of my limited cognitive processing time. It's a systematic problem. Not being comfortable writing joins is a rectifiable skills issue. I don't necessarily want to advocate going back and changing the schema, but I'd love to be able to convincingly articulate the problems with this type of duplication.
Your operational database should be highly normalized, to reduce anomalies . Your analytic database (warehouse) should be highly denormalized, to ease analysis. If you don't have a separate analytic database, you should make some highly denormalized [materialized] views. If you tell your senior business analysts / managers to do lots of joins for a simple analysis, well, you might get fired. Agile Data Warehouse Design is a good book See my quick n' dirty data warehouse tips here
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278837", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51196/" ] }
278,914
A group of friends and I have been working on a project for the past little while, and we wanted to invent a nice OOP way of representing a scenario specific to our product. Basically, we're working on a Touhou-style bullet hell game , and we wanted to make a system where we could easily represent any possible behaviour of bullet we could dream up. So that's exactly what we did; we made a really elegant architecture that allowed us to section off the behaviour of a bullet into different components that could be attached to bullet instances at will, kind of like Unity's component system. It worked nicely, it was easily extensible, it was flexible and covered all our bases, but there was a slight problem. Our application also involves a heavy amount of procedural generation, namely we procedurally generate the behaviours of the bullets. Why is this a problem? Well, our OOP solution to representing bullet behaviour, while elegant, is a little complicated to work with without a human. Humans are smart enough to think of solutions to problems that are both logical and clever. Procedural generation algorithms aren't that smart yet, and we've found it difficult to implement an AI that uses our OOP architecture to its fullest potential. Admittedly, that is a flaw of the architecture is that it's not intuitive in all situations. So to remedy this problem, we basically shoved all the behaviours offered by different components into the bullet class, so that everything we could ever imagine is offered directly in each bullet instance as opposed to in other associated component instances. This makes our procedural generation algorithms a little easier to work with, but now our bullet class is a huge god object . It is easily the largest class in the program so far with more than five times as much code as anything else. It's a bit of a pain to maintain as well. Is it okay that one of our classes turned into a god object, just to make it easier to work with another problem? In general, is it okay to have code smells in your code if it admits an easier solution to a different problem?
When building real-world programs, there is often a trade-off between staying pragmatic on one hand, and staying 100% clean on the other. If staying clean prevents you from shipping your product in time, then you are better off with a little bit of duct tape to get the d***d thing out of the door. That said, your description sounds different - it sounds like you are not going to add a little bit of duct tape; it sounds like you are going to ruin your whole architecture because you did not look long and hard enough for a better solution. So instead of seeking someone here on PSE to give you a blessing on this, you might be better off asking a different question where you describe some details of the problems you have in-depth, and seeing if someone offers you an idea which avoids the god-class approach. Maybe the bullet class can be designed to be a facade to a bunch of other classes, so the bullet class becomes smaller. Maybe the strategy pattern can help, so the bullet can delegate different behaviours to different strategy objects. Maybe you just need an adapter between your bullet component and your procedural generator. But honestly, without knowing more details of your system, one can only guess.
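A rough sketch of the strategy idea mentioned above (all names are invented for illustration): the bullet stays small and delegates movement to interchangeable strategy objects, which a procedural generator can also pick from a flat list.

```python
# Invented names, sketch only: the bullet delegates its behaviour to strategy objects,
# so the bullet class itself stays thin and a generator can simply choose a strategy.
import math, random

class StraightMove:
    def step(self, bullet, t):
        bullet.x += bullet.speed

class SineMove:
    def step(self, bullet, t):
        bullet.x += bullet.speed
        bullet.y += math.sin(t)

class Bullet:
    def __init__(self, x, y, speed, movement):
        self.x, self.y, self.speed = x, y, speed
        self.movement = movement            # behaviour lives outside the class

    def update(self, t):
        self.movement.step(self, t)

# a procedural generator only has to pick from a flat list of strategies
bullet = Bullet(0, 0, 2, random.choice([StraightMove(), SineMove()]))
bullet.update(t=1.0)
```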
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278914", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/122079/" ] }
278,949
It seems there is a certain amount of agreement that exception messages should contain useful details . Why is it that many common exceptions from system components do not contain useful details? A few examples: .NET List index access ArgumentOutOfRangeException does not tell me the index value that was tried and was invalid, nor does it tell me the allowed range. Basically all exception messages from the MSVC C++ standard library are utterly useless (in the same vein as above). Oracle exceptions in .NET, telling you (paraphrased) "TABLE OR VIEW not found", but not which one. So, to me it seems that for the most part exception messages do not contain sufficient details to be useful. Are my expectations out of line? Am I using exceptions wrong that I even notice this? Or maybe my impression is wrong: a majority of exceptions do actually provide useful details?
Exceptions do not contain useful details because the concept of exceptions has not matured yet enough within the software engineering discipline, so many programmers do not understand them fully, and therefore they do not treat them properly. Yes, IndexOutOfRangeException should contain the precise index that was out of range, as well as the range that was valid at the time that it was thrown, and it is contemptible on behalf of the creators of the .NET runtime that it doesn't. Yes, Oracle's table or view not found exception should contain the name of the table or view that was not found, and again, the fact that it does not is contemptible on behalf of whoever is responsible for this. To a large part, the confusion stems from the misguided original idea that exceptions should contain human-readable messages, which in turn stems from a lack of understanding of what exceptions are all about, so it is a vicious cycle. Since people think that the exception should contain a human-readable message, they believe that whatever information is carried by the exception should also be formatted into the human-readable message, and then they are either bored to write all the human-readable message-building code, or they are afraid that doing so might be divulging an inadvisable amount of information to whatever prying eyes might see the message. (The security issues mentioned by other answers.) But the truth of the matter is that they should not be worrying about that because the exception should not contain a human-readable message. Exceptions are things that only programmers should ever see and/or deal with. If there is ever a need to present failure information to a user, that has to be done at a very high level, in a sophisticated manner, and in the user's language, which, statistically speaking, is unlikely to be English. So, for us programmers, the "message" of the exception is the class name of the exception , and whatever other information is pertinent to the exception should be copied into (final/readonly) member variables of the exception object. Preferably, every single conceivable little bit of it. This way, no message needs to (or should) be generated, and therefore no prying eyes can see it. To address the concern expressed by Thomas Owens in a comment below: Yes, of course, at some level, you will create a log message regarding the exception. But you already see the problem with what you are saying: on one hand, an exception log message without a stack trace is useless, but on the other hand, you don't want to let the user see the entire exception stack trace. Again, our problem here is that our perspective is skewed by traditional practices. Log files have traditionally been in plain text, which may have been fine while our discipline was in its infancy, but perhaps not any more: if there is a security concern, then the log file must be binary and/or encrypted. Whether binary or plain text, the log file should be thought of as a stream into which the application serializes debug information. Such a stream would be for the programmers' eyes only, and the task of generating debugging information for an exception should be as simple as serializing the exception into the debug log stream. This way, by looking at the log you get to see the exception class name, (which, as I have already stated, is for all practical purposes "the message",) each of the exception member variables which describe everything which is pertinent-and-practical-to-include-in-a-log, and the entire stack trace. 
Note how the formatting of a human-readable exception message is conspicuously missing from this process. P.S. A few more of my thoughts on this subject can be found in this answer: How to write a good exception message P.P.S. It appears that a lot of people were being ticked off by my suggestion about binary log files, so I amended the answer once again to make it even more clear that what I am suggesting here is not that the log file should be binary, but that the log file may be binary, if need be.
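As a hedged Python sketch of the idea above (the class and field names are my own invention): the exception carries the pertinent values as fields, so callers and log serializers never have to parse a prose message.

```python
# Invented example: the exception's class name is effectively "the message", and the
# pertinent values travel as structured fields instead of formatted prose.
class IndexOutOfRange(Exception):
    def __init__(self, index, lower, upper):
        super().__init__(f"index {index} not in [{lower}, {upper})")  # optional plain-text fallback
        self.index, self.lower, self.upper = index, lower, upper

def pick(items, i):
    if not 0 <= i < len(items):
        raise IndexOutOfRange(i, 0, len(items))
    return items[i]

try:
    pick(["a", "b"], 7)
except IndexOutOfRange as err:
    print(err.index, err.lower, err.upper)   # structured data, no message parsing
```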
{ "source": [ "https://softwareengineering.stackexchange.com/questions/278949", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6559/" ] }
279,004
This problem is mainly focusing on the algorithm, maybe something abstract and more academic. The example is offering a thought, I wanna a generic way, so example is only used as to make us more clearly about your thoughts. Generally speaking, a loop can be converted to a recursive. e.g: for(int i=1;i<=100;++i){sum+=i;} And its related recursive is: int GetTotal(int number) { if (number==1) return 1; //The end number return number+GetTotal(number-1); //The inner recursive } And finally to simplify this, a tail-recursive is needed: int GetTotal (int number, int sum) { if(number==1) return sum; return GetTotal(number-1,sum+number); } However, most cases aren't so easy to answer and analyze. What I wanna know is: 1) Can we get a "general common way" to convert a loop (for/while……) to a recursive? And what kinds of things should we pay attention to while doing conversion? It would be better to write detailed info with some samples and your persudo theories as well as the conversion process. 2) "Recursive" has two forms: Linely recursive and Tail-Recursive. So which is better to convert? What "rule" should we master? 3) Sometimes we need to keep the "history" of recursive, this is easily be done in a loop statement: e.g: List<string> history = new List<string>(); int sum=0; for (int i=1;i<=100;++i) { if(i==1) history.Add(i.ToString()+"'s result is:1."); else { StringBuilder sub = new StringBuilder(); for(int j=1;j<=i;++j) { if(j==i) sbu.Append(j.ToString()); else { sub.Append(j.ToString()+"+"); } } sum +=i; sbu.Append("'s result is:"+sum+Environment.NewLine); } } The result below is: 1's result is 1. 1+2's result is 3. 1+2+3's result is 6 ………… However I think it's hard to keep the history in a recursive, because a recursive-based algorithm focuses on to getting the last result and do a call-back return. So all of these are done through the stack maintained by the programming language assigning memory in the form of stack automatically. And how we can "manually" take each of the "stack values" out and return multiple values through a recursive algorithm? And what about "from a recursive algorithm to a loop"? Can they be converted to each other (I think it should be done theoretically, but I want more accurate things to prove my thoughts) .
Actually you should break the function down first: A loop has a few parts: the header, and processing before the loop. May declare some new variables the condition, when to stop the loop. the actual loop body. It changes some of the header's variables and/or the parameters passed in. the tail; what happens after the loop and return result. Or to write it out: foo_iterative(params){ header while(condition){ loop_body } return tail } Using these blocks to make a recursive call is pretty straightforward: foo_recursive(params){ header return foo_recursion(params, header_vars) } foo_recursion(params, header_vars){ if(!condition){ return tail } loop_body return foo_recursion(params, modified_header_vars) } Et voilà; a tail recursive version of any loop. break s and continue s in the loop body will still have to be replaced with return tail and return foo_recursion(params, modified_header_vars) as needed but that is simple enough. Going the other way is more complicated; in part because there can be multiple recursive calls. This means that each time we pop a stack frame there can be multiple places where we need to continue. Also there may be variables that we need to save across the recursive call and the original parameters of the call. We can use a switch to work around that: bar_recurse(params){ if(baseCase){ finalize return } body1 bar_recurse(mod_params) body2 bar_recurse(mod_params) body3 } bar_iterative(params){ stack.push({init, params}) while(!stack.empty){ stackFrame = stack.pop() switch(stackFrame.resumPoint){ case init: if(baseCase){ finalize break; } body1 stack.push({resum1, params, variables}) stack.push({init, modified_params}) break; case resum1: body2 stack.push({resum2, params, variables}) stack.push({init, modified_params}) break; case resum2: body3 break; } } }
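A concrete Python instance of the loop-to-tail-recursion template above (my own example): the header variables become extra parameters, and the negated loop condition becomes the base case.

```python
# Concrete instance of the template: `i` and `total` are the header variables; they
# become parameters of the recursive function, and `i > n` is the base case.
def sum_to_iterative(n):
    total = 0                      # header
    i = 1
    while i <= n:                  # condition
        total += i                 # loop body
        i += 1
    return total                   # tail

def sum_to_recursive(n, i=1, total=0):
    if i > n:                      # negated loop condition -> base case
        return total               # tail
    return sum_to_recursive(n, i + 1, total + i)   # modified header vars

assert sum_to_iterative(100) == sum_to_recursive(100) == 5050
```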
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279004", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/106104/" ] }
279,117
as far as I understand it, most people seem to agree that private methods should not be tested directly, but rather through whatever public methods call them. I can see their point, but I have some problems with this when I try to follow the "Three Laws of TDD", and use the "Red - green - refactor" cycle. I think it's best explained by an example: Right now, I need a program that can read a file (containing tab-separated data) and filter out all columns that contain non-numerical data. I guess there's probably some simple tools available already to do this, but I decided to implement it from scratch myself, mostly because I figured it could be a nice and clean project for me to get some practice with TDD. So, first, I "put the red hat on", that is, I need a test that fails. I figured, I'll need a method that finds all the non-numerical fields in a line. So I write a simple test, of course it fails to compile immediately, so I start writing the function itself, and after a couple of cycles back and forth (red/green) I have a working function and a complete test. Next, I continue with a function, "gatherNonNumericColumns" that reads the file, one line at a time, and calls my "findNonNumericFields"-function on each line to gather up all the columns that eventually must be removed. A couple of red-green-cycles, and I'm done, having again, a working function and a complete test. Now, I figure I should refactor. Since my method "findNonNumericFields" was designed only because I figured I would need it when implementing "gatherNonNumericColumns", it seems to me that it would be reasonable to let "findNonNumericFields" become private. However, that would break my first tests, since they would no longer have access to the method they were testing. So, I end up with a private methods, and a suite of test that test it. Since so many people advice that private methods should not be tested, it feels like I've painted myself into a corner here. But where exactly did I fail? I gather I could have started out at a higher level, writing a test that tests what will eventually become my public method (that is, findAndFilterOutAllNonNumericalColumns), but that feels somewhat counter to the whole point of TDD (at least according to Uncle Bob): That you should switch constantly between writing tests and production code, and that at any point in time, all your tests worked within the last minute or so. Because if I start out by writing a test for a public method, there will be several minutes (or hours, or even days in very complex cases) before I get all the details in the private methods to work so that the test testing the public method passes. So, what to do? Is TDD (with the rapid red-green-refactor cycle) simply not compatible with private methods? Or is there a fault in my design?
The fact that your data-gathering methods are complex enough to merit tests and separate enough from your primary goal to be methods of their own rather than part of some loop points to the solution: make these methods not private, but members of some other class that provides gathering/filtering/tabulating functionality. Then you write tests for the dumb data-munging aspects of the helper class (e.g. "distinguishing numbers from characters") in one place, and tests for your primary goal (e.g. "getting the sales figures") in another place, and you don't have to repeat basic filtering tests in the tests for your normal business logic. Quite generally, if your class that does one thing contains extensive code for doing another thing that is required for, but separate from, its primary purpose, that code should live in another class and be called via public methods. It shouldn't be hidden in private corners of a class that only accidentally contains that code. This improves testability and understandability at the same time.
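A hedged Python sketch of the suggested refactoring (all names invented): the field-classification logic moves into a small collaborator with a public, directly testable method, and the column-gathering code uses it through composition.

```python
# Invented names, sketch only: the filtering logic becomes a tiny collaborator with a
# public method that gets its own tests, instead of a private helper buried in the
# file-processing class. The numeric check is deliberately crude for brevity.
class NumericFieldChecker:
    def non_numeric_fields(self, fields):
        return [i for i, f in enumerate(fields)
                if not f.replace(".", "", 1).isdigit()]

class ColumnFilter:
    def __init__(self, checker=None):
        self.checker = checker or NumericFieldChecker()

    def columns_to_drop(self, lines):
        drop = set()
        for line in lines:
            drop.update(self.checker.non_numeric_fields(line.split("\t")))
        return drop

# each piece gets its own focused test
assert NumericFieldChecker().non_numeric_fields(["1", "x", "2.5"]) == [1]
assert ColumnFilter().columns_to_drop(["1\ta\t2", "3\tb\t4"]) == {1}
```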
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279117", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/174643/" ] }
279,190
A colleague of mine is heavily pushing the BEM ( Block Element Modifier ) method for the CSS in a project he's helming, and I just cannot comprehend what makes it better than the LESS CSS we've been writing for years. He claims "higher performance", but I can't imagine how performance could even matter for the kinds of web apps we write in this code shop. We're not making Twitter or Facebook, here. I'd be very surprised if our highest use apps have more than 10000 hits a month , and most are well under 1000. He claims "readability" and "re-use", but LESS already does that much better , in my opinion. And I feel that BEM mangles the markup so badly with dozens of extra characters dedicated to these super-long class names that it radically reduces readability. He claims "nested CSS is an anti-pattern." What does "anti-pattern" even mean , and why is nested CSS bad? He claims that "everyone is moving toward using BEM, so we should too." My counter is "If everyone was jumping off a bridge, would you follow?" But he's still totally adamant about this. Could someone please explain in detail what makes BEM better than LESS? My colleague is completely failing to convince me, but I don't know if I have a choice but to follow. I'd really rather be able to appreciate BEM, than to grudgingly accept it.
A colleague of mine is heavily pushing the BEM (Block Element Modifier) method for the CSS in a project he's helming, and I just cannot comprehend what makes it better than the LESS CSS we've been writing for years. BEM is a CSS methodology. Others include OOCSS and SMACSS. These methodologies are used to write modular, extensible, reusable code that performs well at scale. LESS is a CSS preprocessor. Others include Sass and Stylus. These preprocessors are used to convert their respective sources into expanded CSS. BEM and LESS are not comparable as being "better" than one another, they are tools that serve different purposes. You wouldn't say that a screwdriver is better than a hammer, except when considering utility to solve a specific problem. He claims "higher performance"... Performance would need to be measured between a classical CSS style of: .widget .header and BEM style of: .widget__header but generally speaking, CSS selector performance is not a bottleneck, and does not need to be optimized. BEM "performance" is usually with regard to a developer's performance writing code. If the BEM methodology is used consistently and correctly, it is easy for groups of developers to simultaneously author distinct modules without style collisions. He claims "readability" and "re-use"... I don't know that I would tell a new developer that BEM is more readable. I can say that it provides some well-defined guidelines as to the meaning and structure of classes. Seeing a class like .foo--bar__baz Tells me that there is a foo block that is in the bar state, and contains a baz element. I would absolutely say that BEM is more reusable than a classical model. If two developers create blocks ( foo and bar ), and both of those blocks have headings, they can safely reuse their blocks in different contexts without worry of a naming collision. That's because, in a classical context, .foo .heading and .bar .heading would conflict, and introduce a specificity conflict that would need to be resolved, possibly on a case-by-case basis. In a BEM site, the classes would be .foo__heading and .bar__heading , which would never conflict. He claims "nested CSS is an anti-pattern." What does "anti-pattern" even mean, and why is nested CSS bad? An "anti-pattern" is a coding pattern that's easier for inexperienced developers to learn and use than a more appropriate alternative. As far as why nested CSS is bad: Nesting increases a selector's specificity. The higher the specificity, the more effort it takes to override. Inexperienced developers often worry that their CSS might affect multiple pages, so they use selectors like: #something .lorem .ipsum .dolor ul.sit li.amet a.more When an experienced developer would worry that their CSS might not affect multiple pages, so they use selectors like: .more He claims that "everyone is moving toward using BEM, so we should too."... That is a bandwagon fallacy , so ignore that as a bad argument. Don't fall trap to the fallacy fallacy , because a bad argument in support of BEM isn't a reason to believe that BEM can't be good. Could someone please explain in detail what makes BEM better than LESS? I covered this before, BEM & LESS are not comparable. Apples and oranges, etc. My colleague is completely failing to convince me, but I don't know if I have a choice but to follow. I recommend taking a look at OOCSS, SMACSS, and BEM, and weighing the pros and cons of each methodology... as a team . 
I use BEM because I like its strictness in format, and don't mind the ugly selectors, but I can't tell you what's right for you or for your team. Don't let one outspoken individual run the show. If you're not comfortable with BEM, be aware that there are other alternatives that might be easier for your team to support. You may need to defend your position with your coworker, but long-term it will likely have a positive effect on the outcome of your projects. I'd really rather be able to appreciate BEM, than to grudgingly accept it. I wrote an answer on StackOverflow that describes how BEM works . The important thing to understand is that your ideal selectors should have a specificity of 0-0-1-0 so that they are easy to override and extend. Choosing to use BEM doesn't mean you need to give up LESS! You can still use variables, you can still use @imports, and you can certainly continue to use nesting. The difference is you want the rendered output to become a single class, rather than a descendant chain. Where you might have had .widget { .heading { ... } } With BEM you can use: .widget { &__heading { ... } } Additionally, because BEM revolves around individual blocks, you can easily separate code into separate files. widget.less would contain styles for the .widget block, while component.less would contain styles for the .component block. This makes it much easier to find the source for any particular class, although you may still wish to be using source maps.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279190", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/174709/" ] }
279,316
I'm starting to learn Haskell . I'm very new to it, and I am just reading through a couple of the online books to get my head around its basic constructs. One of the 'memes' that people familiar with it have often talked about, is the whole "if it compiles, it will work*" thing - which I think is related to the strength of the type system. I'm trying to understand why exactly Haskell is better than other statically typed languages in this regard. Put another way, I assume in Java, you could do something heinous like bury ArrayList<String>() to contain something that really should be ArrayList<Animal>() . The heinous thing here is is that your string contains elephant, giraffe , etc, and if someone puts in Mercedes - your compiler won't help you. If I did do ArrayList<Animal>() then, at some later point in time, if I decide my program isn't really about animals, it's about vehicles, then I can change, say, a function that produces ArrayList<Animal> to produce ArrayList<Vehicle> and my IDE should tell me everywhere there is a compilation break. My assumption is that this is what people mean by a strong type system, but it's not obvious to me why Haskell's is better. Put another way, you can write good or bad Java, I assume you can do the same in Haskell (i.e stuff things into strings/ints that should really be first-class data types). I suspect I'm missing something important/basic. I would very happy to be shown the error of my ways!
Here's an unordered list of type system features available in Haskell and either unavailable or less nice in Java (to my knowledge, which is admittedly weak w.r.t. Java).

Safety. Haskell's types have pretty good "type safety" properties. This is pretty specific, but it essentially means that values at some type cannot wantonly transform into another type. This is sometimes at odds with mutability (see OCaml's value restriction).

Algebraic Data Types. Types in Haskell have essentially the same structure as high school mathematics. This is outrageously simple and consistent, yet, as it turns out, as powerful as you could possibly want. It's simply a great foundation for a type system.

Datatype-generic programming. This is not the same as generic types (see generalization). Instead, due to the simplicity of the type structure as noted before, it's relatively easy to write code which operates generically over that structure. Later I talk about how something like Eq (equality) might be auto-derived for a user-defined type by a Haskell compiler. Essentially the way that it does this is walk over the common, simple structure underlying any user-defined type and match it up between values—a very natural form of structural equality.

Mutually recursive types. This is just an essential component of writing non-trivial types.

Nested types. This allows you to define recursive types over variables which recurse at different types. For instance, one type of balanced trees is data Bt a = Here a | There (Bt (a, a)). Think carefully about the valid values of Bt a and notice how that type works. It's tricky!

Generalization. This is almost too silly to not have in a type system (ahem, looking at you, Go). It's important to have notions of type variables and the ability to talk about code which is independent of the choice of that variable. Hindley Milner is a type system which is derived from System F. Haskell's type system is an elaboration of HM typing and System F is essentially the heart of generalization. What I mean to say is that Haskell has a very good generalization story.

Abstract types. Haskell's story here is not great but also not non-existent. It's possible to write types which have a public interface but a private implementation. This allows us to both admit changes to the implementation code at a later time and, importantly since it's the basis of all operation in Haskell, write "magic" types which have well-defined interfaces such as IO. Java probably actually has a nicer abstract type story, to be honest, but I don't think that was genuinely true until interfaces became more popular.

Parametricity. Haskell values do not have any universal operations. Java violates this with things like reference equality and hashing and even more flagrantly with coercions. What this means is that you get free theorems about types which allow you to know the meaning of an operation or value to a remarkable degree entirely from its type—certain types are such that there can only be a very small number of inhabitants.

Higher-kinded types. These show up all the time when encoding trickier things. Functor/Applicative/Monad, Foldable/Traversable, the entire mtl effect typing system, generalized functor fixpoints. The list goes on and on. There are a lot of things which are best expressed at higher kinds and relatively few type systems even allow the user to talk about these things.

Type classes. If you think of type systems as logics—which is useful—then you often are demanded to prove things. In many cases this is essentially line noise: there may be only one right answer and it's a waste of time and effort for the programmer to state this. Typeclasses are a way for Haskell to generate the proofs for you. In more concrete terms, this lets you solve simple "type equation systems" like "At which type are we intending to (+) things together? Oh, Integer, ok! Let's inline the right code now!". In more complex systems you might be establishing more interesting constraints.

Constraint calculus. Constraints in Haskell—which are the mechanism for reaching into the typeclass prolog system—are structurally typed. This gives a very simple form of subtyping relationship which lets you assemble complex constraints from simpler ones. The entire mtl library is based on this idea.

Deriving. In order to drive the canonicity of the typeclass system it's necessary to write a lot of often trivial code to describe the constraints user-defined types must instantiate. Due to the very normal structure of Haskell types, it is often possible to ask the compiler to do this boilerplate for you.

Type class prolog. The Haskell type class solver—the system which is generating those "proofs" I referred to earlier—is essentially a crippled form of Prolog with nicer semantic properties. This means you can encode really hairy things in type prolog and expect them to be handled all at compile time. A good example might be solving for a proof that two heterogeneous lists are equivalent if you forget about the order—they're equivalent heterogeneous "sets".

Multi-parameter type classes and functional dependencies. These are just massively useful refinements to base typeclass prolog. If you know Prolog, you can imagine how much the expressive power increases when you can write predicates of more than one variable.

Pretty good inference. Languages based on Hindley Milner type systems have pretty good inference. HM itself has complete inference, which means that you never need to write a type variable. Haskell 98, the simplest form of Haskell, already throws that out in some very rare circumstances. Generally, modern Haskell has been an experiment in slowly reducing the space of complete inference while adding more power to HM and seeing when users complain. People very rarely complain—Haskell's inference is pretty good.

Very, very, very weak subtyping only. I mentioned earlier that the constraint system from typeclass prolog has a notion of structural subtyping. That is the only form of subtyping in Haskell. Subtyping is terrible for reasoning and inference. It makes each of those problems significantly harder (a system of inequalities instead of a system of equalities). It's also really easy to misunderstand (Is subclassing the same as subtyping? Of course not! But people very frequently confuse that and many languages aid in that confusion! How did we end up here? I suppose nobody ever examines the LSP.) Note that recently (early 2017) Steven Dolan published his thesis on MLsub, a variant of ML and Hindley-Milner type inference which has a very nice subtyping story (see also). This doesn't obviate what I've written above—most subtyping systems are broken and have bad inference—but does suggest that we just today may have discovered some promising ways to have complete inference and subtyping play together nicely. Now, to be totally clear, Java's notions of subtyping are in no way able to take advantage of Dolan's algorithms and systems. It requires a rethinking of what subtyping means.

Higher rank types. I talked about generalization earlier, but more than just mere generalization it's useful to be able to talk about types which have generalized variables within them. For instance, a mapping between higher order structures which is oblivious (see parametricity) to what those structures "contain" has a type like (forall a. f a -> g a). In straight HM you can write a function at this type, but with higher-rank types you demand such a function as an argument like so: mapFree :: (forall a . f a -> g a) -> Free f -> Free g. Notice that the a variable is bound only within the argument. This means that the definer of the function mapFree gets to decide what a is instantiated at when they use it, not the user of mapFree.

Existential types. While higher-rank types allow us to talk about universal quantification, existential types let us talk about existential quantification: the idea that there merely exists some unknown type satisfying some equations. This ends up being useful, and to go on for longer about it would take a long while.

Type families. Sometimes the typeclass mechanisms are inconvenient since we don't always think in Prolog. Type families let us write straight functional relationships between types.

Closed type families. Type families are by default open, which is annoying because it means that while you can extend them at any time you cannot "invert" them with any hope of success. This is because you cannot prove injectiveness, but with closed type families you can.

Kind indexed types and type promotion. I'm getting really exotic at this point, but these have practical use from time to time. If you'd like to write a type of handles which are either open or closed then you can do that very nicely. Notice in the following snippet that State is a very simple algebraic type which had its values promoted into the type-level as well. Then, subsequently, we can talk about type constructors like Handle as taking arguments at specific kinds like State. It's confusing to understand all the details, but also so very right.

    data State = Open | Closed

    data Handle :: State -> * -> * where
      OpenHandle   :: {- something -} -> Handle Open a
      ClosedHandle :: {- something -} -> Handle Closed a

Runtime type representations that work. Java is notorious for having type erasure and having that feature rain on some people's parades. Type erasure is the right way to go, however, as if you have a function getRepr :: a -> TypeRepr then you at the very least violate parametricity. What's worse is that if that's a user-generated function which is used to trigger unsafe coercions at runtime... then you've got a massive safety concern. Haskell's Typeable system allows the creation of a safe coerce :: (Typeable a, Typeable b) => a -> Maybe b. This system relies on Typeable being implemented in the compiler (and not userland) and also could not be given such nice semantics without Haskell's typeclass mechanism and the laws it is guaranteed to follow.

More than just these, however, the value of Haskell's type system also relates to how the types describe the language. Here are a few features of Haskell which drive value through the type system.

Purity. Haskell allows no side effects for a very, very, very wide definition of "side effect". This forces you to put more information into types since types govern inputs and outputs and without side effects everything must be accounted for in the inputs and outputs.

IO. Subsequently, Haskell needed a way to talk about side effects—since any real program must include some—so a combination of typeclasses, higher kinded types, and abstract types gave rise to the notion of using a particular, super-special type called IO a to represent side-effecting computations which result in values of type a. This is the foundation of a very nice effect system embedded inside of a pure language.

Lack of null. Everyone knows that null is the billion dollar mistake of modern programming languages. Algebraic types, in particular the ability to just append a "does not exist" state onto types you have by transforming a type A into the type Maybe A, completely mitigate the problem of null.

Polymorphic recursion. This lets you define recursive functions which generalize type variables despite using them at different types in each recursive call in their own generalization. This is difficult to talk about, but especially useful for talking about nested types. Look back to the Bt a type from before and try to write a function to compute its size: size :: Bt a -> Int. It'll look a bit like size (Here a) = 1 and size (There bt) = 2 * size bt. Operationally that isn't too complex, but notice that the recursive call to size in the last equation occurs at a different type, yet the overall definition has a nice generalized type size :: Bt a -> Int. Note that this is a feature which breaks total inference, but if you provide a type signature then Haskell will allow it.

I could keep going, but this list ought to get you started-and-then-some.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279316", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80091/" ] }
279,609
I'm asking this question because I am confused about one aspect regarding big O notation. I am using the book Data Structures and Abstractions with Java by Frank Carrano. In the chapter on the "Efficiency of Algorithms" he shows the following algorithm:

    int sum = 0, i = 1, j = 1

    for (i = 1 to n)
    {
        for (j = 1 to i)
            sum = sum + 1
    }

He initially describes this algorithm as having a growth rate of (n² + n) / 2. Which looking at it seems intuitive. However, it is then stated that (n² + n) / 2 behaves like n² when n is large. In the same paragraph he states (n² + n) / 2 also behaves much like n² / 2. He uses this to classify the above algorithm as O(n²). I get that (n² + n) / 2 is similar to n² / 2 because percentage wise, n makes little difference. What I do not get is why (n² + n) / 2 and n² are similar, when n is large. For example, if n = 1,000,000:

    (n² + n) / 2 = 500000500000 (5.000005e+11)
    (n²) / 2     = 500000000000 (5e+11)
    (n²)         = 1000000000000 (1e+12)

That last one is not similar at all. In fact, quite obviously, it's twice as much as the middle one. So how can Frank Carrano say they are similar? Also, how is the algorithm classified as O(n²)? Looking at that inner loop I would say it was n² + n / 2.
When calculating the Big-O complexity of an algorithm, the thing being shown is the factor that gives the largest contribution to the increase in execution time if the number of elements that you run the algorithm over increases. If you have an algorithm with a complexity of (n^2 + n)/2 and you double the number of elements, then the constant 2 does not affect the increase in the execution time, the term n causes a doubling in the execution time and the term n^2 causes a four-fold increase in execution time. As the n^2 term has the largest contribution, the Big-O complexity is O(n^2) .
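To spell the arithmetic out (this part is my own addition, not a quote from Carrano): the inner loop body runs once for every pair (i, j) with 1 <= j <= i <= n, so the exact count is

    \sum_{i=1}^{n} i = \frac{n(n+1)}{2} = \frac{n^2 + n}{2}

"Behaves like" refers to the growth rate, not to the absolute values. Compare ratios as n grows:

    \frac{(n^2 + n)/2}{n^2/2} = 1 + \frac{1}{n} \to 1
    \qquad
    \frac{(n^2 + n)/2}{n^2} = \frac{1}{2} + \frac{1}{2n} \to \frac{1}{2}

The second ratio settles at the constant 1/2 rather than 1, which is exactly the factor of two computed in the question for n = 1,000,000. Big-O deliberately ignores such constant factors: since (n^2 + n)/2 <= n^2 for every n >= 1, the running time is bounded by a constant multiple of n^2, and that bound is all O(n^2) claims.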
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279609", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/118638/" ] }
279,690
In Log4J, Slf4J and a couple of other logging frameworks in Java, you have two "developer" levels for logging: DEBUG and TRACE. I understand what DEBUG does, because the explanation is clear: The DEBUG Level designates fine-grained informational events that are most useful to debug an application. But the TRACE level is not very specific about its use case: The TRACE Level designates finer-grained informational events than the DEBUG (Source: the Log4J JavaDoc) This does not tell me how or when to use TRACE. Interestingly, this is not a severity level defined in the syslog standard. Googling for the difference between TRACE and DEBUG only seems to return "use DEBUG, oh, and there is TRACE too". I couldn't find a specific use case for the TRACE level. The best I could find was this old wiki page debating the merit of the existence of the level. This, as an architect, raises a lot of flags and questions in my head. If a young developer asked me to add TRACE to my architecture, I would bombard him with questions: What are some examples of information that should be logged with TRACE and not with DEBUG? What specific problem do I solve by logging that information? In those examples, what are the properties of the logged information that clearly discriminate between logging at the TRACE level rather than the DEBUG level? Why must that information go through the log infrastructure? What are the benefits of persisting that information in a log journal rather than just using System.out.println? Why is it better to use a log for this rather than a debugger? What would be a canonical example of logging at the TRACE level? What are the specific gains that have been made by logging at the TRACE level instead of DEBUG in the example? Why are those gains important? In reverse: What problems did I avoid by logging it at TRACE instead of DEBUG? How else could I solve those problems? Should TRACE-level log statements be left in the production code? Why? But given that it is present in most major frameworks, I am guessing it is useful for something? So... what is TRACE for, and what distinguishes it from DEBUG?
What are examples of information that should be logged with TRACE and not with DEBUG? If I have an algorithm that goes through a bunch of steps, trace level will print info about each of those steps at the finest level. Things like the literal inputs and outputs of every step. In general, trace will include all debug (just like debug includes all warnings and errors). What specific problem do I solve by logging that information? You need to debug something that outputs way too much data to log outside of a specific build when you're targeting that particular thing and do not care about errors or other logging info (since the volume of trace info will obscure them). In some loggers, you will turn a certain module up to trace level only. In those examples, what are the properties of the logged information that clearly discriminate between logging at the TRACE level rather than the DEBUG level? In general, trace level logging cannot be on for sustained periods because it degrades the performance of the application greatly, and/or creates an abundance of log data that is unsustainable due to disk/bandwidth constraints. Debug level logging can usually be on for a longer period without making the app unusable. Why must that information go through the log infrastructure? It doesn't have to. Some frameworks have a separate tracelogger. Usually it ends up in the logs since traceloggers and normal loggers have similar needs with regards to writing to disk/network, error handling, log rotation, etc. Why is it better to use log for this rather than a debugger? Because the debugger might not be able to attach to the problem machine. You might not know enough to know where to even set breakpoints, or to step through code. You might not be able to reliably reproduce the error in a debugger, so use the logs to catch it "if it happens". But they're just names. Like any other label, they're just names people put on things, and will usually mean different things to different people.
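To make that concrete in code, here is a minimal SLF4J-style sketch; the class, method, and messages are invented for illustration:

    import java.util.List;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderImporter {
        private static final Logger log = LoggerFactory.getLogger(OrderImporter.class);

        public void importAll(List<String> rawRecords) {
            // DEBUG: coarse-grained, one line per batch -- cheap enough to leave on for a while
            log.debug("Importing {} records", rawRecords.size());
            for (String record : rawRecords) {
                // TRACE: finest-grained, one line per element including the full payload.
                // Far too chatty for sustained use, so guard it to avoid paying the
                // formatting cost when TRACE is switched off.
                if (log.isTraceEnabled()) {
                    log.trace("Raw record: {}", record);
                }
                // ... parse and store the record here ...
            }
            log.debug("Import finished");
        }
    }

With a typical configuration you would run at INFO or DEBUG and only flip this one logger up to TRACE while chasing a specific problem.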
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279690", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69624/" ] }
279,713
I've seen some applications that are basically application software that runs local to the system (so they don't have to communicate much over the network). These applications seem to depend on database servers in order to store their data. An example of such an application is Amarok (a popular music player on Linux). I don't know if they still do this, but I remember there was a time when installing Amarok meant that you had to install a MySQL server and have it running in the background all the time. What is the advantage of using a server for local storage compared to using a smaller embedded SQL solution like SQLite? I'm talking about application software in general, not necessarily Amarok (that was just an example). Are there any situations where using a database server makes sense compared to an embedded database?
SQLite offers a pretty good rundown of when to use it or not vs alternatives: https://www.sqlite.org/whentouse.html This summary line captures the SQLite use-case extremely well in my experience:

SQLite does not compete with client/server databases. SQLite competes with fopen().

The article expands at length on this point. It also has a section titled "Situations Where A Client/Server RDBMS May Work Better". In a nutshell, they are:

Client/Server Applications: multiple users over a network.
High-volume Websites: either write intensive or read intensive enough to require sharding.
Very large datasets: larger than can be reasonably stored on one disk.
High Concurrency: in particular concurrent writes.
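To illustrate the "competes with fopen()" point from the application side, here is a rough Java sketch; it assumes the xerial sqlite-jdbc driver is on the classpath, and the file name is made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LocalStore {
        public static void main(String[] args) throws Exception {
            // The URL just names a local file next to the application -- nothing to
            // install, start, or administer, unlike a MySQL/PostgreSQL server process.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:music.db");
                 Statement st = conn.createStatement()) {
                st.executeUpdate("CREATE TABLE IF NOT EXISTS genres (name TEXT PRIMARY KEY)");
                st.executeUpdate("INSERT OR IGNORE INTO genres VALUES ('comedy'), ('drama')");
                try (ResultSet rs = st.executeQuery("SELECT name FROM genres")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }

That is essentially the Amarok-style use case from the question: one process, one user, one local disk, so an embedded engine is the natural fit.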
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279713", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/124792/" ] }
279,858
It is my understanding that any general-purpose programming language can be used for server-side development of a website. Am I right in thinking that a server just needs some kind of interface such as CGI to make the server and the programming language work together? If so then why are some programming languages (such as php) more popular than others?
In the early days of the web, CGI was indeed the only (practical) way to have dynamic content (you could do named pipes of files -- and those were used in days before cgi, but that wasn't practical at all). CGI works by sticking a bunch of information in the environment of the process that is forked and then exec'ed (and possibly some in stdin) and then takes what comes out of stdout and spits that back to the requestor. This doesn't care one bit about what the implementation language is. Indeed, I wrote my early CGIs back in the day in C or C++. It was kind of painful. I later learned some perl in the early 90s and that was much less painful. This works, up to a point. The problem is scale. Each CGI request is a fork and exec of a process. Thousands of requests means thousands of processes. That really doesn't work well. The solution to this is to remove forking and execing by either moving it into a thread in the web server itself, or dispatching the request to another process that handles the request without needing to fork and exec. mod_perl is one such tool to do this (a plugin moving perl into apache). Php (late 90s) also did this by implementing the language as a plugin in the web server itself rather than something that was forked and exec'ed. This became quite popular as it was perl-like (which was the early dominant web programming language) and could outperform perl cgis. There is still quite a bit of momentum from this period of time in the mid-90s -- before the more enterprise-grade application servers started to take hold with more formalized languages behind them. If you dig around, you can find a lot of failed attempts in the late 90s to early 2000s too -- languages and frameworks that just didn't stick. This brings us to the application servers where internal threads are spawned (or other approaches -- this isn't the case for everything) to handle requests rather than entire new processes -- which can help with scale. As an external process this could be seen with FastCGI and then later became prevalent with other application servers. Note that with this the line between application server and web server got a bit blurry -- many application servers could double as web servers, though they weren't optimized for handling static file IO in the way that traditional web servers are. The generic application server has also paved the way to solutions where instead of a generic application server, you have the application itself either running an embedded web server or otherwise being the entire deployment. In such situations one doesn't deploy a web application on an application server - it just is running itself and handling requests. Again, the goal of this model is to avoid the heavy price of launching new instances of the application and instead handle the requests inside the application with much lighter weight threads or similar approaches. Here's the thing though -- all the solutions are deficient in some way, shape, or form. CGI, while easy, has serious problems with scale. Plugins in the web servers get bound into the web server itself (apache vs nginx vs IIS vs ...) and lose the common functionality of the language. Microsoft has its own parade of technologies it would like to promote. And if you know one language, wouldn't you rather keep programming in it than have different languages in different parts of the stack (javascript in the client and Node.js)? And so, you've got today. Some people work in a Java stack (with scala and clojure becoming not uncommon).
Others in a C# stack. Others in a JavaScript stack. There's quite a bit of php stacks out there. Lots of python. You can still find some perl stacks out there (and if you look at some low volume sites, you'll still find CGIs). With cloud computing, Google has also promoted Go as a viable server side web language. Each has its advantages, disadvantages, its frameworks and its servers. The relative popularity of these ebbs and flows as technologies around them change. They do different things well.
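To tie the history above back to the mechanics: a CGI program is nothing more than a process that reads the request from environment variables (and possibly stdin) and writes the response to stdout. A minimal sketch, written in Java purely for illustration (in the era described it would have been C or perl):

    public class HelloCgi {
        public static void main(String[] args) {
            // The web server forks/execs this program once per request and passes
            // the request details through the environment.
            String method = System.getenv("REQUEST_METHOD");
            String query = System.getenv("QUERY_STRING");

            // Everything written to stdout -- headers, a blank line, then the body --
            // is relayed back to the client as-is.
            System.out.print("Content-Type: text/plain\r\n\r\n");
            System.out.println("Method: " + method);
            System.out.println("Query:  " + (query == null ? "" : query));
        }
    }

The per-request fork/exec (plus, here, a JVM start) is exactly the scaling cost that mod_perl, FastCGI, and the later application servers were invented to avoid.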
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279858", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/175512/" ] }
279,898
I have a spec from a client for an implementation of a method in a module: // getGenres(): // Returns a promise. When it resolves, it returns an array. If given an array of genres, ['comedy', 'drama', 'action'] Here is a skeleton method with a promise: MovieLibrary.getGenres = function() { var promise = new Promise(function(resolve, reject) { /* missing implementation */ }); return promise; }; Can the promise be made to return the data found in the genres? Is there a better way to achieve the spec description?
It sounds like you aren't understanding how promises are used. You return a promise. Then, later when your code resolves the promise, it resolves it with a result and that result is passed to the .then() handler attached to the promise: MovieLibrary.getGenres = function() { var promise = new Promise(function(resolve, reject) { /* missing implementation */ resolve(result); }); return promise; }; MovieLibrary.getGenres().then(function(result) { // you can access the result from the promise here });
{ "source": [ "https://softwareengineering.stackexchange.com/questions/279898", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/164783/" ] }
280,097
When using a language that supports named and optional arguments, does the builder pattern no longer have a practical use? Builder: new Builder(requiredA, requiredB).setOptionalA("optional").Build(); Optional/named arguments: new Object(requiredA, requiredB, optionalA: "optional");
Builders are most useful when your object needs a lot of arguments/dependencies to be useful, or you want to allow many different ways of constructing the object. Off the top of my head, I can imagine someone might want to "build" objects in a 3D game like this: // Just ignore the fact that this hypothetical god class is coupled to everything ever new ObjectBuilder(x, y, z).importBlenderMesh("./meshes/foo") .syncWithOtherPlayers(serverIP) .compileShaders("./shaders/foo.vert", "./shaders/foo.frag") .makeDestructibleRigidBody(health, weight) ... I would argue this example is more readable with the builder methods I made up just now than it would be with optional parameters: new Object(x, y, z, meshType: MESH.BLENDER, meshPath: "./meshes/foo", serverToSyncWith: serverIP, vertexShader: "./shaders/foo.vert", physicsType: PHYSICS_ENGINE.RIGID_DESTRUCTIBLE, health: health, weight: weight) ... In particular, the information implied by the builder method names has to get replaced by yet more parameters, and it's much easier to forget about one parameter in a group of closely related parameters. In fact, the fragment shader is missing, but you wouldn't notice that unless you knew to look for it. Of course, if your object only takes one to five arguments to construct, there's no need to get the builder pattern involved, whether or not you have named/optional parameters.
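For completeness, the receiving end of such a fluent interface is straightforward; here is a minimal Java sketch with invented names (it is not modeled on any real engine):

    public class GameObjectBuilder {
        // required parameters go through the constructor
        private final double x, y, z;
        // optional parameters accumulate as the chain runs
        private String meshPath;
        private String vertexShader;
        private String fragmentShader;

        public GameObjectBuilder(double x, double y, double z) {
            this.x = x; this.y = y; this.z = z;
        }

        public GameObjectBuilder importBlenderMesh(String path) {
            this.meshPath = path;
            return this;                 // returning 'this' is what enables chaining
        }

        public GameObjectBuilder compileShaders(String vertex, String fragment) {
            this.vertexShader = vertex;
            this.fragmentShader = fragment;
            return this;
        }

        public GameObject build() {
            // one place to validate the whole combination before the object exists
            if (vertexShader != null && fragmentShader == null) {
                throw new IllegalStateException("vertex shader given without a fragment shader");
            }
            return new GameObject(x, y, z, meshPath, vertexShader, fragmentShader);
        }
    }

    class GameObject {
        GameObject(double x, double y, double z,
                   String meshPath, String vertexShader, String fragmentShader) {
            // store the fields, load resources, etc.
        }
    }

The build() step is also where the "missing fragment shader" mistake from the example above can be caught in one place, which is harder to arrange with a long optional-parameter list.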
{ "source": [ "https://softwareengineering.stackexchange.com/questions/280097", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/139113/" ] }
280,103
I have been developing in Groovy for a little while now and I'm wondering how often I should be using the dynamic type def? A co-worker of mine believes we should always use it, as it helps Groovy in some way I don't understand. Currently, when declaring method return types and arguments, I like to deliberately state which objects should be taken in and spit out (for code readability, and since I come from a Java background it makes sense to me). Example: String doSomething(String something){ //code } // vs def doSomething(def something){ //code } // vs def doSomething(something){ // code } So I guess my question is: is it just a preference of when to use def, or is there a real advantage to using it all the time? (I added the last example because I felt it fits in with the question as a viable option for Groovy)
As good programming (even scripting) practice, always consider specifying a definite (though not necessarily concrete) type for a variable. Use def only if there's no definite type applicable to the variable. Since the OP knows Java, it's no different from specifying a type of Object (though there seems to be a minor difference ). The answer to this question then won't be different from answering a question like: "why not always use the Object type in Java?" Being as definite about types as possible reduces the chances of bugs, and even serves as self-documentation. Whereas, if one is deliberately implementing a dynamic logic, then using def might make a lot of sense. That's in fact one of the biggest strengths of Groovy; the program can be as dynamically or statically typed as one needs it to be! Just don't let laziness be the reason to use def ;-) E.g. this method makes sense with a definite argument type and return type: // def val or Object val, opens up the possibility // of the caller sending a non-numeric value Number half(Number val) { val/2 } while this method makes sense with type def // I'd let them pass an argument of any type; // type `Object` makes sense too def getIdProperty(def val) { if(val?.hasProperty('id')) { // I don't know the type of this returned // value and I don't really need to care return val.id } else { throw new IllegalArgumentException("Argument doesn't have an 'id' property") } }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/280103", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/136318/" ] }
280,257
I can't understand the reasoning for making the claims/payload of a JWT publicly visible after base64 decoding it. Why? It seems like it'd be much more useful to have it encrypted with the secret. Can someone explain why, or in what situation, keeping this data public is useful?
You choose not to encrypt the payload for the same reasons that you choose not to encrypt anything else: the cost (however small it is) exceeds the benefit, and a lot of data simply doesn't need to be secured that way. What you mostly need protection against is people tampering with the data so that the wrong record gets updated, or someone's checking account gets money in it that it's not supposed to have. The JSON Web Token's signature accomplishes that, because changing any part of the header/payload/signature combination invalidates the packet. Note that you can still secure the packets at the Transport Layer by using SSL.
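Here is a rough sketch of the HS256 (HMAC-SHA256) flavour to show why signing is enough for tamper-protection even though the payload stays readable; it uses only the standard Java crypto and Base64 classes, and the claims and secret are made up:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class JwtSketch {
        public static void main(String[] args) throws Exception {
            String secret = "server-side-secret";        // never leaves the server
            String header  = base64Url("{\"alg\":\"HS256\",\"typ\":\"JWT\"}");
            String payload = base64Url("{\"sub\":\"42\",\"role\":\"user\"}");

            // Anyone can Base64-decode the header and payload -- they are readable by design.
            // What they cannot do is alter them and still produce a matching signature,
            // because the signature is an HMAC over "header.payload" keyed with the secret.
            String signature = sign(header + "." + payload, secret);
            System.out.println(header + "." + payload + "." + signature);

            // A tampered payload no longer matches the original signature.
            String tampered = base64Url("{\"sub\":\"42\",\"role\":\"admin\"}");
            System.out.println(sign(header + "." + tampered, secret).equals(signature)); // false
        }

        private static String base64Url(String json) {
            return Base64.getUrlEncoder().withoutPadding()
                         .encodeToString(json.getBytes(StandardCharsets.UTF_8));
        }

        private static String sign(String data, String secret) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                         .encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        }
    }

(In real verification code, compare signatures with a constant-time comparison such as MessageDigest.isEqual rather than String.equals.) If the claims themselves are confidential, that is what JWE, or simply TLS for data in transit, is for.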
{ "source": [ "https://softwareengineering.stackexchange.com/questions/280257", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/176022/" ] }
280,283
I am currently in a project where one of my developer colleagues constantly refactors stuff on every ticket he's doing. We are using agile methodologies. I know that refactoring is a good thing to do while solving some issues, and it keeps the codebase nice and clean, but my question is: what's the limit, how many files should one be touching at max? Can too much refactoring bite us in the long term? Sometimes it feels like he could be introducing bugs (although the system is heavily tested) and that it makes the review process quite cumbersome; reviewing changes in 60-70 files when the original change should have affected only about 10-20. On the project the tickets we are getting are quite small, usually no more than 10-20 files need to be touched, and this developer is averaging 50 or so. It feels wrong, and I haven't found a way to convince him of that. He says everything is tested, so all changes only improve the codebase. Finally, I am assuming that this refactoring could also be slowing him down, cause he just changes so much damn stuff all the time! :D Any thoughts? My personal view is that refactoring should be done, but it should usually be limited to the classes that are changing for the ticket anyway, and should a need for a bigger refactoring occur, it should be a ticket itself. In some cases, I find myself not even changing small stuff, cause "I've refactored enough for this ticket as it is." It feels like when I reach the refactoring limit I've got set in my mind, I just stop making any unnecessary changes to the codebase and only focus on wrapping up the ticket. Edit: I've come to realize from reading the responses and similar questions on Programmers Stack Exchange (that I just discovered, what a lovely place!) that my main problem is that what I described ends up in a single commit. I need to tell everyone that they are free to introduce refactoring, but to make sure they keep it in separate commits to ease the reviewing process. Edit: Another aspect that I have found to be problematic is merge conflicts. These refactorings end up costing time to other members working on the same pieces of code every now and then. Nothing substantial, but if everyone on the team did so much refactoring we would surely have many more merge conflicts. Not saying that this is the end of the world, just putting it on the table.
I wouldn't focus on the quantity, but rather on substance. The number of affected files during a group of refactoring operations is as irrelevant as the LOC you write per day. An extreme example is the renaming of methods to follow a convention. It may affect thousands of files, but is barely more important than a refactoring focused on two files which radically changes their design. By focusing on what the group of refactoring operations is improving rather than on how many files or LOC are affected, you also reduce the complexity of the problem you are currently encountering. During the code review, look at the overall impact of those operations. Do they improve the code base? How much? Consider refactoring operations individually. This will help your team determine precisely what is considered useful, and what should be avoided. For instance, imagine that during a code review, you notice that your colleague made the following refactoring operations:

1. Split methods which were doing too much,
2. Introduced the abstract factory pattern,
3. Replaced a few ifs by inheritance, simplifying the logic,
4. Replaced a few inheritance occurrences by ifs, reducing the number of classes and LOC and making the code easier to read,
5. Rewrote the method findSparse to be more readable,
6. Renamed a few methods to show that those are actually getters and setters,
7. Replaced a singleton and introduced Dependency Injection: the code is now much easier to test.

Rather than saying that seven changes affected 28 files and should be ignored, since they are above the threshold of 25 files, your team might instead consider that:

Refactoring operations 1 and 7 are totally worth it.
Operations 2, 3 and 4 are interesting, but cost a lot of time. They shouldn't be done if the project is under time pressure, but should be otherwise.
Operation 6 is cosmetic and has a risk of introducing bugs in the part of the project which uses Reflection (explaining the risks which outweigh the benefits is essential). This operation should be postponed until the whole team is ready to undertake the change.
Operation 5 is not improving anything: the author of the refactoring just rewrote the code to match his own style. Most team members agree that this doesn't make the code more readable, and some claim that they actually find the original version easier to understand. Similar operations should be avoided in the future.

By focusing on individual operations, you make it very easy for your coworker to understand the team's point of view, what is welcome, and what is not. By focusing on quantity, you turn it into nonconstructive "I don't like your way of working" criticism. Note that:

If commits are granular enough (and they should be; for the refactoring listed above, I would expect at least seven commits, and an eighth one implementing the feature itself; ideally, points 3 and 4 should also be subject to multiple commits, leading to more than ten commits overall), rolling back a commit shouldn't cause too much trouble.
If commits are sparse and monolithic (that is, one or two commits for the refactoring operations listed above), there might be a more serious problem in your team. Try to explain to them that one commit should correspond to one change, as small as it could be while remaining independent of other changes.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/280283", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/176070/" ] }