Dataset columns: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
185,925
I am a software developer who works on J2SE (core Java). Often during our code reviews we are asked to reduce the number of lines in our code. It's not about removing redundant code; it's about following a style that focuses on doing the same things with fewer lines of code, while I believe in having clarity in the code even if it means increasing the number of lines. What do you think is the right way of doing things? If LOC (lines of code) is a small number, how does it affect the code? If LOC is a larger number, how does it affect the code? Example from the website "javaranch":

    public static void happyBirthday(int age) {
        if ((age == 16) || (age == 21) || ((age > 21) && (((age % 10) == 0) || ((age % 25) == 0)))) {
            System.out.println("Super special party, this year!");
        } else {
            System.out.println("One year older. Again.");
        }
    }

versus:

    public static void happyBirthday(int age) {
        boolean sweet_sixteen = (age == 16);
        boolean majority = (age == 21);
        boolean adult = (age > 21);
        boolean decade = (age % 10) == 0;
        boolean quarter = (age % 25) == 0;
        if (sweet_sixteen || majority || (adult && (decade || quarter))) {
            System.out.println("Super special party, this year!");
        } else {
            System.out.println("One year older. Again.");
        }
    }
The problem with measurements, no matter how well intended they are, is that the very act of measuring an item makes it important, and, as a corollary, the act of not measuring an item makes it unimportant. It is absolutely essential to measure what is important, and not measure what is unimportant. Measuring SLOC (which is effectively what your reviews are doing) makes SLOC important. Is SLOC important? Absolutely not. It never has been (outside obfuscated programming contests) and never will be in a commercial organization. Ask yourself one simple question: how does "reduce the SLOC of this routine" make anyone's code better? What is probably happening in this case is that SLOC is being used as a naive way to measure complexity. What you must avoid at all costs is counting the easy-to-count beans (objective measures such as SLOC) instead of counting the important but hard-to-count ones, such as readability and complexity.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185925", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76637/" ] }
185,944
Before I start this question, I know the Java 'goto' is a big no-go. So, I've been writing a program and I have some nested loops and statements, and I need to break out of multiple of them on command. Rather than having a load of boolean variables and if (!<booleanName>) break; statements throughout these loops and statements, what is everyone's opinion on using labels to break out of them with the break <label> statement? e.g.

    for (...) {
        indented_example_loops: // label name
        for (...) {
            for (...) {
                if (match)
                    break indented_example_loops;
                    // ^ break statement to break the 2 for loops at once
            }
        }
    }

Perfectly okay? Okay to do occasionally? Completely avoid? Or should I go to a corner and call the Devil to take my soul?
It's all about readability and ease of understanding. The easiest loops to understand are those with simple end conditions and no breaks at all. So if you can write your loops like that, do it.

    while (someCondition) {
        // do stuff
    }

Next comes the loop with a simple end condition and a single break:

    while (someCondition) {
        // do stuff
        if (otherCondition) {
            break;
        }
        // maybe some more stuff
    }

Everything more complex than this is a potential problem. A break with a label is always tricky, because you'll have to hunt for the label to find out which statement/block the break escapes from (i.e. where execution will continue after the labeled block/statement). The worst kind (the one you almost always want to avoid) is a nested loop where you break somewhere in the middle:

    while (someCondition) {
        myLabel:
        for (Foo foo : getAllFoos()) {
            for (Bar bar : foo.getBars()) {
                // do stuff
                if (someCondition(bar)) {
                    break myLabel; // BAD!
                }
                // some more stuff
            }
        }
    }

The problem here is that the actual flow of processing becomes really hard to follow. If you start having code like this, you should definitely refactor it. A common refactoring is to extract one (or more) of the nested loops into a method and turn the break into a return. This has the added benefit of giving a name to that piece of code. If the name is chosen well, it will help greatly in understanding the code.
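To make that last suggestion concrete, here is a minimal Java sketch of the refactoring (reusing the Foo/Bar types from the example above; the matches method and the caller are hypothetical): the nested loops move into their own method, and the labeled break becomes a plain return.

    // Hypothetical sketch: the nested search lives in its own, well-named method,
    // so the early exit is an ordinary return instead of "break myLabel".
    private Bar findMatchingBar(Iterable<Foo> foos) {
        for (Foo foo : foos) {
            for (Bar bar : foo.getBars()) {
                if (matches(bar)) {      // stands in for someCondition(bar)
                    return bar;          // replaces the labeled break
                }
            }
        }
        return null;                     // nothing matched
    }

    // Caller:
    // Bar match = findMatchingBar(getAllFoos());
    // if (match != null) { /* handle the match */ }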
{ "source": [ "https://softwareengineering.stackexchange.com/questions/185944", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80048/" ] }
186,269
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.

I work for a small company in which a few people are developers, the others are QA/Test, and one is a manager. I joined this company 1.5 years ago. Three senior developers have 8+ years of experience. These are the observations I have made about the team leads (considering that I am a fresher with less experience than them in all aspects):

They never discuss anything 1:1 and they never consider a junior's suggestion (I agree that it's up to them whether they accept it or not, but at least they should consider the opinion). As senior team leads they could try to refactor the codebase with new technologies (taking into account that rolling out new technologies is possible, and that the other developers and the infrastructure are also ready), but these team leads feel insecure working with new technologies, as they are not up to date (the reason I say this is that they don't know current programming trends, such as popular open source projects like Modernizr, Bootstrap, and many others). In our codebase more than 10,000 lines are repeated, so I told them about DRY: Don't Repeat Yourself. Their reply was: "It is a fascinating article, but never works in practice." I told them that even if we do not make it 100% DRY, we could at least use interfaces, but that also was not considered (interfaces can be added for new features without touching the previous codebase, if they are not ready to refactor). All the senior developers do is maintenance and hot-fixing of patches. The rest of the time they just spend on entertainment sites. They are just happy to finish the task. Is introducing new technology bad (including the factor of feasibility)? The manager is also the least concerned about the things I am talking about. A junior expects to be able to learn many things from a team lead (not by asking for help or having the senior code for them).

My questions are:

- Am I too aggressive about the changes which I am proposing?
- What should I expect from senior dev leads who have 8+ years of experience?
- Am I wrong to expect to learn and gain experience from a company?

Update: Why they feel DRY is impractical: because they don't want to get involved with OOP concepts. They are happy with repetitive tasks. New technologies I am proposing: minification of CSS and JS, sprite images; usage of interfaces, .NET Framework 4, generics, and many others; client-side libraries such as Modernizr, Knockout.js, and Bootstrap for responsive design.
Am I too aggressive about the changes which I am proposing?

Without specifics (what new techs you're proposing, why they're rejecting them, where they feel that DRY is impractical and why, etc.), it's hard to evaluate the amount of merit in your proposals, and that's important for judging your aggressiveness. If you want them to use a new framework because you think it's new and cool, then pushing more than lightly is too aggressive. If they're really slamming thousands of lines of copy/paste into the codebase (i.e. they're writing crap), then I'd say more aggressiveness is warranted. But this also depends on the interpersonal dynamics between you and them. My advice would be to ask yourself, "Could I demonstrate that my suggestions would benefit the company?" If the answer is yes, then I'd say you have some license to try to push.

What should I expect from senior dev leads who have 8+ years of experience?

This will run the gamut. You'll sometimes get some really sharp people that you can learn a lot from, both in terms of office-politics navigation and technical considerations. Unfortunately, you also get a lot of the opposite. You'll find no shortage of people whose 8+ years of experience basically just amount to doing the bare minimum not to get fired. If you find a mentor or someone who is really sharp, hold onto that as much as you can, because it's less common than it ought to be.

Am I wrong to expect good learning from a company?

People to learn from are out there, and they're at some companies. You seem to be faced with a common dilemma and, to paraphrase the .NET Rocks guys, this is worth considering: "Change your company... or change your company." Meaning, if you believe in certain core approaches and principles and you find yourself consistently unable to sell them and gain the freedom to do and learn the things you want to do and learn, it's worth considering a search for a company that's a better fit for you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186269", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33673/" ] }
186,277
This is a bit pedantic, but I've seen some people use Id, as in:

    private int userId;
    public int getUserId();

and others use:

    private int userID;
    public int getUserID();

Is one of these a better name than the other? Why? I've seen this done very inconsistently in large projects. If I were to set a standard, which would most people be familiar with? Which is the conventional standard?
Consistency is king; pick one or the other, but do it consistently everywhere. That said, I prefer the first variation, because it doesn't violate camelCase (doing so means you have two style rules to remember, not just one). Two capital letters are sometimes used because of this, but an ID is really just a form of Id-entification.
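To illustrate, here is a tiny Java sketch of the camelCase-consistent choice (the class and its fields are made up for the example): acronyms and abbreviations are treated like ordinary words, so only the one camelCase rule applies.

    public class User {
        private int userId;            // "Id" treated as a word, not "userID"
        private String htmlSnippet;    // same rule applied to another abbreviation

        public int getUserId() {
            return userId;
        }
    }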
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186277", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2279/" ] }
186,293
Where does the word "argument" (in the programming sense) come from? i.e. Why are actual parameters called "arguments"? The meanings don't seem related, and I haven't found any explanation of it anywhere. Note on the terminology: "Formal" parameters (also known simply as "parameters") are the "placeholder" names (say, x ) -- the declared parameters of a function. "Actual" parameters (also known as "arguments") are the actual values which are passed to a function (say, 5 ), hence I used this term above to prevent any confusion.
The term was adopted by computer scientists when they applied mathematical reasoning to programming in the mid 20th century. The word argument has the general sense of something from which another thing may be deduced. It comes ‘from the L. arguere “make clear, make known, prove, declare, demonstrate,” from PIE *argu-yo-, from root *arg- “to shine, be white, bright, clear”’, which root is also preserved in the words argent (“silvery white”) and Argentina (“[river] of silver”).¹

Its use in English to mean a “mathematical quantity from which another … quantity may be deduced, or on which its calculation depends” is attested as early as 1386:

Argument (ā·ɹgiuměnt). [a. F. argument (13th c.), ad. L. argūment-um, f. arguěre (or refashioning, after this, of OF. arguement, f. arguer): see ARGUE. For use of the L. form, see 3 c.]
2. Astr. and Math. The angle, arc, or other mathematical quantity, from which another required quantity may be deduced, or on which its calculation depends.
c 1386 CHAUCER Frankl. T. 549 Hise othere geeris, As been his centris and hise Argumentz.
c 1391 — Astrol. xliv. 54 To knowe the mene mote and the argumentis of any planete.
1796 HUTTON Math. Dict. I. 141/2 Annual argument of the moon’s apogee . . is the distance of the sun’s place from the place of the moon’s apogee.
1879 THOMPSON & TAIT Nat. Phil. I. 1. § 54 An arc of the circle referred to . . is the Argument of the harmonic motion.²
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186293", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11833/" ] }
186,663
What traits/skills does the Business Intelligence Developer role require beyond those of a traditional Web Developer role?
First of all, let's define Business Intelligence. It's simply trying to make sense of the data a business already has. As an example, consider a company which sells toys and stores a record for each toy it sells. This record contains the information about the country in which the toy was sold. Now, the manager of the company wants to see in which countries sales are higher, so that next year the distribution plan can be made more efficient for those countries. He/she needs a report of the sales figures in different countries. This is an example of business intelligence.

Now, to get to this report, somebody has to get the data out of the database (the storage place, anywhere, even an Excel file). But wait, what if the total number of records you have in your company exceeds, say, 50 million? Do you want to query over them each time you want to create that report? Even worse than that, what if your database is under a huge transactional load, with many records being inserted into it, while you try to execute a very costly query against it?

These problems led to a body of knowledge growing out of the solutions developers proposed. For example, you might create another database and run a job each night to replicate these databases, so that tomorrow you can execute your query on a database which is not under live transactions. Some concepts come to mind here, like OLAP (Online Analytical Processing) vs. OLTP (Online Transactional Processing), Data Warehousing, Data Mining, Cubes, tools for BI like SQL Server Reporting Services and SQL Server Analysis Services, and many other concepts, which are not related to being a web developer at all.
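As a rough sketch of the kind of report described above (the connection URL, table, and column names are all hypothetical, and this assumes a suitable JDBC driver is on the classpath), the sales-by-country figures could be pulled from a reporting copy of the database like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SalesByCountryReport {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string pointing at the reporting replica,
            // not the live transactional database.
            try (Connection conn = DriverManager.getConnection("jdbc:yourdb://reporting-host/toyshop");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT country, SUM(amount) AS total_sales " +
                     "FROM toy_sales GROUP BY country ORDER BY total_sales DESC")) {
                while (rs.next()) {
                    System.out.println(rs.getString("country") + ": " + rs.getLong("total_sales"));
                }
            }
        }
    }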
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186663", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62298/" ] }
186,696
I know that when building applications (native or web) such as those in the Apple AppStore or Google Play app store that it's very common to use a Model-View-Controller architecture. However, is it reasonable to also create applications using the Component-Entity-System architecture common in game engines?
However, is it reasonable to also create applications using the Component-Entity-System architecture common in game engines?

To me, absolutely. I work in visual FX and studied a wide variety of systems in this field and their architectures (including CAD/CAM), hungry for SDKs and any papers that would give me a sense of the pros and cons of the seemingly infinite architectural decisions that could be made, with even the most subtle ones not always making a subtle impact.

VFX is rather similar to games in that there is one central concept of a "scene", with viewports that display the rendered results. There also tends to be a lot of central, loopy processing revolving around this scene constantly in animation contexts, where there might be physics happening, particle emitters spawning particles, meshes being animated and rendered, motion animations, etc., with everything ultimately rendered to the user at the end. Another concept similar to at least very complex game engines was the need for a "designer" aspect where designers could flexibly design scenes, including the ability to do some lightweight programming of their own (scripts and nodes).

I found, over the years, that ECS made the best fit. Of course that's never completely divorced from subjectivity, but I would say it strongly appeared to give the fewest problems. It solved a lot of the major problems we were always struggling with, while only giving us a few new minor ones in return.

Traditional OOP

More traditional OOP approaches can be really strong when you have a firm grasp of the design requirements upfront but not the implementation requirements. Whether through a flatter multiple-interface approach or a more nested, hierarchical ABC approach, it tends to cement the design and make it more difficult to change while making the implementation easier and safer to change. There's always a need for instability in any product that goes past a single version, so OOP approaches tend to skew stability (difficulty of change and lack of reasons for change) towards the design level, and instability (ease of change and reasons for change) towards the implementation level.

However, against evolving user-end requirements, both design and implementation may need to change frequently. You might find something weird like a strong user-end need for the analogical creature that needs to be both plant and animal at the same time, completely invalidating the entire conceptual model you built. Normal object-oriented approaches don't protect you here, and can sometimes make such unanticipated, concept-breaking changes even harder. When very performance-critical areas are involved, the reasons for design changes further multiply.

Combining multiple, granular interfaces to form the conforming interface of an object can help a lot in stabilizing client code, but it doesn't help in stabilizing the subtypes, whose number could sometimes dwarf the number of client dependencies. You can have one interface being used by only part of your system, for example, but with a thousand different subtypes implementing that interface. In that case, maintaining the complex subtypes (complex because they have so many disparate interface responsibilities to fulfill) can become the nightmare, rather than the code using them through an interface. OOP tends to transfer complexity to the object level, while ECS transfers it to the client ("systems") level, and that can be ideal when there are very few systems but a whole bunch of conforming "objects" ("entities").
A class also owns its data privately, and thus can maintain invariants all on its own. Nevertheless, there are "coarse" invariants that can actually still be hard to maintain when objects interact with each other. For a complex system as a whole to be in a valid state, you often need to consider a complex graph of objects, even if their individual invariants are properly maintained. Traditional OOP-style approaches can help with maintaining granular invariants, but can actually make it difficult to maintain broad, coarse invariants if the objects focus on teeny facets of the system.

That's where these kinds of lego-block-building ECS approaches or variants can be so helpful. Also, with systems being coarser in design than the usual object, it becomes easier to maintain those kinds of coarse invariants at the bird's-eye view of the system. A lot of teeny object interactions turn into one big system focusing on one broad task, instead of teeny little objects focusing on teeny little tasks with a dependency graph that would cover a kilometer of paper.

Yet I had to look outside of my field, at the gaming industry, to learn about ECS, though I was always of a data-oriented mindset. Also, funnily enough, I almost made my way towards ECS on my own just by iterating and trying to come up with better designs. I didn't make it all the way, though, and missed a very crucial detail, which is the formalization of the "systems" part, and squashing components all the way down to raw data. I'll try to go through how I ended up settling on ECS, and how it ended up solving all the problems with previous design iterations. I think that'll help to highlight exactly why the answer here could be a very strong "yes", that ECS is potentially applicable far beyond the gaming industry.

1980s Brute Force Architecture

The first architecture I worked on in the VFX industry had a long legacy that was already going past a decade when I joined the company. It was brute-force, crude C coding all the way (not a slant on C, as I love C, but the way it was being used here was really crude). A miniature and oversimplistic slice resembled dependencies like this: [diagram omitted: "Rendering", "Physics", and "Motion" clients all depending on one generic object]. And this is an enormously simplified diagram of one tiny piece of the system. Each of these clients would get some "generic" object through which they would check a type field, like so:

    void transform(struct Object* obj, const float mat[16])
    {
        switch (obj->type)
        {
        case camera:
            // cast to camera and do something with camera fields
            break;
        case light:
            // cast to light and do something with light fields
            break;
        ...
        }
    }

Of course with significantly uglier and more complex code than this. Often additional functions would be called from these switch cases which would recursively do the switch again and again and again. This diagram and code might almost look like ECS-lite, but there was no strong entity-component distinction ("is this object a camera?", not "does this object provide motion?"), and no formalization of "system" (just a bunch of nested functions going all over the place and mixing up responsibilities). In that case, just about everything was complicated; any function was a potential disaster waiting to happen.
Our testing procedure here often had to check things like meshes separately from other types of items, even if the identical thing was happening to both, since the brute-force nature of the coding here (often accompanied by a lot of copy and paste) made it very probable that what is otherwise the exact same logic could fail from one item type to the next. Trying to extend the system to handle new types of items was pretty hopeless, even though there was a strongly expressed user-end need, as it was too difficult when we were struggling so much just to handle the existing types of items.

Some pros:

- Uhh... doesn't take any engineering experience, I guess? This system does not require any knowledge of even basic concepts like polymorphism; it's totally brute force, so I guess even a beginner might be able to understand some of the code even if a pro at debugging can barely maintain it.

Some cons:

- Maintenance nightmare. Our marketing team actually felt the need to boast that we fixed over 2000 unique bugs in one 3-year cycle. To me it's something to be embarrassed about that we had so many bugs in the first place, and that process probably still only fixed around 10% of the total bugs, which were growing in number all the time.
- About the most inflexible solution possible.

1990s COM Architecture

Most of the VFX industry uses this style of architecture, from what I've gathered reading documents about their design decisions and glancing at their software development kits. It may not exactly be COM at the ABI level (some of these architectures could only have plugins written using the same compiler), but it shares a lot of similar characteristics, with interface queries made on objects to see what interfaces their components support. With this kind of approach, the analogical transform function above came to resemble this form:

    void transform(Object obj, const Matrix& mat)
    {
        // Wrapper that performs an interface query to see if the
        // object implements the IMotion interface.
        MotionRef motion(obj);

        // If the object supported the IMotion interface:
        if (motion.valid())
        {
            // Transform the item through the IMotion interface.
            motion->transform(mat);
            ...
        }
    }

This is the approach the new team working on that old codebase settled on, to eventually refactor towards. And it was a dramatic improvement over the original in terms of flexibility and maintainability, but there were still some issues I'll cover in the next section.

Some pros:

- Dramatically more flexible/extensible/maintainable than the previous brute-force solution.
- Promotes a strong conformance to many principles of SOLID by making every interface completely abstract (stateless, no implementation, only pure interfaces).

Some cons:

- Lots of boilerplate. Our components had to be published through a registry in order to instantiate objects; the interfaces they supported required both inheriting ("implementing" in Java) the interface and providing some code to indicate which interfaces were available in a query.
- Promoted duplicated logic all over the place as a result of the pure interfaces. For example, all components that implemented IMotion would always have the exact same state and exact same implementation for all of the functions.
To mitigate this, we'd start centralizing base classes and helper functionality throughout the system for the things that would tend to be redundantly implemented the same way for the same interface, possibly with multiple inheritance going on behind the scenes; it was pretty messy under the hood even though the client code had it easy.

- Inefficiency: vtune sessions often showed the basic QueryInterface function showing up as a middle-to-upper hotspot, and occasionally even the #1 hotspot. To mitigate that, we'd do things like have the rendering parts of the codebase cache a list of objects already known to support IRenderable, but that significantly escalated the complexity and maintenance costs. Likewise, this was more difficult to measure, but we noticed some definite slowdowns compared to the C-style coding we were doing before, when every single interface required a dynamic dispatch. Things like branch mispredictions and optimization barriers are difficult to measure outside a little facet of code, but the users were just generally noticing the responsiveness of the user interface and things like that getting worse, comparing previous and newer versions of the software side-by-side for areas where the algorithmic complexity didn't change, only the constants.
- It was still difficult to reason about correctness at a broader system level. Even though it was significantly easier than the previous approach, it was still hard to grasp the complex interactions between objects throughout this system, especially with some of the optimizations that started to become necessary against it.
- We had trouble getting our interfaces correct. Even though there might only be one broad place in the system that uses an interface, user-end requirements would change over versions, and we would end up having to do cascading changes to all classes that implement the interface to accommodate a new function added to the interface, for example, unless there was some abstract base class that was already centralizing the logic under the hood (some of these would manifest in the middle of these cascading changes in the hope of not repeatedly doing this again and again).

Pragmatic Response: Composition

One of the things we were noticing before (or at least I was) that was causing issues was that IMotion might be implemented by 100 different classes but with the exact same implementation and associated state. Furthermore, it would only be used by a handful of systems like rendering, keyframed motion, and physics. So in such a case, we might have something like a 3-to-1 relationship between the systems using the interface and the interface, and a 100-to-1 relationship between the subtypes implementing the interface and the interface. The complexity and maintenance would then be drastically skewed towards the implementation and maintenance of 100 subtypes, instead of 3 client systems which depend on IMotion. This shifted all of our maintenance difficulties to the maintenance of these 100 subtypes, not the 3 places using the interface. Updating 3 places in the code with few or no "indirect efferent couplings" (as in dependencies to it, but indirectly through an interface, not a direct dependency) is no big deal; updating 100 subtype places with a boatload of "indirect efferent couplings" is a pretty big deal.*
* I realize it's odd and wrong to screw with the definition of "efferent couplings" in this sense from an implementation perspective; I just haven't found a better way to describe the maintenance complexity involved when both the interface and the corresponding implementations of a hundred subtypes must change.

So I had to push hard, but I proposed that we try to become a little more pragmatic and relax the whole "pure interface" idea. It made no sense to me to make something like IMotion completely abstract and stateless unless we saw a benefit to it having a rich variety of implementations. In our case, for IMotion to have a rich variety of implementations would actually turn into quite a maintenance nightmare, as we didn't want variety. Instead we were iterating towards trying to make a single motion implementation that's really good against changing client requirements, and were often working around the pure-interface idea, trying to force every implementor of IMotion to use the same associated implementation and state so that we didn't duplicate goals.

Interfaces thus became more like broad Behaviors associated with an entity. IMotion would simply become a Motion "component" (I changed the way we defined "component" away from COM to something closer to the usual definition: a piece making up a "complete" entity). Instead of this:

    class IMotion
    {
    public:
        virtual ~IMotion() {}
        virtual void transform(const Matrix& mat) = 0;
        ...
    };

We evolved it to something more like this:

    class Motion
    {
    public:
        void transform(const Matrix& mat)
        {
            ...
        }
        ...

    private:
        Matrix transformation;
        ...
    };

This is a blatant violation of the dependency inversion principle, to start shifting away from the abstract back to the concrete, but to me such a level of abstraction is only useful if we can foresee a genuine need for such flexibility in some future, beyond a reasonable doubt and without exercising ridiculous "what if" scenarios completely detached from user experience (which would probably require a design change anyway).

So we started evolving to this design. QueryInterface became more like QueryBehavior. Furthermore, it started to seem pointless to use inheritance here. We used composition instead. Objects turned into a collection of components whose availability could be queried and injected at runtime.

Some pros:

- Was still a lot easier to maintain in our case than the previous, pure-interface COM-style system. Unforeseen surprises like a change in requirements or workflow complaints could be accommodated more easily with one very central and obvious Motion implementation, for example, and not dispersed across a hundred subtypes.
- Gave a whole new level of flexibility of the kind we actually needed. In our previous system, since inheritance models a static relationship, we could only effectively define new entities at compile time in C++. We couldn't do it from the scripting language, for example. With the composition approach, we could string together new entities on the fly at runtime by merely attaching components to them and adding them to a list. An "entity" turned into a blank canvas upon which we could just throw together a collage of whatever we needed on the fly, with relevant systems automatically recognizing and processing these entities as a result.

Some cons:

- We were still having a hard time in the efficiency department, and with maintainability in the performance-critical areas.
Each system would still end up wanting to cache components of entities that provided these behaviors to avoid looping through them all repeatedly and checking what was available. Each system demanding performance would do this ever so slightly differently, and was prone to a different set of bugs in failing to update this cached list, and possibly a data structure (if some form of search was involved, like frustum culling or raytracing), on some obscure scene-change event, for example.

- There was still something awkward and complex that I couldn't put my finger on related to all these granular little behavioral, simple objects. We still spawned a lot of events to deal with the interactions between these "behavior" objects that were sometimes necessary, and the result was very decentralized code. Each little object was easy to test for correctness and, taken individually, was often perfectly correct. Yet it still felt like we were trying to maintain a massive ecosystem composed of little villages, trying to reason about what they all individually do and add up to make as a whole. The C-style 80s codebase felt like one epic, overpopulated megalopolis which was definitely a maintenance nightmare, but dispersing that complexity across too many teeny villages was also making it hard to think about the system at a bird's-eye view without getting overwhelmed by the complexity of the interactions between everything.
- Loss of flexibility with the lack of abstraction, but in an area where we never actually encountered a genuine need for it, so hardly a practical con (though definitely at least a theoretical one).
- Preserving ABI compatibility was always hard, and this made it harder by requiring stable data and not just a stable interface associated with a "behavior". However, we could easily add new behaviors and simply deprecate existing ones if a state change was needed, and that was arguably easier than doing backflips underneath the interfaces at the subtype level to handle versioning concerns.

One phenomenon that occurred was that, since we lost the abstraction on these behavioral components, we had more of them. For example, instead of an abstract IRenderable component, we'd attach an object with a concrete Mesh or PointSprites component. The rendering system would know how to render Mesh and PointSprites components and would find entities that provide such components and draw those. At other times, we had miscellaneous renderables like SceneLabel that we discovered we needed in hindsight, and so we'd attach a SceneLabel in those cases to relevant entities (possibly in addition to a Mesh). The rendering system would then be updated to know how to render entities that provided those, and that was a pretty easy change to make. In this case, an entity composed of components could also then be used as a component of another entity. We'd build things up that way by hooking up lego blocks.

ECS: Systems and Raw Data Components

That last system was as far as I made it on my own, and we were still bastardizing it with COM. It felt like it was wanting to become an entity-component system, but I was not familiar with that at the time. I was looking around at COM-style examples, which saturated my field, when I should have been looking at AAA game engines for architectural inspiration. I finally started doing that. What I was missing were several key ideas: The formalization of "systems" to process "components". "Components" being raw data rather than behavioral objects composed together into a bigger object.
Entities as nothing more than a strict ID associated to a collection of components. I finally left that company and started working on an ECS as an indy (still working on it while draining my savings), and it has been the easiest system to manage by far. What I noticed with the ECS approach was that it solved the problems I was still struggling with above. Most importantly to me, it felt like we were managing healthy-sized "cities" instead of teeny little villages with complex interactions. It wasn't as hard to maintain as a monolithic "megalopolis", too big in its population to effectively manage, but wasn't as chaotic as a world filled with tiny little villages interacting with each other where just thinking about the trade routes in between them formed a nightmarish graph. ECS distilled all the complexity towards bulky "systems", like a rendering system, a healthy-sized "city" but not an "overpopulated megalopolis". Components becoming raw data felt really weird to me at first, as it breaks even the basic information hiding principle of OOP. It was kind of challenging one of the biggest values I held dear about OOP, which was its ability to maintain invariants which required encapsulation and information hiding. But it started to become a non-concern as it quickly became obvious what was happening with just a dozen or so broad systems transforming that data instead of such logic being dispersed across hundreds to thousands of subtypes implementing a combo of interfaces. I tend to think of it like still in an OOP-style fashion except spread out where the systems are providing the functionality and implementation that access the data, the components are providing the data, and the entities are providing components. It became even easier , counter-intuitively, to reason about the side effects caused by the system when there were just a handful of bulky systems transforming the data in broad passes. The system became a lot "flatter", my call stacks became shallower than ever before for each thread. I could think about the system at that overseer level and not run into weird surprises. Likewise, it made even the performance-critical areas simple with respect to eliminating those queries. Since the idea of "System" became very formalized, a system could subscribe to the components it was interested in, and just be handed a cached list of entities which satisfy that criteria. Each individual one didn't have to manage that caching optimization, it became centralized to a single place. Some pros: Seems to just solve almost every major architectural problem I was encountering in my career without ever feeling trapped in a design corner when encountering unanticipated needs. Some cons: I still have a hard time wrapping my head around it sometimes, and it's not the most mature or well-established paradigm even within the game industry, where people argue about exactly what it means and how to do things. It's definitely not something I could have done with the former team I worked with, which consisted of members deeply hooked to the COM-style mindset or the 1980s C-style mindset of the original codebase. 
Where I get confused sometimes is like how to model graph-style relationships between components, but I've always found a solution which didn't turn out to be horrible later where I can just make a component dependent upon another one ("this motion component depends on this other one as a parent, and the system will use memoization to avoid repeatedly doing the same recursive motion calculations", e.g.) ABI is still difficult, but so far I'd even venture to say that it's easier than pure interface approach. It's a shift in mindset: data stability becomes the sole focus for ABI, rather than interface stability, and in some ways it's easier to achieve data stability than interface stability (ex: no temptations to change a function just because it needs a new parameter. That kind of stuff happens inside coarse system implementations which don't break ABI). However, is it reasonable to also create applications using the Component-Entity-System architecture common in game engines? So anyway, I'd say absolutely "yes", with my personal VFX example being a strong candidate. But that's still fairly similar to the needs of gaming. I haven't put it to practice in more remote areas completely detached from the concerns of game engines (VFX is quite similar), but it seems to me like far more areas are good candidates for an ECS approach. Maybe even a GUI system would be suitable for one, but I still use a more OOP approach there (but without deep inheritance unlike Qt, e.g.). It's widely-unexplored territory, but it seems suitable to me whenever your entities can be composed of a rich combination of "traits" (and exactly what combo of traits they provide being ever subject to change), and where you have a handful of generalized systems that process entities that have the necessary traits. It becomes a very practical alternative in those cases to any scenario where you might be tempted to use something like multiple inheritance or an emulation of the concept (mixins, e.g.) only to produce hundreds or more combos in a deep inheritance hierarchy or hundreds of combos of classes in a flat hierarchy implementing a specific combo of interfaces, but where your systems are few in number (dozens, e.g.). In those cases, the complexity of the codebase starts to feels more proportional to the number of systems instead of the number of type combinations, since each type is now just an entity composing components which are nothing more than raw data. GUI systems naturally fit these kinds of specs where they might have hundreds of possible widget types combined from other base types or interfaces, but only a handful of systems to process them (layout system, rendering system, etc). If a GUI system used ECS, it would probably be so much easier to reason about the correctness of the system when all the functionality is provided by a handful of these systems instead of hundreds of different object types with inherited interfaces or base classes. If a GUI system used ECS, widgets would have no functionality, only data. Only the handful of systems that process widget entities would have functionality. How overridable events for a widget would be handled is beyond me, but just based on my limited experience so far, I haven't found a case where that type of logic could not be transferred centrally to a given system in a way that, in hindsight, yielded a much more elegant solution that I'd ever expect. I would love to see it employed in more fields, as it was a lifesaver in mine. 
Of course it's ill-suited if your design doesn't break down this way, from entities aggregating components to coarse systems that process those components, but if they naturally fit this kind of model, it is the most wonderful thing I've encountered yet.
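To make the entity/component/system split concrete, here is a minimal, hypothetical Java sketch of the core idea described in this answer (all names are invented, and real implementations use much more careful component storage): entities are just IDs, components are plain data, and a system supplies all the behavior.

    import java.util.HashMap;
    import java.util.Map;

    // Components are plain data with no behavior of their own.
    class Position { float x, y; }
    class Velocity { float dx, dy; }

    // An entity is nothing more than an ID; components are stored per entity ID.
    class World {
        int nextId = 0;
        Map<Integer, Position> positions = new HashMap<>();
        Map<Integer, Velocity> velocities = new HashMap<>();

        int createEntity() { return nextId++; }
    }

    // A system supplies the behavior: it processes every entity that has the
    // components it cares about (here: both a Position and a Velocity).
    class MotionSystem {
        void update(World world, float dt) {
            for (Map.Entry<Integer, Velocity> entry : world.velocities.entrySet()) {
                Position p = world.positions.get(entry.getKey());
                if (p != null) {
                    p.x += entry.getValue().dx * dt;
                    p.y += entry.getValue().dy * dt;
                }
            }
        }
    }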
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186696", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18210/" ] }
186,727
I recently started at a new company, with a handful of programmers. It's a medium-sized company, with around 70 employees, but IT only has 9-10, and there are 3 "programmers" besides myself. However, these guys have very limited experience and are doing a lot of stuff really terribly. For example, one of our projects is a PHP website. The majority of the code is stored in a 20,000-line PHP controller, with ~6,000 lines of JavaScript embedded in the PHP. I keep making small suggestions here and there, but nobody has been listening; everyone says they are too busy to implement my suggestions. The thing is, they shouldn't be that busy, and wouldn't be if things were done right. They spend most of their time fixing things that keep breaking. If each project was built correctly, I could do it all myself. What approach should I take to convince these guys or the manager that things need to change, and that changing things will save a bunch of time? Should I skip trying to convince my coworkers and go straight to the manager, with a business-y proposal on how the company will save a bunch of money if they start doing things correctly?
I've found that the primary cause of sloppy work, outside of the programmer simply not caring, is a lack of knowledge. Unfortunately, in a lot of environments, lack of knowledge is looked down on rather than openly discussed. Some techniques that I've used with success to foster discussion, growth, and general excitement about programming are:

- Weekly brown bag tech sessions (have them research a topic and present).
- Daily or weekly one-on-one mentoring sessions between junior and senior members.
- Code reviews (with an emphasis on learning, not pointing out mistakes).

Learning is contagious. When you foster an environment that encourages learning, you not only produce better developers, but show others on your team that they are part of something bigger than a way to get a paycheck.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186727", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3382/" ] }
186,748
Firstly, I'm not exactly sure if this question is a better fit over here or on workplace.SE, so forgive me if it is in the wrong place. We are interviewing some candidates for a development position, and currently they are not in our city. We would like to give them simple coding tests to see how they will perform on the typical issues that we face in our daily work. Are there any specific tools geared towards this? Right now we are using Skype, and I feel this tends to decrease the performance of a lot of developers, since they tend to be shy and often can't work when someone is directly staring at them. The problems with sending them the test questions by email are as follows:

- It is not possible to know what their thought process is, since we just see the end result.
- There is no discussion or clarification of the question, which is an important step.
- There is no guarantee that the problems were solved by the candidates themselves. They could send it to a smarter friend, and we wouldn't be able to know.

How are these problems usually solved?
Google uses a shared Google Docs document between the interviewer and the candidate while they talk over the phone. They share the document, which is preset to a fixed-width font, with the candidate in advance in the confirmation email. A Bluetooth headset or speakerphone is recommended for hands-free coding during the phone interview.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186748", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1426/" ] }
186,761
One of my team members, a junior programmer, has impressive programming skills for his level of experience. And during code reviews, I believe in emphasizing learning, not pointing out mistakes. But should junior programmers be involved in code reviews for more senior programmers? Or should code reviews be attended only by programmers with corresponding experience?
The primary purpose of a code review is to find defects or potential problems. The required participants in the review should be the people who are best suited to identify these problems, regardless of their title or seniority. As an example, if an application is being developed in Python and the junior engineer has more experience with the Python language than the senior engineer who wrote the code, then they might be a valuable asset in pointing out alternative methods of doing something, but they may also have less knowledge of the system as a whole. Beyond the experience in the tools and technologies, also consider experience in the application domain. Someone with 20 years of experience but only 1 or 2 in the financial industry may be helped by having an overall less experienced developer with only 5 years of experience all in the financial industry review his work. Inviting less experienced staff members to observe and participate as much as possible the code review process may also be beneficial to allow them to learn a code base, ask questions, and learn about what is expected of them in not only code reviews, but in the code that they produce. However, you probably don't want too many people involved (focusing instead on the people who can fully support the code review and its purpose) in the process. This really applies to any kind of review - requirements, design, code...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186761", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63715/" ] }
186,788
As an example, if I have a Post service and have a method to retrieve all posts for the logged in user, is it OK to have a findPosts() method that uses an injected Security service to get the user ID and then pass that user ID to my post DAO to get the records? Or should the findPosts() method explicitly require the user ID as an argument?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186788", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81284/" ] }
186,842
According to the Wikipedia article on spurious wakeups, "a thread might be awoken from its waiting state even though no thread signaled the condition variable". While I've known about this 'feature', I never knew what actually caused it until I read, in the same article, that "Spurious wakeups may sound strange, but on some multiprocessor systems, making condition wakeup completely predictable might substantially slow all condition variable operations." Sounds like a bug that just isn't worth fixing, is that right?
TL;DR The assumption ("contract") of spurious wakeups is a sensible architectural decision made to allow for realistically robust implementations of the thread scheduler.

"Performance considerations" are irrelevant here; that is just a misunderstanding that became widespread because it was stated in a published authoritative reference (authoritative references can have errors, y'know - just ask Galileo Galilei). The Wikipedia article keeps the reference to the note you quoted just because it perfectly matches their formal guidelines for citing published references.

A much more compelling reason for introducing the concept of spurious wakeups is provided in this answer at SO, which is based on additional details provided in an (older version) of that very article: The Wikipedia article on spurious wakeups has this tidbit: The pthread_cond_wait() function in Linux is implemented using the futex system call. Each blocking system call on Linux returns abruptly with EINTR when the process receives a signal. ... pthread_cond_wait() can't restart the waiting because it may miss a real wakeup in the little time it was outside the futex system call...

Just think of it... like any code, the thread scheduler may experience a temporary blackout due to something abnormal happening in the underlying hardware/software. Of course, care should be taken for this to happen as rarely as possible, but since there's no such thing as 100% robust software, it is reasonable to assume this can happen and to take care of graceful recovery in case the scheduler detects it (e.g. by observing missing heartbeats).

Now, how could the scheduler recover, taking into account that during the blackout it could have missed some signals intended to notify waiting threads? If the scheduler does nothing, the mentioned "unlucky" threads will just hang, waiting forever - to avoid this, the scheduler simply sends a signal to all the waiting threads.

This makes it necessary to establish a "contract" that a waiting thread can be notified without a reason. To be precise, there would be a reason - a scheduler blackout - but since the thread is designed (for a good reason) to be oblivious to scheduler internal implementation details, this reason is likely better presented as "spurious". From the thread's perspective, this somewhat resembles Postel's law (aka the robustness principle), "be conservative in what you do, be liberal in what you accept from others": the assumption of spurious wakeups forces the thread to be conservative in what it does (set the condition when notifying other threads) and liberal in what it accepts (check the condition upon any return from wait and repeat the wait if it's not there yet).
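The practical consequence for application code is the familiar "wait in a loop" idiom. Below is a minimal Java sketch of a bounded buffer (the class and its capacity are made up for illustration) that tolerates spurious wakeups by re-checking its condition after every return from wait():

    import java.util.ArrayDeque;
    import java.util.Deque;

    class BoundedBuffer<T> {
        private final Deque<T> items = new ArrayDeque<>();
        private final int capacity = 10; // hypothetical capacity

        public synchronized void put(T item) throws InterruptedException {
            // Re-check the condition in a loop: returning from wait() does not
            // guarantee the condition holds (spurious wakeups are allowed).
            while (items.size() == capacity) {
                wait();
            }
            items.addLast(item);
            notifyAll();
        }

        public synchronized T take() throws InterruptedException {
            while (items.isEmpty()) {
                wait();
            }
            T item = items.removeFirst();
            notifyAll();
            return item;
        }
    }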
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186842", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36039/" ] }
186,883
My coworker, who is a senior guy, is blocking me on a code review because he wants me to name a method 'PerformSqlClient216147Workaround', since it's a workaround for some defect ###. Now, my proposed method name is something like PerformRightExpressionCast, which tends to describe what the method actually does. His argument goes along the lines of: "Well, this method is used only as a workaround for this case, and nowhere else." Would including the bug number inside of the method name for a temporary workaround be considered bad practice?
I would not name the method as your co-worker suggested. The method name should indicate what the method does. A name like PerformSqlClient216147Workaround does not indicate what it does. If anything, use comments that describe the method to mention that it is a workaround. This could look like the following:

    /**
     * Cast given right-hand SQL expression.
     *
     * Note: This is a workaround for an SQL client defect (#216147).
     */
    public void CastRightExpression(SqlExpression rightExpression)
    {
        ...
    }

I agree with MainMa that bug/defect numbers should not appear in the source code itself but rather in the source control comments, as this is meta-data, but it's not terrible if they appear in the source code comments. Bug/defect numbers should never be used in the names of methods.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186883", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81340/" ] }
186,889
The global interpreter lock (GIL) seems to be often cited as a major reason why threading and the like is a touch tricky in Python - which raises the question "Why was that done in the first place?" Being Not A Programmer, I've got no clue why that might be - what was the logic behind putting in the GIL?
There are several implementations of Python, for example, CPython, IronPython, RPython, etc. Some of them have a GIL, some don't. For example, CPython has the GIL. From http://en.wikipedia.org/wiki/Global_Interpreter_Lock: Applications written in programming languages with a GIL can be designed to use separate processes to achieve full parallelism, as each process has its own interpreter and in turn has its own GIL.

Benefits of the GIL

- Increased speed of single-threaded programs.
- Easy integration of C libraries that usually are not thread-safe.

Why Python (CPython and others) uses the GIL

From http://wiki.python.org/moin/GlobalInterpreterLock: In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe. The GIL is controversial because it prevents multithreaded CPython programs from taking full advantage of multiprocessor systems in certain situations. Note that potentially blocking or long-running operations, such as I/O, image processing, and NumPy number crunching, happen outside the GIL. Therefore it is only in multithreaded programs that spend a lot of time inside the GIL, interpreting CPython bytecode, that the GIL becomes a bottleneck.

From http://www.grouplens.org/node/244: Python has a GIL as opposed to fine-grained locking for several reasons:

- It is faster in the single-threaded case.
- It is faster in the multi-threaded case for I/O-bound programs.
- It is faster in the multi-threaded case for CPU-bound programs that do their compute-intensive work in C libraries.
- It makes C extensions easier to write: there will be no switch of Python threads except where you allow it to happen (i.e. between the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS macros).
- It makes wrapping C libraries easier. You don't have to worry about thread-safety. If the library is not thread-safe, you simply keep the GIL locked while you call it.

The GIL can be released by C extensions. Python's standard library releases the GIL around each blocking I/O call. Thus the GIL has no consequence for the performance of I/O-bound servers. You can thus create networking servers in Python using processes (fork), threads or asynchronous I/O, and the GIL will not get in your way. Numerical libraries in C or Fortran can similarly be called with the GIL released. While your C extension is waiting for an FFT to complete, the interpreter will be executing other Python threads. A GIL is thus easier and faster than fine-grained locking in this case as well. This constitutes the bulk of numerical work. The NumPy extension releases the GIL whenever possible. Threads are usually a bad way to write most server programs. If the load is low, forking is easier. If the load is high, asynchronous I/O and event-driven programming (e.g. using Python's Twisted framework) is better. The only excuse for using threads is the lack of os.fork on Windows. The GIL is a problem if, and only if, you are doing CPU-intensive work in pure Python. Here you can get a cleaner design using processes and message-passing (e.g. mpi4py). There is also a 'processing' module in the Python cheese shop that gives processes the same interface as threads (i.e. replace threading.Thread with processing.Process). Threads can be used to maintain the responsiveness of a GUI regardless of the GIL. If the GIL impairs your performance (cf. the discussion above), you can let your thread spawn a process and wait for it to finish.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186889", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37340/" ] }
186,959
I have been having a debate about what to do with a trailing slash in a RESTful API. Let's say I have a resource called dogs and subordinate resources for individual dogs. We can therefore do the following: GET/PUT/POST/DELETE http://example.com/dogs GET/PUT/POST/DELETE http://example.com/dogs/{id} But what do we do with the following special case: GET/PUT/POST/DELETE http://example.com/dogs/ My personal view is that this is saying: send a request to an individual dog resource with id = null. I think the API should return a 404 for this case. Others say the request is accessing the dogs resource, i.e. the trailing slash is ignored. Does anyone know the definitive answer?
None of this is authoritative (as REST has no exact meaning). But from the original paper on REST, a full URL (not ending in /) names a resource, while one ending in a slash '/' is a resource group (probably not worded like that). A GET of a URL with a slash on the end is supposed to list the resources available. GET http://example.com/dogs/ /* List all the dogs resources */ A PUT on a URL with a slash is supposed to replace all the resources. PUT http://example.com/dogs/ /* Replace all the dogs resources */ A DELETE on a URL with a slash is supposed to delete all the resources. DELETE http://example.com/dogs/ /* Deletes all the dogs resources */ A POST on a URL with a slash is supposed to create a new resource that can then be subsequently accessed. To be conformant, the new resource should be in this directory (though lots of RESTful architectures cheat here). POST http://example.com/dogs/ /* Creates a new dogs resource (notice singular) */ etc. The Wikipedia page on the subject seems to explain it well: https://en.wikipedia.org/wiki/Representational_state_transfer#Applied_to_web_services.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/186959", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/74944/" ] }
187,126
So, I work in .Net. I make open source projects in .Net. One of my biggest problems with it isn't necessarily with .Net, but with the community and frameworks around it. It seems everywhere that magical naming schemes and strings are treated as the best way to do everything. Bold statement, but look at it: ASP.Net MVC: Hello world route: routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); What this means is that ASP.Net MVC will somehow look up HomeController in your code. Somehow make a new instance of it, and then call the function Index apparently with an id parameter of some sort. And then there are other things like: RenderView("Categories", categories); ...or.. ViewData["Foobar"]="meh"; And then there are similar things with XAML as well. DataContext is treated as an object and you have to hope and pray that it resolves to the type you want. DependencyProperties must use magic strings and magic naming conventions. And things like this: MyData myDataObject = new MyData(DateTime.Now); Binding myBinding = new Binding("MyDataProperty"); myBinding.Source = myDataObject; Although it relies more on casting and various magical runtime supports. Anyway, I say all that to end up here: Why is this so well tolerated in the .Net world? Aren't we using statically typed languages to almost always know what the type of things are? Why are reflection and type/method/property/whatever names (as strings) preferred so much in comparison to generics and delegates or even code generation? Are there inherent reasons that I'm missing for why ASP.Net's routing syntax relies almost exclusively on reflection to actually resolve how to handle a route? I hate it when I change the name of a method or property and suddenly things break, but there don't appear to be any references to that method or property and there are of course no compiler errors. Why was the apparent convenience of magic strings considered "worth it"? I know there are also commonly statically typed alternatives to some things, but they usually take a backseat and seem to never be in tutorials or other beginner material.
Actually there is a pushback in the .NET world against these very things you mentioned. In the first example you gave, however, the routing engine is given a convention for mapping the default route. The very fact that the routes are dynamic makes it nigh impossible to use a static configuration. You also mention XAML/WPF, both of which were under development well before generics were introduced into .NET, and going back to support generics would have delayed an already very late product (Longhorn/Vista) even further. There are examples within the ASP.NET MVC framework of using lambda expressions in place of magic strings, and the Entity Framework/LINQ takes it even further: the language and framework provide native support for composing SQL queries over a static object graph (instead of constructing magic SQL strings, you get compile time validation of your queries). For other examples of static configuration see structuremap and other modern dependency injection containers, and other frameworks that need to inspect the object graph at runtime but allow the developer to statically provide hints using lambda expressions. So the short answer is that historically, .NET did not support static traversal of an object graph until the 3.5 release. Now that we have it, many developers prefer it over magic strings and many have been pushing for even deeper support such as a symbolOf operator that works similarly to the typeOf operator.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187126", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/483/" ] }
187,133
I'm wondering what is better in terms of good OOP design, clean code, flexibility and avoiding code smells in the future. Imagine a situation where you have a lot of very similar objects you need to represent as classes. These classes are without any specific functionality, just data classes, and are different just by name (and context). Example: Class A { String name; String description; } Class B { String name; String count; String description; } Class C { String name; String count; String description; String imageUrl; } Class D { String name; String count; } Class E { String name; String count; String imageUrl; String age; } Would it be better to keep them in separate classes, to get "better" readability, but with a lot of code repetition, or would it be better to use inheritance to be more DRY? By using inheritance, you will end up with something like this, but class names will lose their contextual meaning (because of the is-a, not has-a): Class A { String name; String description; } Class B : A { String count; } Class C : B { String imageUrl; } Class D : C { String age; } I know inheritance is not and should not be used for code reuse, but I don't see any other possible way to reduce code repetition in such a case.
The general rule reads "Prefer delegation over inheritance", not "avoid all inheritance". If the objects have a logical relationship and a B can be used wherever an A is expected, it is good practice to use inheritance. However, if the objects just happen to have fields with the same name, and have no domain relationship, do not use inheritance. Quite often, it pays off to ask the question of "reason of change" in the design process: If class A gets an extra field, would you expect that class B gets it as well? If that's true, the objects share a relationship, and inheritance is a good idea. If not, suffer the little repetition to keep distinct concepts in distinct code.
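For illustration only (a minimal Python sketch, not taken from the answer above), the "delegation" alternative keeps the shared fields in a small component that the otherwise unrelated classes hold, rather than chaining them with is-a; the class and field names follow the question:

class Description:
    """Shared data extracted into a small component."""
    def __init__(self, name, description):
        self.name = name
        self.description = description

class A:
    def __init__(self, name, description):
        self.details = Description(name, description)   # has-a, not is-a

class C:
    def __init__(self, name, description, count, image_url):
        self.details = Description(name, description)   # reuse without inheritance
        self.count = count
        self.image_url = image_url

If A later gains a field that C should not have, nothing breaks - which is exactly the "reason of change" test described above.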
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187133", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66705/" ] }
187,159
if(true/false){ //if condition is true }else { // if condition is false } Everyone knows about the structure of if-else. But what is if actually? Is it a class method, a static method, or a keyword? I'm a little confused about this.
It seems to me that you are asking two different questions: How does if-else work internally? What is if in terms of high-level programming? This answer is for question 1. For question 2 see other answers. Gustav Bertram's answer seems very clear. Before your code gets executed, it must be translated into machine code. The translation is done by either a compiler (like a C compiler), an interpreter (like the Ruby interpreter) or a virtual machine (like with Java). Control structures like if-else are translated into a pattern of machine code instructions. The actual pattern depends on the target machine and the compiler/interpreter/virtual machine. An if-else-structure like if (x >= y) { // main branch } else { // else branch } could be translated into code (here some kind of pseudo-assembly) like cmp (r1),(r2) ;compare words at memory addresses pointed by registers r1 and r2 blt else ;jump to else-part if (r1) is less than (r2) ... ;insert the main branch code here bra end_if ;jump to end_if else: ... ;insert the else branch code here end_if: The actual machine code consists of just binary data but it is structurally similar to assembly code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187159", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81543/" ] }
187,169
On my filesystem (Windows 7) I have some text files (these are SQL script files, if that matters). When opened with Notepad++, in the "Encoding" menu some of them are reported to have an encoding of "UCS-2 Little Endian" and some of "UTF-8 without BOM". What is the difference here? They all seem to be perfectly valid scripts. How could I tell what encoding the files have without Notepad++?
Files generally indicate their encoding with a file header. There are many examples here. However, even reading the header you can never be sure what encoding a file is really using. For example, a file with the first three bytes 0xEF,0xBB,0xBF is probably a UTF-8 encoded file. However, it might be an ISO-8859-1 file which happens to start with the characters ï»¿. Or it might be a different file type entirely. Notepad++ does its best to guess what encoding a file is using, and most of the time it gets it right. Sometimes it does get it wrong though - that's why that 'Encoding' menu is there, so you can override its best guess. For the two encodings you mention: The "UCS-2 Little Endian" files are UTF-16 files (based on what I understand from the info here) so probably start with 0xFF,0xFE as the first 2 bytes. From what I can tell, Notepad++ describes them as "UCS-2" since it doesn't support certain facets of UTF-16. The "UTF-8 without BOM" files don't have any header bytes. That's what the "without BOM" bit means.
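A rough Python sketch of the header check described above (the file name is just an example); note that, for exactly the reason given, the result can only ever be a guess:

def guess_encoding(path):
    with open(path, "rb") as f:
        start = f.read(4)
    if start.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"          # UTF-8 with BOM
    if start.startswith(b"\xff\xfe") or start.startswith(b"\xfe\xff"):
        return "utf-16"             # what Notepad++ labels "UCS-2"
    return "unknown (no BOM; could be UTF-8 without BOM, ANSI, ...)"

print(guess_encoding("script.sql"))  # file name is just an example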
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187169", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73447/" ] }
187,403
I have always been using this method: from sys import argv and then use argv as just argv. But there is a convention of using this: import sys and using argv as sys.argv. The second method makes the code self-documenting and I (really) adhere to it. But the reason I prefer the first method is that it is faster, because we import only the name that is needed rather than importing the whole module (which contains more functions that are useless to me and that Python would waste time importing). Note that I need just argv, and all other functions from sys are useless to me. So my questions are: Does the first method really make the script faster? Which method is preferred most? Why?
Importing the module doesn't waste anything ; the module is always fully imported (into the sys.modules mapping), so whether you use import sys or from sys import argv makes no odds. The only difference between the two statements is what name is bound; import sys binds the name sys to the module (so sys -> sys.modules['sys'] ), while from sys import argv binds a different name, argv , pointing straight at the attribute contained inside of the module (so argv -> sys.modules['sys'].argv ). The rest of the sys module is still there, whether you use anything else from the module or not. There is also no performance difference between the two approaches. Yes, sys.argv has to look up two things; it has to look up sys in your global namespace (finds the module), then look up the attribute argv . And yes, by using from sys import argv you can skip the attribute lookup, since you already have a direct reference to the attribute. But the import statement still has to do that work, it looks up the same attribute when importing, and you'll only ever need to use argv once . If you had to use argv thousands of times in a loop, it could perhaps make a difference, but in this specific case it really does not. Hence, the choice between one or the other should be based solely on coding style . In a large module, I'd certainly use import sys ; code documentation matters, and using sys.argv somewhere in a large module makes it much clearer what you are referring to than just argv ever would. If the only place you use argv is in a '__main__' block to call a main() function, by all means use from sys import argv if you feel happier about that: if __name__ == '__main__': from sys import argv main(argv) I'd still use import sys there myself. All things being equal (and they are, exactly, in terms of performance and number of characters used to write it), that is just easier on the eye for me. If you are importing something else altogether, then perhaps performance comes into play. But only if you use a specific name in a module many times over , in a critical loop for example. But then creating a local name (within a function) is going to be faster still: import somemodule def somefunction(): localname = somemodule.somefunctionorother while test: # huge, critical loop foo = localname(bar)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187403", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80796/" ] }
187,445
I'll be specific: Java 8 is promised to bring lambda expressions as well as method and constructor references among other things. As a Java developer I'm super psyched about that. In my day to day programming I see more and more opportunities where using these features would greatly simplify code that would otherwise be very verbose and tedious. In lieu of method and constructor references I started using more and more reflection and plan to migrate those code paths to Java 8 as soon as possible. I use special comments (like the well-known TODO comments: JAVA8) that can be used by the IDE or grepped easily in order to find the relevant places. I also test all those cases extensively to make sure they work. But still I have to wonder whether it's good to do it like that. Is it acceptable to produce a little more brittle code now that will eventually become robust again? GA for Java 8 is September 2013 so it's not too far in the future (provided the release date doesn't slip). A kinda general example would be something like this: I want to create some container objects and fill them with data from a database. If I were to use the standard Java approach, it could look like this: class ContainerService { private Database database; private final Map<Class<?>, ContainerInitializer> INITIALIZERS = new HashMap<>(); { INITIALIZERS.put(Foo.class, new FooInitializer()); } public Container getContainer(Class<?> cls) { return INITIALIZERS.get(cls).create(); } interface ContainerInitializer { Container create(); } class FooInitializer implements ContainerInitializer { Container create() { return new Container(database.getFoo()); } } } The reflective code is class ContainerService { private Database database; private final Map<Class<?>, String> INITIALIZERS = new HashMap<>(); { INITIALIZERS.put(Foo.class, "getFoo"); } public Container getContainer(Class<?> cls) { Method m = Database.class.getMethod(INITIALIZERS.get(cls)); return new Container(m.invoke(database)); } } Note how all the intermediate interfaces and classes fall away. The Java 8 variant is something along the following lines: class ContainerService { private Database database; private final Map<Class<?>, ContainerInitializer> INITIALIZERS = new HashMap<>(); { INITIALIZERS.put(Foo.class, database::getFoo); } public Container getContainer(Class<?> cls) { return new Container(INITIALIZERS.get(cls).create()); } private interface ContainerInitializer { Container create(); } } This is slightly longer again but has type safety. Also it's trivial to get from the prepared code to the final code using method references. Of course the example is a bit simple. Imagine having a lot of types the container could contain. In the first method, there would be an extra class for each of them. In the other two methods, only data has to be added. It keeps everything so much simpler.
Don't fall for the chant of sirens. Their song talks about new features and performance improvements, but all you get by listening to it is a never-ending stream of pain, delays, procrastination, and deployment woes. Write for what exists. The rest is mere speculation.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187445", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57674/" ] }
187,457
In programming, what is called the Principle of Least Astonishment? How is this concept related to designing good APIs? Is this something applicable only to object-oriented programming, or does it permeate other programming techniques as well? Is this related to the principle of "doing a single thing in your method and doing it well"?
The Principle of Least Astonishment is applicable to a wide range of design activities - and not just in computing (though that is often where the most astonishing things happen). Consider an elevator with a button next to it that says "call". When you press the button, the payphone rings (rather than calling the elevator to that floor). This would be considered astonishing. The correct design would be to put the call button next to the phone rather than the elevator. Next, think of a web page that has a pop up window that shows a windows style error with an 'ok' button on it. People click the 'ok' button thinking it is for the operating system and instead go to another web page. This astonishes the user. When it comes to an API... Think about a toString() method that instead of printing out the fields returns back "to be implemented". An equals() method that works on hidden information. Sometimes people try to implement a sorted list class by changing the add method to call sort() on the array afterwards - which is astonishing because the add method is supposed to append to the list - this is especially astonishing when one gets back a List object with no knowledge that somewhere deep inside, someone violated the interface contract. Having a method that does one distinct thing contributes to reduction of astonishment, however these are separate principles in API design. The four principles often touted as "good API design" are (from this pdf - just one instance of such a presentation. The links at the end of this particular one make for good reading): Single responsibility principle Open closed principle DRY Principle of least astonishment It is potentially astonishing for someone to have a class that tries to do everything - or needing two classes to do a single thing. It is likewise potentially astonishing for someone to mess with the internals in odd ways under the covers (I find open classes in Ruby to be a source of never-ending astonishment). It is also likewise astonishing to find two methods that do apparently the same thing. As such, the principle of least astonishment underlies the other API designs - but it, itself, is not sufficient to simply say "don't have an astonishing API." Further reading (from the UI perspective) - an IBM developer blog titled The cranky user: The Principle of Least Astonishment
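A small Python sketch of the sorted-list example above (illustrative only; the class names are invented): the two classes behave identically, but one name states the contract while the other hides a surprise inside add():

class SurprisingList(list):
    def add(self, item):
        self.append(item)
        self.sort()              # astonishing side effect hidden in add()

class SortedList(list):
    """The name states the contract, so sorting on insert is no longer a surprise."""
    def add(self, item):
        self.append(item)
        self.sort()

items = SurprisingList([3, 1])
items.add(2)
print(items)   # [1, 2, 3] -- surprising if you assumed add() just appends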
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187457", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60189/" ] }
187,478
We had a disagreement in a code review. What I had written: if(unimportantThing().isGood && previouslyCalculatedIndex != -1) { //Stuff } if(otherThing().isBad && previouslyCalculatedIndex != -1) { //Otherstuff } if(yetAnotherThing().isBad) { //Stuffystuff } The reviewer called that ugly code. This is what he expected: if( previouslyCalculatedIndex != -1) { if(unimportantThing().isGood) { //Stuff } if(otherThing().isBad) { //Otherstuff } } if(yetAnotherThing().isBad) { //Stuffystuff } I'd say it's a pretty trivial difference and that the complexity of adding another layer of branching is just as bad as one or two logical-ands. But just to check myself, is this really a grievous coding sin that you would take a firm stance over? Do you always pull out the common cases in your if statements and branch on them separately, or do you add logical-ands to a couple of if statements?
There are two reasons I would prefer the second over the first. The first is repetition: You repeat yourself which can lead to more difficult to maintain code. The second is that I think the second is more clear on when each block of code is executed. If I am debugging and I know that previouslyCalculatedIndex != -1 , then I can immediately skip two whole blocks of code in the second one by just checking one conditional. In the first, I am forced to check two conditionals. Double the work for the same outcome. Another thing to consider is that if statements are a big place to cause bugs. The longer the conditionals (the more && and || you use), the more chances you have of creating a bug by getting it wrong. Reducing the complexity of your conditionals can make it easier to understand later as well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187478", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26034/" ] }
187,492
I just started playing around with async/await in .Net 4.5. One thing I'm initially curious about: why is the async keyword necessary? The explanation I read was that it is a marker so the compiler knows a method awaits something. But it seems like the compiler should be able to figure this out without a keyword. So what else does it do?
There are several answers here, and all of them talk about what async methods do, but none of them answer the question, which is why async is needed as a keyword that goes in the function declaration. It's not "to direct the compiler to transform the function in a special way"; await alone could do that. Why? Because C# already has another mechanism where the presence of a special keyword in the method body causes the compiler to perform extreme (and very similar to async/await ) transformations on the method body: yield . Except that yield isn't its own keyword in C#, and understanding why will explain async as well. Unlike in most languages that support this mechanism, in C# you can't say yield value; You have to say yield return value; instead. Why? Because it was added in to the language after C# already existed, and it was quite reasonable to assume that someone, somewhere, might have used yield as the name of a variable. But because there was no pre-existing scenario in which <variable name> return was syntactically correct, yield return got added to the language to make it possible to introduce generators while maintaining 100% backwards compatibility with existing code. And this is why async was added as a function modifier: to avoid breaking existing code that used await as a variable name. Since no async methods already existed, no old code is invalidated, and in new code, the compiler can use the presence of the async tag to know that await should be treated as a keyword and not an identifier.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187492", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/66745/" ] }
187,512
My company is using Git, and is using a peculiar branching scheme - work is done in master, and branches are reserved for releases. This works fine, so long as all of the work done in an iteration makes it into the branch, but if a critical production issue comes up, we have to ensure that the work somehow makes it into both branches. Lately, we've been having some "fun" with those branches. It's been an administrative headache, ensuring that all of the work makes it into every branch, and some bugs which have been fixed on one branch don't make it into master until someone points it out, which is concerning. I came across Git Flow a while back, and I feel that it would be a solution to our problem - code not percolating all the way to the release, or all the way back down. The only catch is that my lead stated that this sort of development was an anti-pattern - developing furiously for two weeks, then spending three to resolve the merge conflicts. I'm not entirely sure I agree, and since I brought it up, work has resumed like normal. Only recently have we had some major pain points with this. I'd like to know - why would this sort of development scheme be seen as an anti-pattern? Is it really an anti-pattern?
He's mostly referring to the feature branches side of the model. Feature branches were declared an anti-pattern a long time ago when the branches lasted for months and version control systems couldn't merge to save their life. Feature branches that last a week or two have much fewer issues, especially if you're continually merging from develop into the feature branch during that time. Anything much longer than that is still not recommended. Even if you don't use the feature branch side of git flow, the other parts are useful in ensuring you get clean merges and your changes are propagated in the right direction.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187512", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54997/" ] }
187,548
The very simple piece of C++ code below is incorrect; it's easy to see why, and tools like Valgrind will tell you. In running several C++ programs containing this kind of error, I noticed that each time, it ended up with a segmentation fault at the line which tries to use the address. So my question is: is it safe to claim that trying to use an address before it's allocated will inevitably lead to a segmentation violation at the corresponding line? class ClassType { public:int data_; }; .... // Using address before it's allocated ClassType * ClassType_ptr; int x = ClassType_ptr->data_;
No, absolutely not . If that were what invariably happens, we could use that to our advantage and specify it in the standard; known behaviour, even if it is a crash, is virtually always better than unknown behaviour. But instead, the system response depends on details of the implementation, of the previous actions of the program, on the state of the runtime etc. in an unpredictable way. On modern operating systems referencing an address that doesn't belong to you may usually trigger a segmentation violation, but definitely not always. A segmentation violation might occur, but not until much later. Even worse, your program might appear to work but silently do the wrong thing. That's why the language standard has to take the worst possible option and declare that the behaviour is undefined . (Note that goodness depends on your viewpoint. For the compiler implementor, undefined behaviour is good because it means that whatever you do is, by definition, right. For the application programmer it is bad, because it leads to application errors, and even to the particularly insidious kind of errors where things sometimes work and then fail spectacularly at the least opportune moment.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187548", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80043/" ] }
187,613
I read that Liskov's substitution principle is violated if: Preconditions are strengthened, or Postconditions are weakened. But I don't yet fully get how these two points would violate the Liskov substitution principle. Can someone please explain with an example? Specifically, how would any one of the above conditions cause a situation where a subclass object cannot be substituted for a superclass object?
Assume your base class works with a member int. Now your subtype requires that int to be positive. This is a strengthened precondition, and now any code that worked perfectly fine before with negative ints is broken. Likewise, assume the same scenario, but the base class used to guarantee that the member would be positive after being called. Then the subtype changes the behavior to allow negative ints. Code that works on the object (and assumes that the post-condition is a positive int) is now broken since the post-condition is not upheld. These are of course trivial examples, but the concept holds. Stuff like leaving a file/database connection open is an example of a weakened post-condition that leads to issues.
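A minimal Python sketch of the strengthened-precondition case (the names are invented for the example):

class Counter:
    def set_value(self, value):
        # contract: any int is accepted
        self.value = value

class PositiveCounter(Counter):
    def set_value(self, value):
        if value < 0:
            raise ValueError("value must be positive")   # stronger precondition
        self.value = value

def reset_to(counter, value):
    counter.set_value(value)      # written against Counter's contract

reset_to(Counter(), -5)           # fine
reset_to(PositiveCounter(), -5)   # raises -- substitution is broken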
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187613", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60189/" ] }
187,702
I'm pretty new on our developers team. I need some strong arguments and/or "pitfall" examples, so my boss will finally understand the advantages of Unobtrusive JavaScript, so that he, and the rest of the team, stop doing things like this: <input type="button" class="bow-chicka-wow-wow" onclick="send_some_ajax(); return false;" value="click me..." /> and <script type="text/javascript"> function send_some_ajax() { // bunch of code ... BUT using jQuery !!! } </script> I suggested using a pretty common pattern: <button id="ajaxer" type="button">click me...</button> and <script type="text/javascript"> // since #ajaxer is also delivered via ajax, I bind events to document // -> not the best practice but it's not the point.... $(document).on('click', '#ajaxer', function(ev) { var $elem = $(this); ev.preventDefault(); }); The reason why my boss (and others) do not want to use this approach is that the Event-Inspection in FireBug (or Chrome Dev Tools) isn't simple anymore, e.g. with <input type="text" name="somename" id="someid" onchange="performChange()"> he can immediately see what function executes on the change event and jump right to it in a huge JS file full of spaghetti-code. In the case of Unobtrusive JavaScript the only thing he would see is: <input type="text" name="somename" id="someid" /> and he has no idea whether some events, if any, were bound to this element and which function will be triggered. I was looking for a solution and found it: $(document).data('events') // or .. $(document).data('events').click but, this "approach" caused it to take "too long ..." to find out which function fires on which event, so I was told to stop binding events like that. I'm asking you for some examples or strong advantages or any other kind of suggestions for "Why we should use UJS". UPDATE: the suggestion to "change the job" is not an ideal solution. UPDATE 2: Ok, I not only suggested using jQuery event binding, I did so. After I wrote all the event delegation, the Boss came to me and asked me why I was doing event delegation with a different approach - an approach he didn't know. I mentioned some obvious benefits, like - There are 15 input-fields and all of them have an onchange event (not only that, some of them also have onkeyup). So it's more pragmatic to write this kind of event delegation once for ALL input-fields, instead of doing it 15 times, especially if all of the HTML will be rendered with PHP's echo -> echo '... <input type="text" id="someid" ... />...'
Stop using buzzwords and try making strong arguments instead. Even the Wikipedia page for Unobtrusive JavaScript says that the term isn't formally defined. It may be a blanket term for a number of good ideas, but if it sounds like a mere fad or fashion your boss and coworkers won't pay a lot of attention. Worse, if you keep going on about something they view as useless, they may start to discount completely unrelated ideas that you advocate. Figure out why you think the Unobtrusive JavaScript ideas are important. Is it because you read that they're considered best practices? Have you run into problems that you can attribute to not following these ideas? How much will adopting these ideas impact the company's bottom line? Get inside your boss's head. Why does he do things the way he does, and why has he rejected your changes? He's probably not stupid, and he's probably more experienced than you are, so he probably has some good reasons for doing things his way. You've already alluded to some of them -- follow that thread to its end. Then figure out if and how your suggestions will improve the things that he's most concerned about. Hint 1: Managers like money. The more directly you can tie your suggestions to increased income or reduced spending, the more impact your argument will have. Hint 2: Time is money. Hint 3: By itself, the aesthetic appeal of code doesn't make money. The opposite, in fact -- it takes time to make code look nice. Ugly code that works well is just fine with most managers. Don't just talk about it, do it . If you're lucky enough to have a small, self-contained project come your way, ask your manager if you can do it using the "UJS" style as a sort of demonstration. When you're ready, hold a code review where you can explain how it all works and why you think it's better than the current style. Idealism is great, but don't let it get in the way. It's more important to be seen as someone who gets things done than as a dilettante who complains about uncouth coding style. This is especially true if you're the new guy. If you can't get traction with UJS now, get some good work done instead and slowly build a case for the changes you'd like to make as you gain credibility. Read Switch: How to Change Things When Change is Hard . It'll give you a lot more good ideas about how to build your case effectively.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187702", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15680/" ] }
187,944
Specifying a suffix of Exception on exception classes feels like a code smell to me (redundant information - the rest of the name implies an error state and it inherits from Exception). However, it also seems that everyone does it and it seems to be good practice. I am looking to understand why this is good practice. I have already seen and read the question why do exceptions usually have the suffix exception in the class name. That question is for PHP, although the responses are probably valid for Java as well. Are there any other arguments or is it really as simple as explicitly differentiating them? If we take the examples from the previous question - could there really be classes in Java with the name FileNotFound that are not exceptions? If there could be, does it warrant suffixing it with Exception? Looking at a quick hierarchy in Eclipse of Exception, sure enough, the vast majority of them do have the suffix of exception, but there are a few exceptions. javassist is an example of a library that seems to have a few exceptions without the suffix - e.g. BadByteCode, BadHttpRequest etc. BouncyCastle is another lib with exceptions like CompileError. I've googled around a bit as well with little info on the subject.
Landei's answer is a good one, but there's also the grammatical answer. Class names should be nouns . What is an "OutOfMemory"? What is a "FileNotFound"? If you think of "Exception" as the noun, then the descriptor is the adjective specifying it. It's not just any Exception , it's a FileNotFoundException . You shouldn't need to catch an OutOfMemory any more than you'd go to the store to buy a "blue". This also shows up if you read your code as a sentence: " Try doing ..., and catch OutOfMemory Exceptions "
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187944", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/78901/" ] }
187,963
I have heard of several situations of people using, say, JavaScript or Python (or something), inside a program written in C#. When would using a language like JavaScript to do something in a C# program be better than just doing it in C#?
When you have behavior that you don't want to have to recompile the program in order to change. This is exactly why so many games use Lua as a scripting/modding language.
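A rough sketch of the same idea with a Python host (Python is not compiled, but the separation is the point; the file name and the on_hit hook are invented for the example):

import runpy

def load_mod(path):
    # executes the external script and returns its top-level names
    return runpy.run_path(path)

# the tweakable behaviour lives outside the host application, so it can be
# edited and reloaded without touching -- or redeploying -- the host itself
mod = load_mod("mods/double_damage.py")   # hypothetical mod file
damage = mod["on_hit"](base_damage=10)    # behaviour defined by the modder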
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187963", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82158/" ] }
187,996
My understanding from small MVC applications is that you have the front end, which deals with HTML, JS, jQuery, etc, and you have the back end, which consists of your controllers and models. However, when I talk to developers from large companies, they often mention having a frontend tier and a backend tier. So sometimes, I might hear that they have a frontend with C# and a backend with Java. Why would any company want a backend and frontend in different languages? Does this help the large website scale better? When people say that their frontend is built in C#, does this mean that they are using a framework for the frontend (like .NET) and an additional framework on the backend (such as Spring)? Or does it mean something entirely different?
"Front-end" and "Back-end" can be nebulous terms, particularly in enterprise applications. "Front-end" can mean the UI, or it can be the entire application. "Back-end" can be used to mean the internals, or it could be the database or external services that are consumed. What the terms mean often depend entirely who are you talking to. So did you maybe ask "hey, what do you mean by that?" When you get into large enterprise development, you are going to have lots and lots of teams writing lots and lots of code. These teams will be developing in different languages, using different paradigms, from different locations. Some of this code will need to work together and much of it will not. I work for a large bank. My team develops our application in C#. All of it. But we consume web services that are largely written in Java, and those services talk to other services that talk to other services that get the account data to and from the appropriate data stores, and who knows what languages are used with those. Short version: People use the tools that get the job done.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/187996", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75878/" ] }
188,131
In an OS book, I just read that "Public APIs are forever: Only one chance to get it right". Is it true? Is it applicable only to APIs of operating systems, or to other APIs too? For example, will this be true for the APIs of Android applications such as Tasker, Locale and Pushover?
It is generally true for any public API, yes. Once you expose an API to the public and people start to build applications that depend on that API, it becomes extremely difficult to change the API because doing so will break all those applications. That tends to be both a difficult technical problem and a difficult political problem. Of course, it is possible to change a public API. It does happen, for example, that projects will deprecate an API in one release, introduce a new API, and then remove the old API in some future release. But that assumes that every (important) application that uses the old API will be rewritten to use the new API before the old API is removed. That often takes multiple years. And that means that the owner of the public API is imposing a cost on every other project that consumes the API. Since there are generally far more consumers of an API, those consumers tend to be a relatively powerful political lobby.
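A minimal Python sketch of that deprecate-then-remove cycle (the function names are invented for the example):

import warnings

def fetch_items():                    # the new API
    return ["..."]

def get_items():                      # the old public API, kept around for a few releases
    warnings.warn(
        "get_items() is deprecated; use fetch_items() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return fetch_items()              # old name forwards to the new one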
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188131", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
188,299
I have a private method in my test class that constructs a commonly used Bar object. The Bar constructor calls someMethod() method in my mocked object: private @Mock Foo mockedObject; // My mocked object ... private Bar getBar() { Bar result = new Bar(mockedObject); // this calls mockedObject.someMethod() } In some of my test methods I want to check someMethod was also invoked by that particular test. Something like the following: @Test public void someTest() { Bar bar = getBar(); // do some things verify(mockedObject).someMethod(); // <--- will fail } This fails, because the mocked object had someMethod invoked twice. I don't want my test methods to care about the side effects of my getBar() method, so would it be reasonable to reset my mock object at the end of getBar() ? private Bar getBar() { Bar result = new Bar(mockedObject); // this calls mockedObject.someMethod() reset(mockedObject); // <-- is this OK? } I ask, because the documentation suggests resetting mock objects is generally indicative of bad tests. However, this feels OK to me. Alternative The alternative choice seems to be calling: verify(mockedObject, times(2)).someMethod(); which in my opinion forces each test to know about the expectations of getBar() , for no gain.
I believe this is one of the cases where using reset() is OK. The test you are writing is testing that "some things" triggers a single call to someMethod(). Writing the verify() statement with any different number of invocations can lead to confusion. atLeastOnce() allows for false positives, which is a bad thing as you want your tests to always be correct. times(2) prevents the false positive, but makes it seem like you are expecting two invocations rather than saying "I know the constructor adds one". Furthermore, if something changes in the constructor to add an extra call, the test now has a chance for a false positive. And removing the call would cause the test to fail because the test is now wrong, rather than because what is being tested is wrong. By using reset() in the helper method, you avoid both of these issues. However, you need to be careful that it will also reset any stubbing you have done, so be warned. The major reason reset() is discouraged is to prevent bar = mock(Bar.class); //do stuff verify(bar).someMethod(); reset(bar); //do other stuff verify(bar).someMethod2(); This is not what the OP is trying to do. The OP, I'm assuming, has a test that verifies the invocation in the constructor. For this test, the reset allows for isolating this single action and its effect. This is one of the few cases where reset() can be helpful. The other options that don't use it all have cons. The fact that the OP made this post shows that he is thinking about the situation and not just blindly utilizing the reset method.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188299", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73646/" ] }
188,316
I've been reading a bit about Literate Programming recently, and it got me thinking... Well-written tests, especially BDD-style specs, can do a better job at explaining what code does than prose does, and have the big advantage of verifying their own accuracy. I've never seen tests written inline with the code that they test. Is this just because languages don't tend to make it simple to separate application and test code when written in the same source file (and nobody's made it easy), or is there a more principled reason that people separate test code from application code?
The only advantage I can think of for inline tests would be reducing the number of files to be written. With modern IDEs this really isn't that big a deal. There are, however, a number of obvious drawbacks to inline testing: It violates separation of concerns . This may be debatable, but to me testing functionality is a different responsibility than implementing it. You'd either have to introduce new language features to distinguish between tests/implementation, or you'd risk blurring the line between the two. Larger source files are harder to work with: harder to read, harder to understand, you're more likely to have to deal with source control conflicts. I think it would make it harder to put your "tester" hat on, so to speak. If you're looking at the implementation details, you'll be more tempted to skip implementing certain tests.
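For a concrete picture of what inline tests can look like, Python's doctest module is one existing mechanism that embeds runnable examples directly in the production source (a minimal sketch; the median function is invented for the example):

def median(values):
    """Return the middle value of a sorted copy of `values`.

    >>> median([3, 1, 2])
    2
    >>> median([4, 1, 2, 3])
    2.5
    """
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # runs the examples embedded in the docstrings

Even in this tiny case the source file grows noticeably, which illustrates the second drawback listed above.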
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188316", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82461/" ] }
188,381
Today I experienced a first in a technical interview. The candidate refused to use the whiteboard to solve an algorithm question, as I requested. There was no sort of disability at play or anything (outside of nervousness). He simply said that he is uncomfortable using a whiteboard for difficult questions. Oddly enough, we were able to work through it with me standing over his shoulder looking at his notepad. He even communicated his thoughts effectively enough for me to help him over the inevitable bumps. After this I asked him how he felt about collaborative work (as collaboration here is heavy) and he said he loves it. I asked him if he likes to get together with other developers to hash over problems on the whiteboard to which he said yes. Is this some sort of red flag or am I just reading too much into this? In our environment, collaboration is a must. ADDITIONAL DETAIL: The candidate was being evaluated for a lead development position, in which case he spends much of his time communicating with his developers and less time coding than an individual contributor.
I wouldn't be too concerned about it. You aren't hiring him to work on a whiteboard; you're hiring him to work at a keyboard. The whiteboard is an in-interview technique to help demonstrate his competence. If that doesn't work well for him, but he's able to demonstrate his competence in other ways, then that's an irrelevant implementation detail. From what you've written, he seems to be good at communicating and working through problems, and you noted that he was able to accomplish the required work on a notepad. This solves the same problem as the whiteboard does: it gives the candidate somewhere to work through the process more slowly than typing, and without a Backspace key, while the interviewer watches to get a feel for their thought process. From what's written here, I don't see any good reason not to hire him based on this.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188381", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39757/" ] }
188,404
My boss found out I'm not as smart as he thought. An example from my experience: I'm a junior programmer, and I work in a team of two, my boss (senior programmer) and myself. I was tasked with developing an internal web-application for the company we work at. I wrote the back-end to the front-end (the database design was already in place and the server technology had been chosen). He would periodically check on my progress by observing the web-application in action and was happy with how it was coming along. When I finished the web-app he was pleased with how well the end-product turned out. A few days ago he became interested in the code so I told him what technologies I used (for the front-end), and this is where it went south. For the front-end of the web-app I used a Javascript framework (Backbone.js). When asked why I would do such a thing, my response was that I felt the framework fit this app quite well, and would help me structure the code better than if I wrote it from scratch...."Well, that's disheartening" was his response. So given this example my question is: If you're a senior programmer and have lost confidence in the ability of your junior programmer, what would you like to see from your junior to gain the confidence back? EDIT : Thank you everyone for the great answers and supportive feedback!
If he liked the product you built, but is hung up on your use of Backbone, you both need to have a conversation about the desired tech stack. As developers, we ought to use tools that are readily available and that, consequently, keep our work flowing smoothly. If he expected you to build the front-end from scratch, he should have been explicit and had good reason. The fact that he initially enjoyed the product is proof enough that you did well and are "smart" enough. tl;dr You did well. Talk to your senior and see what he expects from you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188404", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69063/" ] }
188,455
The definition of "C-Style language" can practically be simplified down to "uses curly braces ( {} )." Why do we use that particular character (and why not something more reasonable, like [] , which doesn't require the shift key, at least on US keyboards)? Is there any actual benefit to programmer productivity that comes from these braces, or should new language designers look for alternatives (e.g. the guys behind Python)? Wikipedia tells us that C uses said braces, but not why. A statement in the Wikipedia article on the List of C-based programming languages suggests that this syntax element is somewhat special: Broadly speaking, C-family languages are those that use C-like block syntax (including curly braces to begin and end the block)...
Two of the major influences on C were the Algol family of languages (Algol 60 and Algol 68) and BCPL (from which C takes its name). BCPL was the first curly bracket programming language, and the curly brackets survived the syntactical changes and have become a common means of denoting program source code statements. In practice, on limited keyboards of the day, source programs often used the sequences $( and $) in place of the symbols { and }. The single-line '//' comments of BCPL, which were not taken up in C, reappeared in C++, and later in C99. From http://www.princeton.edu/~achaney/tmve/wiki100k/docs/BCPL.html BCPL introduced and implemented several innovations which became quite common elements in the design of later languages. Thus, it was the first curly bracket programming language (one using { } as block delimiters), and it was the first language to use // to mark inline comments. From http://progopedia.com/language/bcpl/ Within BCPL, one often sees curly braces, but not always. This was a limitation of the keyboards at the time. The characters $( and $) were lexicographically equivalent to { and } . Digraphs and trigraphs were maintained in C (though a different set for curly brace replacement - ??< and ??>). The use of curly braces was further refined in B (which preceded C). From Users' Reference to B by Ken Thompson: /* The following function will print a non-negative number, n, to the base b, where 2<=b<=10, This routine uses the fact that in the ASCII character set, the digits 0 to 9 have sequential code values. */ printn(n,b) { extern putchar; auto a; if(a=n/b) /* assignment, not test for equality */ printn(a, b); /* recursive */ putchar(n%b + '0'); } There are indications that curly braces were used as shorthand for begin and end within Algol. I remember that you also included them in the 256-character card code that you published in CACM, because I found it interesting that you proposed that they could be used in place of the Algol 'begin' and 'end' keywords, which is exactly how they were later used in the C language. From http://www.bobbemer.com/BRACES.HTM The use of square brackets (as a suggested replacement in the question) goes back even further. As mentioned, the Algol family influenced C. Within Algol 60 and 68 (C was written in 1972 and BCPL in 1966), the square bracket was used to designate an index into an array or matrix. BEGIN FILE F(KIND=REMOTE); EBCDIC ARRAY E[0:11]; REPLACE E BY "HELLO WORLD!"; WRITE(F, *, E); END. As programmers were already familiar with square brackets for arrays in Algol and BCPL, and curly braces for blocks in BCPL, there was little need or desire to change this when making another language. The updated question includes an addendum about productivity for curly brace usage and mentions Python. There are some other resources that study this, though the answer boils down to "It's anecdotal, and what you are used to is what you are most productive with." Because of the widely varying skills in programming and familiarity with different languages, these become difficult to account for. See also: Stack Overflow Are there statistical studies that indicates that Python is "more productive"? Much of the gains would be dependent on the IDE (or lack thereof) that is used. In vi-based editors, putting the cursor over one matching open/close and pressing % will then move the cursor to the other matching character. This was very efficient with C-based languages back in the old days - less so now.
A better comparison would be between {} and begin / end, which were the options of the day (horizontal space was precious). Many Wirth languages were based on a begin and end style (Algol (mentioned above), Pascal (which many are familiar with), and the Modula family). I have difficulty finding any that isolate this specific language feature - the best I can do is show that the curly brace languages are much more popular than begin end languages and that it is a common construct. As mentioned in the Bob Bemer link above, the curly brace was used as shorthand to make it easier to program. From Why Pascal is Not My Favorite Programming Language: C and Ratfor programmers find 'begin' and 'end' bulky compared to { and }. Which is about all that can be said - it's familiarity and preference.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188455", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54164/" ] }
188,521
As a relatively new (self-taught) web developer, I've heard the terms front-end , client-side , back-end , and server-side quite often. To me, front-end and back-end were always synonymous with client-side and server-side, respectively. However, as I've begun working with MVC frameworks like CodeIgniter, I've come across a few instances of front-end referring to basically anything the end user sees (including server-side code), while back-end has referred to anything the end-user doesn't see (including CMSs). Client-side and server-side, to me, are much more concrete in their meanings; they have a very distinct line separating them. Front-end and back-end, on the other hand, do not. In a conversation I remember having with another web developer, he referred to CodeIgniter (in its entirety) as a front-end, and this threw me for a loop. I wasn't sure whether to correct him and say that CodeIgniter was my back-end, or if my definitions of the two terms were completely wrong. Searching for definitions of front- and back-end confused me a bit more in some respects, though they did clarify a few things. I'd just like to know where the lines are drawn between these four terms, and how they piece together in the context of web development (specifically on a LAMP stack).
I don't believe there is a formal definition for those terms, and as you noted there is overlap in some cases. front-end and client-side overlap. server-side and back-end also overlap. If I were to split hairs, I would offer these rough boundaries: client-side is an application that runs at the users' computer. It could be a stand-alone application (more often) or it could refer to a web browser based interface (less likely). front-end also faces the end-user and generally runs in a web browser based interface. I haven't heard of thick clients being referred to as a front-end . back-end refers to processes and services that are running either on another server or in the background of the users' computer. More often than not, it refers to processes that are not on the end users' computer. But the key, as you mentioned, is that the end user is not necessarily aware of the processes running. server-side is an extension of back-end but explicitly reinforces the fact that the processes are running somewhere else and not on the end users' computers. By way of example, and to highlight the confusion between the terms, I'll use Minecraft as an example. Minecraft has a client-side application when you run the jar files locally with your own JVM. front-end if you choose to run the client application in your web browser back-end process that can be running locally on your machine if you are in stand-alone mode server-side process if you choose to log into a server hosting the Minecraft server application. If you dig into some of Minecraft's statistics, you'll see that they simply designate a client and server component to the game; they don't necessarily care where those components are run. To directly answer your questions: Is the term 'Front-End' synonymous with 'Client-Side'? Sort of, but not really. There's a nuance between the terms if you are discussing things outside of the web based world. If you're strictly within the web based world, then yes, they are functionally synonymous. If so, is this always the case? In the web world, I would say yes. In other realms, I would say no as explained in the rough definitions I offered.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188521", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/70316/" ] }
188,609
I was recently discussing with some friends which of the following 2 methods is best to stub return results or calls to methods inside the same class, from other methods of that class. This is a very simplified example. In reality the functions are much more complex. Example: public class MyClass { public bool FunctionA() { return FunctionB() % 2 == 0; } protected int FunctionB() { return new Random().Next(); } } So to test this we have 2 methods. Method 1: Use Functions and Actions to replace the functionality of the methods. Example: public class MyClass { public Func<int> FunctionB { get; set; } public MyClass() { FunctionB = FunctionBImpl; } public bool FunctionA() { return FunctionB() % 2 == 0; } protected int FunctionBImpl() { return new Random().Next(); } } [TestClass] public class MyClassTests { private MyClass _subject; [TestInitialize] public void Initialize() { _subject = new MyClass(); } [TestMethod] public void FunctionA_WhenNumberIsOdd_ReturnsFalse() { _subject.FunctionB = () => 1; var result = _subject.FunctionA(); Assert.IsFalse(result); } } Method 2: Make the members virtual, derive the class, and in the derived class use Functions and Actions to replace the functionality. Example: public class MyClass { public bool FunctionA() { return FunctionB() % 2 == 0; } protected virtual int FunctionB() { return new Random().Next(); } } public class TestableMyClass : MyClass { public Func<int> FunctionBFunc { get; set; } public TestableMyClass() { FunctionBFunc = base.FunctionB; } protected override int FunctionB() { return FunctionBFunc(); } } [TestClass] public class MyClassTests { private TestableMyClass _subject; [TestInitialize] public void Initialize() { _subject = new TestableMyClass(); } [TestMethod] public void FunctionA_WhenNumberIsOdd_ReturnsFalse() { _subject.FunctionBFunc = () => 1; var result = _subject.FunctionA(); Assert.IsFalse(result); } } I want to know which is better, and also WHY? Update: NOTE: FunctionB can also be public
Edited following the original poster's update. Disclaimer: not a C# programmer (mostly Java or Ruby). My answer would be: I would not test it at all, and I do not think you should. The longer version is: private/protected methods are not part of the API; they are basically implementation choices, which you can decide to review, update or throw away completely without any impact on the outside. I suppose you have a test on FunctionA(), which is the part of the class that is visible to the external world. It should be the only one that has a contract to implement (and that could be tested). Your private/protected method has no contract to fulfil and/or test. See a related discussion here: https://stackoverflow.com/questions/105007/should-i-test-private-methods-or-only-public-ones Following the comment: if FunctionB is public, I'll simply test both with unit tests. You may think that the test of FunctionA is not totally "unit" (as it calls FunctionB), but I would not be too worried by that: if the FunctionB test works but the FunctionA test does not, it clearly means that the problem is not in FunctionB's subdomain, which is good enough for me as a discriminator. If you really want to be able to totally separate the two tests, I would use some kind of mocking technique to mock FunctionB when testing FunctionA (typically, returning a fixed known correct value). I lack the C# ecosystem knowledge to advise on a specific mocking library, but you may look at this question.
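For what it's worth, here is what that "mock FunctionB while testing FunctionA" idea can look like in practice. This sketch is in Java with Mockito and JUnit 5 purely as an illustration (it is not a recommendation for the C# code in the question); a "spy" is a partial mock, so FunctionA runs for real while FunctionB is stubbed:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import java.util.Random;
import org.junit.jupiter.api.Test;

class MyClass {
    public boolean functionA() { return functionB() % 2 == 0; }
    protected int functionB() { return new Random().nextInt(); }
}

class MyClassTest {
    @Test
    void functionA_whenNumberIsOdd_returnsFalse() {
        MyClass subject = spy(new MyClass());     // partial mock of the real class
        doReturn(1).when(subject).functionB();    // stub only the internal call
        assertFalse(subject.functionA());         // exercise the public contract
    }
}
```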
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188609", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82942/" ] }
188,721
Frequently, in my programming experience, I need to decide whether I should use float or double for my real numbers. Sometimes I go for float, sometimes I go for double, but really this feels subjective. If I were confronted and had to defend my decision, I would probably not give sound reasons. When do you use float and when do you use double? Do you always use double, and only go for float when memory constraints are present? Or do you always use float unless the precision requirement forces you to use double? Are there some substantial differences regarding the computational complexity of basic arithmetic between float and double? What are the pros and cons of using float or double? And have you ever used long double?
The default choice for a floating-point type should be double . This is also the type that you get with floating-point literals without a suffix or (in C) standard functions that operate on floating point numbers (e.g. exp , sin , etc.). float should only be used if you need to operate on a lot of floating-point numbers (think in the order of thousands or more) and analysis of the algorithm has shown that the reduced range and accuracy don't pose a problem. long double can be used if you need more range or accuracy than double , and if it provides this on your target platform. In summary, float and long double should be reserved for use by the specialists, with double for "every-day" use.
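To make the "reduced range and accuracy" point concrete, here is a small illustrative snippet. It is written in Java, but the underlying IEEE-754 float/double behaviour is the same in C and C++:

```java
public class FloatVsDouble {
    public static void main(String[] args) {
        // float has a 24-bit significand: above 2^24 (about 16.7 million) it can no
        // longer represent every integer, so adding 1 can be silently lost.
        float f = 20_000_000f;
        double d = 20_000_000d;
        System.out.println(f + 1f == f);   // true:  the +1 disappeared
        System.out.println(d + 1d == d);   // false: double still has plenty of headroom

        // Neither type stores 0.1 exactly, but double's error is roughly nine orders
        // of magnitude smaller than float's.
        System.out.println((double) 0.1f); // 0.10000000149011612: what the float really stores
        System.out.println(0.1);           // prints 0.1; double's error only shows much further out
    }
}
```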
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188721", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82767/" ] }
188,765
I have some C++ knowledge and know that pointers are commonly used there, but I've started to look at PHP open source code and I never see code using references in methods. Instead, the code always uses a return value instead of passing the reference to the variable to the method, which then changes that variable's value and just returns it. I have read that using references uses less memory, so why aren't they used in PHP?
Your assertion that references are rarely used is incorrect. As others have already mentioned there's a ton of native functions that use references, notable examples include the array sorting functions and preg_match() / preg_match_all() . If you are using any of these functions in your code, you are also using references. Moving on, references in PHP are not pointers. Since you're coming from a C++ background I can understand the confusion, but PHP references are an entirely different beast, they are aliases to a symbol table . Any performance gains you might have expected from C++ references simply don't apply to PHP references. In fact, in most scenarios passing by value is faster and less memory intensive than passing by reference. The Zend Engine, PHP's core, uses a copy-on-write optimization mechanism that does not create a copy of a variable until it is modified. Passing by reference usually breaks the copy-on-write pattern and requires a copy whether you modify the value or not. Don't be afraid to use references in PHP when you need to, but don't just do it as an attempt to micro-optimize. Remember, premature optimization is the root of all evil . Further reading: PHP references explained Objects and references In PHP (>= 5.0), is passing by reference faster?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188765", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/82811/" ] }
188,860
All over the internet, I see the following advice: A GET should never change data on the server- use a POST request for that What is the basis for this idea? If I make a php service which inserts data in the database, and pass it parameters in the GET query string, why is that wrong? (I am using prepared statements, to take care of SQL Injection). Is a POST request in some way more secure? Or is there some historic reason for this? If so how valid is this advice today?
This is not advice. A GET is defined in this way in the HTTP protocol . It is supposed to be idempotent and safe . As for why - a GET can be cached and in a browser, refreshed. Over and over and over. This means that if you make the same GET again, you will insert into your database again . Consider what this may mean if the GET becomes a link and it gets crawled by a search engine. You will have your database full of duplicate data. I also suggest reading URIs, Addressability, and the use of HTTP GET and POST . There is also a problem with link prefetching in some browsers - they will make a call to pre-fetch links, even if not indicated so by the page author. If, say, your log out is behind a "GET", linked from every page on your site, people can get logged out just due to this behaviour.
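As a sketch of what "don't change state on GET" looks like in code, here is a hedged example using the Java Servlet API purely for illustration (the class name and URL are made up):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// GET only *renders* something; the state change (logging out) happens on POST,
// so refreshes, crawlers and link prefetchers cannot trigger it by accident.
public class LogoutServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        resp.getWriter().println(
            "<form method='post' action='/logout'><button>Log out</button></form>");
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        req.getSession().invalidate();   // the actual side effect lives only here
        resp.sendRedirect("/");
    }
}
```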
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188860", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1426/" ] }
188,895
I'm tinkering with a query abstraction over WebSQL/Phonegap Database API, and I find myself both drawn to, and doubtful of, defining a fluent API that mimics the use of natural English language grammar. It might be easiest to explain this via examples. The following are all valid queries in my grammar, and comments explain the intended semantics: //find user where name equals "foo" or email starts with "foo@" find("user").where("name").equals("foo").and("email").startsWith("foo@") //find user where name equals "foo" or "bar" find("user").where("name").equals("foo").or("bar"); //find user where name equals "foo" or ends with "bar" find("user").where("name").equals("foo").or().endsWith("bar"); //find user where name equals or ends with "foo" find("user").where("name").equals().or().endsWith("foo"); //find user where name equals "foo" and email is not like "%contoso.com" find("user").where("name").equals("foo").and("email").is().not().like("%contoso.com"); //where name is not null find("user").where("name").is().not().null(); //find post where author is "foo" and id is in (1,2,3) find("post").where("author").is("foo").and("id").is().in(1, 2, 3); //find post where id is between 1 and 100 find("post").where("id").is().between(1).and(100); Edit based on Quentin Pradet's feedback: in addition, it seems the API would have to support both plural and singular verb forms, so: //a equals b find("post").where("foo").equals(1); //a and b (both) equal c find("post").where("foo").and("bar").equal(2); For the sake of the question, let's presume that I haven't exhausted all possible constructs here. Let's also presume that I can cover most correct English sentences - after all, the grammar itself is limited to the verbs and conjunctions defined by SQL. Edit regarding grouping: one "sentence" is one group, and the precedence is as defined in SQL: left to right. Multiple groupings could be expressed with multiple where statements: //the conjunctive "and()" between where statements is optional find("post") .where("foo").and("bar").equal(2).and() .where("baz").isLessThan(5); As you can see, the definition of each method is dependent on the grammatical context it is in. For example, the argument to "conjunction methods" or() and and() can either be left out, or refer to a field name or expected value. To me this feels very intuitive, but I would like to hear your feedback: is this a good, useful API, or should I backpedal to a more straightforward implementation? For the record: this library will also provide a more conventional, non-fluent API based on configuration objects.
I think it's very wrong. I study natural language and it's full of ambiguity that can only be resolved with context and a lot of human knowledge. The fact that programming languages are not ambiguous is a very good thing! I don't think you want the meaning of methods to change according to context: This adds more surprises, since you bring in ambiguity. Your users will want to use constructions that you will not have covered, e.g. find("user").where("name").and("email").equals("foo"); It's hard to report errors: what can you do with find("user").where("name").not().is().null(); ? Let's also presume that I can cover most correct English sentences - after all, the grammar itself is limited to the verbs and conjunctions defined by SQL. No, you can't cover most correct English sentences. Others have tried before, and it gets very complicated very quickly. It's called Natural language understanding but nobody really tries that: we're trying to solve smaller problems first. For your library, you basically have two options: either you restrict yourself to a subset of English, which gives you SQL, or you try to cover "English" and you find out that it's not possible due to the ambiguity, complexity and diversity of the language.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/188895", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9974/" ] }
189,191
I was reading this blog by Joel Spolsky about 12 steps to better code . The absence of Test Driven Development really surprised me. So I want to throw the question to the Gurus. Is TDD not really worth the effort?
Test driven development was virtually unknown before Kent Beck's book came out in 2002, two years after Joel wrote that post. The question then becomes why hasn't Joel updated his test, or if TDD had been better known in 2000 would he have included it among his criteria? I believe he wouldn't have, for the simple reason that the important thing is you have a well-defined process, not the specific details of that process. It's the same reason he recommends version control without specifying a specific version control system, or recommends having a bug database without recommending a specific brand. Good teams continually improve and adapt, and use tools and processes that are a good fit for their particular situation at that particular time. For some teams, that definitely means TDD. For other teams, not so much. If you do adopt TDD, make sure it's not out of a cargo cult mentality.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189191", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60189/" ] }
189,202
I've been hired by someone to do some small work on a site. It's a site for a large company. It contains very sensitive data, so security is very important. Upon analyzing the code, I've noticed it's filled with security holes - read, lots of PHP files throwing user get/post input directly into mysql requests and system commands. The problem is, the person who made the site for him is a programmer with family and children who depend on that job. I can't just say: "your site is a script kiddie amusement park. Let me redo it for you and you'll be fine." What would you do in this situation? Update: I followed some good advice here and politely reported to the developer that I've found some possible security flaws on the site. I pointed out the line and said there could be a possible vulnerability for SQL injection attacks there, and asked if he knew about it. He replied: "sure, but I think that to exploit it the attacker should have information on the structure of the database; I have to understand better" . Update 2: I said that's not always the case and suggested he follows this Stack Overflow question link in order to deal with it properly: How to prevent SQL injection in PHP? He said he would study it and thanked me for telling him before. I guess my part is done, thanks guys.
First and foremost here, the priority is to close the security holes. If you're working directly with the engineer who wrote this, document everything and give it to that engineer. If not, tell your employer the security issues are bigger than initially thought and that the site needs a lot of work. Ask to work with the main developer who's on the site, and offer to teach them about PHP security (don't promise to make the person an expert, but do offer to train them in everything you know) so that person can take it over after you're done. Don't make this a "this guy is bad, fire him" issue. Approach it from the perspective of "Hey, I found some potential bugs that need fixing stat, which seem to be coming from some ignorance/common misconceptions about site security. I'd also like to talk to your developer so we can improve your site and hopefully avoid more of these issues in the future."
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189202", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54384/" ] }
189,222
Back in the late 90's I worked quite a bit with a code base that used exceptions as flow control. It implemented a finite state machine to drive telephony applications. Lately I am reminded of those days because I've been doing MVC web apps. They both have Controller s that decide where to go next and supply the data to the destination logic. User actions from the domain of an old-school telephone, like DTMF tones, became parameters to action methods, but instead of returning something like a ViewResult , they threw a StateTransitionException . I think the main difference was that action methods were void functions. I don't remember all the things I did with this fact but I've been hesitant to even go down the road of remembering much because since that job, like 15 years ago, I never saw this in production code at any other job . I assumed this was a sign that it was a so-called anti-pattern. Is this the case, and if so, why?
There's a detailed discussion of this on Ward's Wiki. Generally, the use of exceptions for control flow is an anti-pattern, with many notable situation- and language-specific (see for example Python) cough exceptions cough. As a quick summary of why it's generally an anti-pattern: Exceptions are, in essence, sophisticated GOTO statements. Programming with exceptions therefore leads to code that is more difficult to read and understand. Most languages have existing control structures designed to solve your problems without the use of exceptions. Arguments for efficiency tend to be moot for modern compilers, which tend to optimize with the assumption that exceptions are not used for control flow. Read the discussion at Ward's Wiki for much more in-depth information. See also a duplicate of this question, here.
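A tiny made-up illustration of the "sophisticated GOTO" point, in Java: the same search is written once with an exception as the jump and once with the control structure the language already provides:

```java
public class ExceptionAsGoto {
    // Exception-as-control-flow version: the throw is really just a jump.
    private static final class Found extends RuntimeException {
        final int index;
        Found(int index) { this.index = index; }
    }

    static int indexOfWithException(int[] values, int target) {
        try {
            for (int i = 0; i < values.length; i++) {
                if (values[i] == target) {
                    throw new Found(i);   // "goto" out of the loop
                }
            }
        } catch (Found found) {
            return found.index;
        }
        return -1;
    }

    // The same logic with plain control flow: shorter, clearer, and the reader
    // never has to hunt for the matching catch block.
    static int indexOf(int[] values, int target) {
        for (int i = 0; i < values.length; i++) {
            if (values[i] == target) {
                return i;
            }
        }
        return -1;
    }
}
```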
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189222", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20108/" ] }
189,274
We are considering to impose a single standard code format in our project (auto format with save actions in Eclipse). The reason is that currently there is a big difference in the code formats used by several (>10) developers which makes it harder for one developer to work on the code of another developer. The same Java file sometimes uses 3 different formats. So I believe the advantage is clear (readability => productivity) but would it be a good idea to impose this? And if not, why? UPDATE We all use Eclipse and everyone is aware of the plan. There already is a code format used by most but it is not enforced since some prefer to stick to their own code format. Because of the above reasons some would prefer to enforce it.
I currently work at a place where a standard code format is enforced and the code is automatically formatted when saving the file, just like you are about to do. As a new member of the company I found that the common formatting rules gave me a warm and fuzzy feeling that "these guys know what they are doing", so I couldn't be happier. ;) As a related side note, with the common formatting rules we also enforce certain, rather strict compiler warning settings in Eclipse, with most of them set to Error, many set to Warning, and almost none set to Ignore. I'd say there are two main reasons to enforce a single code format in a project. First has to do with version control: with everybody formatting the code identically, all changes in the files are guaranteed to be meaningful. No more just adding or removing a space here or there, let alone reformatting an entire file as a "side effect" of actually changing just a line or two. The second reason is that it kind of takes the programmers' egos out of the equation. With everybody formatting their code the same way, you can no longer as easily tell who has written what. The code becomes more anonymous and common property, so nobody needs to feel uneasy about changing "somebody else's" code. Those being the main reasons, there are others as well. I find it comforting that I don't have to bother myself with thinking about the code formatting, as Eclipse will do it for me automatically when I save. It's care-free, like writing documents with LaTeX: it's formatted afterwards and you don't have to worry about it while writing. I have also worked in projects where everybody has had their own styles. Then you have to think about stupid and meaningless issues such as if it's OK to modify somebody else's code in your own style, or if you should try to imitate their style instead. The only argument against common code formatting settings that I can think of for your case is that it's apparently an already ongoing project, so it will cause lots of unnecessary changes in all the files, messing up the actual file histories. The best case scenario is if you can start enforcing the settings right from the beginning of a project.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189274", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23347/" ] }
189,455
I have a website e-mail form. I use a custom CAPTCHA to prevent spam from robots. Despite this, I still get spam. Why? How do robots beat the CAPTCHA? Do they use some kind of advanced OCR, or do they just fetch the solution from wherever it is stored? How can I prevent this? Should I change to another type of CAPTCHA? I am sure the e-mails are coming from the form, because they are sent by the e-mail sender that serves the form messages, and the letter style is the same. For the record, I am using PHP + MySQL, but I'm not searching for a solution to this specific problem. I am interested in the general question of how robots beat these technologies; I only described my situation as an example so you can better understand what I'm asking about.
Two easiest ways to get through CAPTCHA: Use human farms, i.e. ask for people to fill CAPTCHAs for money, just like ProTypers does. Use an OCR. There may also be a bug either in the CAPTCHA mechanism itself or the surrounding application, allowing someone to bypass the CAPTCHA. By the way, the W3C article Inaccessibility of CAPTCHA : Alternatives to Visual Turing Tests on the Web explains as well how CAPTCHAs could be compromised: [...] One of the first documented attacks on the system was by a Carnegie Mellon student, who associated CAPTCHA images with access to an adult Web site, thus gaining free human labor to crack the authentication. [...] External projects [...] have shown methodologies and results indicating that many of the systems can be defeated by computers with between 88% and 100% accuracy, using optical character recognition. So how can you prevent those attacks? If you have your custom implemented CAPTCHA, you may try to move to a popular one, like reCAPTCHA . This will help if either your own CAPTCHA was too easy to OCR, or if there was a bug which was successfully exploited. If you use a popular CAPTCHA mechanism, moving to a custom-made one or to another popular one might prevent OCR. Technically, nothing would prevent human farms: you may create animated GIFs where several frames display different text very quickly, and only one frame is actually visible by the user, you may distort or bend text in all directions or find new, alternative ways to prevent OCRs from recognizing text, still humans paid for solving CAPTCHAs will successfully solve them. You may want to move from visual CAPTCHA to sound (if you're not using both already, and you should), but this means that users with hearing impairment would be unable to use your application. FrustratedWithFormsDesigner and GalacticCowboy mentioned in the comments domain-specific CAPTCHAs. I tried to find some material about how effective those are, but without success, so here is just my personal opinion: Domain-specific CAPTCHAs can be hugely annoying when actual users have no idea about the answer. Example: I'm visiting a page on a movies-oriented website. I notice a mistake in an article and want to comment on it to notify the author about the mistake. The comments form asks me, as a CAPTCHAs mechanism, to provide the name of the actress displayed on a photo. I have no idea who is this actress, so the only thing I can do is to leave the website (or spend the next two minutes using Google Images). Another example: a website asks to give a synonym of "mysterious". Easy as it sounds for a non-impaired person who speaks English fluently, it would be impossible to solve without external help for people who don't speak English well or people with some developmental disabilities, not counting the fact that finding synonyms or antonyms is always tricky. Most of those domain-specific problems can be solved programmatically. Both examples I gave are easily solved using external resources (Google Images and Synonyms dictionary). The one about transistors given as an example by FrustratedWithFormsDesigner is better, but still may be probably solved with a custom-made bot. None resist human farms. 
Either they generate data, just like ordinary text CAPTCHAs draw distorted characters, in which case the generation algorithm can itself be exploited to tune the bots, or they find data somewhere, just like reCAPTCHA takes text from scanned books, in which case the bot can use this data against it (for example, if you take words from a dictionary and ask the user to provide synonyms, the bot can use the very same dictionary to reach a 100% success rate).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189455", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58357/" ] }
189,534
Quoted from MSDN about StackOverflowException: The exception that is thrown when the execution stack overflows because it contains too many nested method calls. Too many is pretty vague here. How do I know when too many is really too many? Thousands of function calls? Millions? I assume that it must be related in some way to the amount of memory in the computer, but is it possible to come up with a roughly accurate order of magnitude? I'm concerned about this because I am developing a project which involves heavy use of recursive structures and recursive function calls. I don't want the application to fail when I start using it for more than just small tests.
I'm concerned about this because I am developing a project which involves heavy use of recursive structures and recursive function calls. I don't want the application to fail when I start using it for more than just small tests. Unless your language environment supports tail call optimization (and your recursion is a tail call), a basic rule of thumb is: recursion depth should be guaranteed to be O(log n), i.e. using algorithms or data structures based on divide-and-conquer (like trees, most sorting algorithms, etc.) is OK, but anything linear (like recursive implementations of linked list handling) is not.
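To put rough numbers on that rule of thumb, compare a linear recursion with a divide-and-conquer one. Java is used here for illustration, but the stack-depth argument carries over to .NET:

```java
public class RecursionDepth {
    static final class Node {          // hypothetical singly-linked list cell
        final Node next;
        Node(Node next) { this.next = next; }
    }

    // O(n) recursion depth: a list of some tens of thousands of nodes is typically
    // enough to overflow a default-sized thread stack.
    static int length(Node head) {
        return head == null ? 0 : 1 + length(head.next);
    }

    // O(log n) recursion depth: even for arrays with billions of elements the depth
    // stays around 30 to 40 frames, which any sensible stack size handles easily.
    static int binarySearch(int[] sorted, int key, int lo, int hi) {
        if (lo > hi) return -1;
        int mid = (lo + hi) >>> 1;
        if (sorted[mid] == key) return mid;
        return sorted[mid] < key
                ? binarySearch(sorted, key, mid + 1, hi)
                : binarySearch(sorted, key, lo, mid - 1);
    }
}
```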
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189534", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/38148/" ] }
189,542
Background I revisited an old (but great) site I had not been to for ages - the Alioth Language Shootout ( http://benchmarksgame.alioth.debian.org/ ). I started out programming in C/C++ several years ago, but have since then been working almost exclusively in Java due to language constraints in the projects I have been involved in. Not remembering the figures, I wanted to see, approximately, how well Java fared against C/C++ in terms of resource usage. The execution times were still relatively good, with Java at worst performing 4x slower than C/C++, but on average around (or below) 2x. Due to the nature of the implementation of Java itself, this was no surprise, and it's performance time was actually lower than what I expected. The real brick was the memory allocation - at worst, Java allocated: a whopping 52x more memory than C and 25x more than C++. 52x the memory ... Absolutely nasty, right? ... or is it? Memory is comparatively cheap now. Question: If we do not speak in terms of target platforms with strict limits on working memory (i.e. embedded systems and the like), should memory usage be a concern when picking a general purpose language today? I am asking in part because I am considering migrating to Scala as my primary language. I very much like the functional aspects of it, but from what I can see it is even more expensive in terms of memory than Java. However, since memory seems to be getting faster, cheaper and more plentiful by the year (it seems to be increasingly hard to find a consumer laptop without at least 4GB of DDR3 RAM), could it not be argued that resource management is becoming increasingly more irrelevant as compared to (possibly implementation-wise expensive) high-level language features which allow for faster construction of more readable solutions?
Memory management is utterly relevant since it governs how fast something appears even if that something has a great deal of memory. The best and most canonical example are AAA-title games like Call of Duty or Bioshock. These are effectively real-time applications that require massive amounts of control in terms of optimization and usage. It's not the usage per se that's the issue but rather the management. It comes down to two words: Garbage Collection. Garbage Collection algorithms can cause slight hiccups in performance or even cause the application to hang for a second or two. Mostly harmless in an accounting app but potentially ruinous in terms of user experience in a game of Call of Duty. Thus in applications where time matters, garbage collected languages can be hugely problematic. It's one of the design aims of Squirrel for instance, which seeks to remedy the issue that Lua has with its GC by using reference counting instead. Is it more of a headache? Sure but if you need precise control, you put up with it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189542", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/78847/" ] }
189,554
Backstory I graduated less than a year ago with a degree in Computer Science (with extra courses in software engineering), and another degree in Software Engineering. I'd like to think that I'm familiar with modern software development methodologies (CI, Scrum, XP, etc). The problem I got my first (and current) job at a medium sized game development company. The company has been around for several years, has multiple successful titles under its belt, and is full of talented and dedicated people. However, software development happens in an entirely ad hoc way. We use SVN and a bugtracker, but beyond that there's very little planning, no automated testing, and you'd be hard pressed to find a UML diagram anywhere in the office. I can think of two possible reasons for this: Programmers are simply not given enough time to do things properly. Sadly, upper management has a tendency to set deadlines without consulting the programmers, which has led to many an all-nighter. The programmers themselves lack the education / motivation to employ proper techniques. The question I recently spoke to management about this, and I was encouraged to suggest alternatives. I figure the first thing to do would be to rule out the second possibility listed above: that is, give management some evidence that if the programmers were given more breathing space, they could become much more productive. However, I'm not quite sure how to go about that. My best idea so far is to conduct a survey among programmers about what software development methodologies they know, as well as their attitudes towards code quality and such. tl;dr How can I assess whether our programmers are able/willing to employ modern software development methods, and prove to management that this would lead to increased productivity? EDIT I obviously failed to bring my point across properly, so let me elaborate on the situation a little. First , I am fully aware that this looks like a bright-eyed fresh grad trying to save the world with his l33t education and changing the wicked ways of those grumpy old fossils who just don't know any better. I also understand that there is always a difference between theory and practice, and what they teach you in school is never what happens in real life. That is exactly why I'm being incredibly cautious here, trying to get a feel for the current situation instead of charging in going "Hai guise, let's all do teh unit tests from now on!" Second , @psr raises a very good point. Let me name some of the reasons why I think that there is room for improvement: Our senior developer, who's a bit of a maverick but ultimately a skilled and reasonable guy, agrees that at this point in the company's history, we should have more sophisticated and repeatable development processes in place. One of the programmers who's much more experienced than me and has used modern methods agrees with me that we could be more productive if we used some of them. In some cases, throwing features together during an all-nighter and then hunting down all the bugs takes a ridiculous amount of time; I cannot imagine that doing things any other way could be any worse. The lack of planning, design, refactoring and documentation leads to poor code quality. Perhaps more importantly, it leads to programmers forgetting how their own code works in a matter of months (while they're still working on the same project!), and no way to find out apart from trial and error or wading through the spaghetti. 
Even management suffers , as the only quantifiable measure of progress in a project is the number of unresolved bugs. But since anything is liable to break at any time (due to the reasons above), this is a very unreliable figure. Third , I don't actually believe that my colleagues lack the knowledge of modern methods or the willingness to use them, it's just that at this point, I cannot rule it out as a possibility. Which is kind of the reason why I came here asking what's the best way to find out what they know (besides lengthy personal interviews, obviously). Fourth , don't take those comments about UML and automated testing too seriously, they were just examples. The fact is, code is produced on the immediate whims of designers/upper management without any kind of software design, and only ever tested by a couple guys pressing buttons to see if anything breaks. Perhaps I'm still wearing my university-issued rose tinted glasses, but that strikes me as garage-development at its finest. tl;dr I'm trying really hard not to be arrogant about this, but both my fellow programmers and management seem to be expressing a desire for more predictable and productive ways of developing software. How can I find a solution that could make for an easy transition?
Have you spoken to your development colleagues about this? How do you know they lack education? That's quite a sweeping statement and you'll probably find you're wrong. I don't think it'd go down too well if a new grad started meddling with processes without understanding why they're like that in the first place. Managers love processes and love tracking and love being made look good, so the battle-hardened skeptic in me feels that your manager is interested in your views because it's something they can take credit for. Also, be aware that software development methods in text-books don't always map out like that in the Real World. They're certainly not one-size-fits-all, and can look at the world through rose-tinted spectacles. UML is a load of crap, not having that around isn't that big of a deal.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189554", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10178/" ] }
189,805
I've been struggling with an increasingly annoying problem regarding our unit tests that we are implementing in my team. We are attempting to add unit tests into legacy code that wasn't well designed and while we haven't had any difficulty with the actual addition of the tests we are starting to struggle with how the tests are turning out. As an example of the problem let's say you have a method that calls 5 other methods as part of its execution. A test for this method might be to confirm that a behavior occurs as a result of one of these 5 other methods being called. So, because a unit test should fail for one reason and one reason only, you want to eliminate potential issues caused by calling these other 4 methods and mock them out. Great! The unit test executes, the mocked methods are ignored (and their behavior can be confirmed as part of other unit tests), and the verification works. But there's a new problem - the unit test has intimate knowledge of how you confirmed that behavior and any signature changes to any of those other 4 methods in the future, or any new methods that need to be added to the 'parent method', will result in having to change the unit test to avoid possible failures. Naturally the problem could be mitigated somewhat by simply having more methods accomplish less behaviors but I was hoping there was perhaps a more elegant solution available. Here's an example unit test that captures the problem. As a quick note 'MergeTests' is a unit testing class that inherits from the class we are testing and overrides behavior as needed. This is a 'pattern' we employ in our tests to allow us to override calls to external classes / dependencies. [TestMethod] public void VerifyMergeStopsSpinner() { var mockViewModel = new Mock<MergeTests> { CallBase = true }; var mockMergeInfo = new MergeInfo(Mock.Of<IClaim>(), Mock.Of<IClaim>(), It.IsAny<bool>()); mockViewModel.Setup(m => m.ClaimView).Returns(Mock.Of<IClaimView>); mockViewModel.Setup( m => m.TryMergeClaims(It.IsAny<Func<bool>>(), It.IsAny<IClaim>(), It.IsAny<IClaim>(), It.IsAny<bool>(), It.IsAny<bool>())); mockViewModel.Setup(m => m.GetSourceClaimAndTargetClaimByMergeState(It.IsAny<MergeState>())).Returns(mockMergeInfo); mockViewModel.Setup(m => m.SwitchToOverviewTab()); mockViewModel.Setup(m => m.IncrementSaveRequiredNotification()); mockViewModel.Setup(m => m.OnValidateAndSaveAll(It.IsAny<object>())); mockViewModel.Setup(m => m.ProcessPendingActions(It.IsAny<string>())); mockViewModel.Object.OnMerge(It.IsAny<MergeState>()); mockViewModel.Verify(mvm => mvm.StopSpinner(), Times.Once()); } How have the rest of you dealt with this or is there no great 'simple' way of handling it? Update - I appreciate everyone's feedback. Unfortunately, and it's no surprise really, there doesn't seem to be a great solution, pattern, or practice one can follow in unit testing if the code being tested is poor. I marked the answer that best captured this simple truth.
Fix the code to be better designed. If your tests have these issues, then your code will have worse issues when you try to change things. If you can't, then perhaps you need to be less ideal. Test against the pre and post-conditions of the method. Who cares if you're using the other 5 methods? They presumably have their own unit tests making it clear(er) what caused the failure when the tests fail. "unit tests should have only one reason to fail" is a good guideline, but in my experience, impractical. Hard to write tests don't get written. Fragile tests don't get believed.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/189805", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/83657/" ] }
190,096
I wrote the following code: if (boutique == null) { boutique = new Boutique(); boutique.setSite(site); boutique.setUrlLogo(CmsProperties.URL_FLUX_BOUTIQUE+fluxBoutique.getLogo()); boutique.setUrlBoutique(CmsProperties.URL_FLUX_BOUTIQUE+fluxBoutique.getUrl()); boutique.setNom(fluxBoutique.getNom()); boutique.setSelected(false); boutique.setIdWebSC(fluxBoutique.getId()); boutique.setDateModification(new Date()); boutiqueDao.persist(boutique); } else { boutique.setSite(site); boutique.setUrlLogo(CmsProperties.URL_FLUX_BOUTIQUE+fluxBoutique.getLogo()); boutique.setUrlBoutique(CmsProperties.URL_FLUX_BOUTIQUE+fluxBoutique.getUrl()); boutique.setNom(fluxBoutique.getNom()); //boutique.setSelected(false); boutique.setIdWebSC(fluxBoutique.getId()); boutique.setDateModification(new Date()); boutiqueDao.merge(boutique); } There is a commented-out line here. But I think it makes the code clearer, by making obvious what the difference is between if and else . The difference is even more noticeable with color highlighting . Can commenting out code like this ever be a good idea?
The biggest problem with this code is that you duplicated those 6 lines. Once you eliminate that duplication, that comment is useless. If you create a boutiqueDao.mergeOrPersist method you can rewrite this as: if (boutique == null) { boutique = new Boutique(); boutique.setSelected(false); } boutique.setSite(site); boutique.setUrlLogo(CmsProperties.URL_FLUX_BOUTIQUE+fluxBoutique.getLogo()); boutique.setUrlBoutique(CmsProperties.URL_FLUX_BOUTIQUE+fluxBoutique.getUrl()); boutique.setNom(fluxBoutique.getNom()); boutique.setIdWebSC(fluxBoutique.getId()); boutique.setDateModification(new Date()); boutiqueDao.mergeOrPersist(boutique); Code that either creates or updates a certain object is common, so you should solve it once, for example by creating a mergeOrPersist method. You certainly should not duplicate all the assignment code for those two cases. Many ORMs have built in support for this in some way. For example they might create a new row if the id is zero, and update an existing row if the id is not zero. The exact form depends on the ORM in question, and since I'm not familiar with the technology you're using, I can't help you with that. If you don't want to create a mergeOrPersist method, you should eliminate the duplication in some other way, for example by introducing a isNewBoutique flag. That may not be pretty, but it's still much better than duplicating the whole assignment logic. bool isNewBoutique = boutique == null; if (isNewBoutique) { boutique = new Boutique(); boutique.setSelected(false); } boutique.setSite(site); boutique.setUrlLogo(CmsProperties.URL_FLUX_BOUTIQUE + fluxBoutique.getLogo()); boutique.setUrlBoutique(CmsProperties.URL_FLUX_BOUTIQUE + fluxBoutique.getUrl()); boutique.setNom(fluxBoutique.getNom()); boutique.setIdWebSC(fluxBoutique.getId()); boutique.setDateModification(new Date()); if (isNewBoutique) boutiqueDao.persist(boutique); else boutiqueDao.merge(boutique);
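For completeness, here is one way the suggested mergeOrPersist method could look. This is only a sketch: it assumes a JPA-style DAO and a generated id with a getId() accessor on Boutique, neither of which is shown in the question, so adjust it to your actual persistence setup (some ORMs already offer an equivalent save-or-update call):

```java
import javax.persistence.EntityManager;

class BoutiqueDao {
    private final EntityManager entityManager;

    BoutiqueDao(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    // "id == null" stands in for "this entity has never been persisted";
    // change the check to match however Boutique identity is actually mapped.
    public void mergeOrPersist(Boutique boutique) {
        if (boutique.getId() == null) {
            entityManager.persist(boutique);
        } else {
            entityManager.merge(boutique);
        }
    }
}
```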
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190096", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8033/" ] }
190,120
When designing a system I am often faced with the problem of having a bunch of modules (logging, database access, etc.) being used by the other modules. The question is, how do I go about providing these components to other components? Two answers seem possible: dependency injection or using the factory pattern. However both seem wrong: Factories make testing a pain and don't allow easy swapping of implementations. They also don't make dependencies apparent (e.g. you're examining a method, oblivious to the fact that it calls a method that calls a method that calls a method that uses a database). Dependency injection massively swells constructor argument lists and it smears some aspects all over your code. A typical situation is where the constructors of more than half the classes look like this: (....., LoggingProvider l, DbSessionProvider db, ExceptionFactory d, UserSession sess, Descriptions d) Here's a typical situation I have a problem with: I have exception classes which use error descriptions loaded from the database, using a query whose parameter is the user's language setting, which lives in the user session object. So to create a new Exception I need a description, which requires a database session and the user session. So I'm doomed to dragging all these objects across all my methods just in case I might need to throw an exception. How do I tackle such a problem?
Use dependency injection, but whenever your constructor argument lists become too big, refactor them using a Facade Service. The idea is to group some of the constructor arguments together, introducing a new abstraction. For example, you could introduce a new type SessionEnvironment encapsulating the DBSessionProvider, the UserSession and the loaded Descriptions. To know which abstractions make the most sense, however, one has to know the details of your program. A similar question was already asked here on SO.
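A minimal sketch of that Facade Service idea, reusing the type names from the question; SessionEnvironment and the consuming ReportService are hypothetical names, and the interfaces are stubbed out just to keep the example self-contained:

```java
interface LoggingProvider { void info(String message); }
interface DbSessionProvider { /* ... */ }
interface UserSession { String language(); }
interface Descriptions { String describe(String errorCode, String language); }

// The facade: groups the session-related collaborators behind one abstraction.
final class SessionEnvironment {
    private final DbSessionProvider db;
    private final UserSession session;
    private final Descriptions descriptions;

    SessionEnvironment(DbSessionProvider db, UserSession session, Descriptions descriptions) {
        this.db = db;
        this.session = session;
        this.descriptions = descriptions;
    }

    // The facade can also host behaviour that needed all the pieces, e.g. resolving
    // the localized error description mentioned in the question.
    String errorDescription(String errorCode) {
        return descriptions.describe(errorCode, session.language());
    }
}

// A consumer now takes two constructor arguments instead of five.
final class ReportService {
    private final LoggingProvider log;
    private final SessionEnvironment environment;

    ReportService(LoggingProvider log, SessionEnvironment environment) {
        this.log = log;
        this.environment = environment;
    }
}
```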
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190120", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33026/" ] }
190,265
I wrote an application that helps you save energy. Actually it is very simple. I check the current location of the phone and I make some changes to the configuration, like "sound off, dark display, wifi off...", depending on the location of the user. Sony just released a new phone that includes one of my app's features (they actually have an extra entry in the options menu for this). I have no idea whether there is a patent for this function. Can I even release this app without risking being sued some day? I'm very confused about the whole "patent" situation. I'm about 20 years old and I can't even write a simple app without investing lots of money in a lawyer. Edit: I am not asking for legal advice. I just wanted to get an overview of how developers see or handle the whole situation.
I am not a lawyer. There's a special word for people who take anonymous legal advice from the Internet - "fool". Do a risk analysis - a) you don't write the software. outcome: Nothing. b) you do write the software. outcome #1: Sony doesn't notice and/or doesn't care. This might be a case of the "shallow pockets" defense - in their eyes you're not worth the effort to sue. outcome #2: Sony sees your work and is impressed by it. They may offer to purchase it from you. (That might be cheaper than suing you then coding it up themselves) outcome #3: Sony sees your work as an infringement that they must prevent. Step #1 (in the US, not sure about other countries) would be to send you a "cease & desist" letter. It's cheaper than suing, and chances are it's all they'd need to get you to stop. Outcomes #1 and #2 are not harmful to you. Outcome #3 would probably mean you'd simply have to stop selling your software. You have to decide how likely #3 is and whether or not you can afford to stop. My recommendation is to go ahead and write the software. You have much to gain (experience, reputation, possibly money) and little to lose. It's difficult to get the attention of large companies when you actually want it, so I'd expect outcome #1. Good luck!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190265", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/84063/" ] }
190,267
In java.util.PriorityQueue we have the methods add(E e) and offer(E e) . Both methods are documented as: Inserts the specified element into this priority queue. What are the differences between these two methods?
The difference is that offer() will return false if it fails to insert the element on a size restricted Queue , whereas add() will throw an IllegalStateException . You should use offer() when failure to insert an element would be normal, and add() when failure would be an exceptional occurrence (that needs to be handled).
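A quick illustration. Note that java.util.PriorityQueue itself is unbounded (its add() simply delegates to offer()), so the difference only becomes visible on a capacity-restricted queue such as ArrayBlockingQueue:

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

public class AddVsOffer {
    public static void main(String[] args) {
        Queue<Integer> queue = new ArrayBlockingQueue<>(1);

        System.out.println(queue.offer(1)); // true:  inserted
        System.out.println(queue.offer(2)); // false: queue is full, but no exception
        try {
            queue.add(2);                   // add() on a full queue...
        } catch (IllegalStateException e) {
            System.out.println(e);          // ...throws IllegalStateException: Queue full
        }
    }
}
```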
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190267", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/75548/" ] }
190,273
I've heard the statement that Python would be too slow to be of any use in browsers. I reckon Javascript is only superior in this respect because of companies like Google who need it fast (and made it fast) because they need it to survive, but I could be wrong. Are there any differences in how Python and Javascript are designed that have an impact on how they (would) perform in browsers? Since as of now there isn't a client-side Python implementation, my question comes from a statement someone made, so maybe it has something to do with the languages themselves (although I don't believe that).
To start with, we must make a clear distinction between languages and implementations . A language is an abstract thing, the implementation is a concrete thing that can have performance measured. For example, Lisp was once considered far too inefficient for practical use but compilers kept maturing and, eventually, dedicated hardware was developed for it; at one point in the 1980s it was the development platform of choice for high performance workstation development. That said, the simplest answer is that a fast Javascript implementation like Google's V8 blows the standard implementation of Python (CPython) out of the water . V8 is a highly optimized virtual machine with a JITer that is amazingly fast while CPython is a fairly simple VM in comparison. There's an implementation of Python with a JIT but that is still only about 5-6x faster. Five years ago it would have been a different story. Browsers had simplistic Javascript implementations because speed wasn't a concern since nobody built 'real' software with it and Python would have been equal, if not faster.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190273", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/73649/" ] }
190,311
In a conditional statement (IF) everyone uses (position < size), but why? Is it only convention, or is there a good reason for that? Found in the wild: if (pos < array.length) { // do something with array[pos]; } Rarely found: if (array.length > pos) { // do something with array[pos]; }
The deeper pattern is that we naturally use "[thing that varies] [comparison] [thing that does not vary]" as the standard order. This principle holds true for your example because position may vary, while size will not. The only common exception is when testing for equality some programmers train themselves to use the opposite order (known as Yoda conditions ) in order to avoid the common variable = constant rather than variable == constant bug -- I don't hold with this idea because I find the natural ordering described above much more readable, primarily because it's the way we express the idea in English, and because most modern compilers will detect this and issue a warning.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190311", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72317/" ] }
190,482
Instead of a database I just serialize my data to JSON, saving it to and loading it from disk when necessary. All the data management is done in the program itself, which is faster AND easier than using SQL queries. For that reason I have never understood why databases are necessary at all. Why should one use a database instead of just saving the data to disk?
You can query data in a database (ask it questions). You can look up data from a database relatively rapidly. You can relate data from two different tables together using JOINs. You can create meaningful reports from data in a database. Your data has a built-in structure to it. Information of a given type is always stored only once. Databases are ACID . Databases are fault-tolerant. Databases can handle very large data sets. Databases are concurrent; multiple users can use them at the same time without corrupting the data. Databases scale well. In short, you benefit from a wide range of well-known, proven technologies developed over many years by a wide variety of very smart people. If you're worried that a database is overkill, check out SQLite.
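If you want to try the SQLite suggestion with minimal ceremony, here is a hedged sketch in Java; it assumes the Xerial sqlite-jdbc driver is on the classpath, and the table and file names are made up:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqliteDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS person (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)");
                st.execute("INSERT INTO person(name, age) VALUES ('alice', 31), ('bob', 24)");
            }
            // The point of the exercise: you can ask the data questions instead of
            // loading a whole JSON file and filtering it in application code.
            try (PreparedStatement ps = conn.prepareStatement("SELECT name FROM person WHERE age > ?")) {
                ps.setInt(1, 30);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }
}
```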
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190482", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54384/" ] }
190,485
I am new, so forgive me if my question is mistaken in any way; just give me a heads-up and I'll be glad to fix it. My team and I are about to develop a system where the database is located on a private server, and the applications are distributed among clients in distant locations. It is actually a simple CRUD application, but our primary concern is how poor the internet connection is for some remote clients. I thought about giving WCF message queuing a try, so that when the connection is down I can save the message and send it later once the connection is back up, but I don't have a really clear solution in mind. Currently, my solution would be something like this: CLIENT MySolution.sln -- MyDataAccess ---- Entities (Contains my object class definitions and properties) ---- Repositories (Handles database communication) ---- Services (Handles message queue to server CRUD) -- MyClassLibraries (Contains third party's DLLs) -- MyHelper (Contains helper classes and functions) -- MyCore (Main application project) ---- Model ---- View ---- ViewModel and for the server: SERVER -- MyBackgroundService (Responsible for fetching incoming messages and doing CRUD against the database) Is there a better solution to this? I am not good enough yet to see it. Please let me know if I violate some rules here. Cheers!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190485", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/84271/" ] }
190,649
Recently I had a discussion with a colleague regarding code style. He was arguing that your usage of APIs and the general patterns you are using should be as similar as possible with the surrounding code, if not with the the codebase as a whole, just as you would with code appearance (brace positioning, capitalisation etc). For example if I were adding a method to a DAO class in C# I would try to use LINQ where appropriate to help make my code clean and easy to maintain, even if none of the other methods in that class were using it. However, my colleague would argue that I should not use it in that instance because it would be against the existing style of that class and thus harder to understand. At first I found his position rather extreme, but after thinking it over for a while I am beginning to see his point. With the hypothetical LINQ example, perhaps this class doesn't contain it because my colleagues are unfamiliar with LINQ? If so, wouldn't my code be more maintainable for my fellow developers if I didn't use it? On the other hand, if I truly believe that using such a technique would result in cleaner code, then shouldn't I use it even if it differs drastically from the surrounding code? I think that the crux of my colleague's argument is that if we all go about implementing similar functionality in a codebase in different ways, and we each think that our way is "best", then in the end the code as a whole just gets harder to understand. However at the moment I still think that if we blindly follow the existing code too much then the quality will just slowly rot over time. So, to what extent are patterns part of code style, and where should we draw the line between staying consistent and making improvements?
To give a more general answer: In a case like this, you have two programming "best practices" that are opposed to each other: code consistency is important, but so is choosing the best possible method to accomplish your task. There is no one correct answer to this dilemma; it depends on a couple factors: How beneficial is the "correct" way? Sometimes the new and improved best practice will dramatically increase performance, eliminate bugs, be far easier to program, etc. In such a case, I would lean heavily toward using the new method. On the other hand , the "correct way" may be little more than syntactic sugar, or an agreed idiomatic method of doing something that is not actually superior. In that case, code consistency is probably more important. How big of a problem would inconsistency create? How interconnected is the new code with legacy code? Is your new code part of a library? Does it create an object that gets passed to many parts of the program? In cases like these, consistency is very important. Using a new API, or a new way of doing things in general, might create subtly different results that break assumptions elsewhere in your program. On the other hand , if you are writing a fairly isolated piece of code, inconsistency is less likely to be a problem. How large and how mature is your code base? How many developers need to understand it and work on it? Agreed-upon, consistent standards are much more important for larger projects. Does the code need to run in older environments that may not support the latest features? Based on the balance of these issues, you have to make the right choice about which route to take. I personally see little value in consistency for consistency's sake, and would prefer to use the latest, best methods unless there is a significant cost to do so. Of course, there is a third option: rewriting the existing code so that it uses the best methods and is consistent. There are times when this is necessary, but it comes with a high cost.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190649", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57177/" ] }
190,687
The string type is immutable. We can use the const keyword with strings in high-level languages like .NET. My understanding is that 'const' means constant (it remains the same; we can't change the value). Are strings not always constant, then? (IMO the term "constant" shouldn't really apply in the same sense if the object has to be recreated on every change; it was only constant for that value's lifetime.) In high-level languages, specifically .NET (although I'd be interested in Java too), is this due to general memory management/tracking of objects, or is there another reason?
You are confusing two different things:
Immutable means the object's memory contents cannot be modified. When you "modify" an immutable object (e.g., a string), the memory contents of that object are not modified. Instead:
- A new block of memory is allocated.
- The contents of the object you (tried to) modify are copied to this new block, with the part you wanted to change changed in the new block.
- The pointer (i.e., the reference) is assigned to this new block.
Constant means the variable cannot be modified after compile time. Whether it is a string or an integer, the contents of the variable (or what it points to) cannot be changed or reassigned; the value is fixed at compile time.
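Since the question mentions Java too, here is a small, illustrative Java sketch of the same distinction (the class and variable names are made up for the example). A final reference is "constant" only in the sense that it cannot be reassigned, while immutability is a property of the object itself:

public class ImmutableVsConstant {
    public static void main(String[] args) {
        // final makes the *reference* constant, not the object:
        final StringBuilder sb = new StringBuilder("abc");
        sb.append("def");              // allowed: the object's contents change
        // sb = new StringBuilder();   // compile error: the reference cannot be reassigned

        // String is immutable: "modifying" it produces a new object
        String s = "abc";
        String t = s.concat("def");    // s is untouched; t refers to a new String
        System.out.println(s + " " + t);  // prints "abc abcdef"
    }
}

This is only a sketch of the concept, not a claim about how any particular runtime manages its memory.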
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190687", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72130/" ] }
190,716
C++ has a feature (I cannot figure out the proper name of it), that automatically calls matching constructors of parameter types if the argument types are not the expected ones. A very basic example of this is calling a function that expects a std::string with a const char* argument. The compiler will automatically generate code to invoke the appropriate std::string constructor. I'm wondering, is it as bad for readability as I think it is? Here's an example: class Texture { public: Texture(const std::string& imageFile); }; class Renderer { public: void Draw(const Texture& texture); }; Renderer renderer; std::string path = "foo.png"; renderer.Draw(path); Is that just fine? Or does it go too far? If I shouldn't do it, can I somehow make Clang or GCC warn about it?
This is referred to as a converting constructor (or sometimes implicit constructor or implicit conversion). I'm not aware of a compile-time switch to warn when this occurs, but it's very easy to prevent; just use the explicit keyword.

class Texture
{
public:
    explicit Texture(const std::string& imageFile);
};

As to whether or not converting constructors are a good idea: it depends.
Circumstances in which implicit conversion makes sense:
- The class is cheap enough to construct that you don't care if it's implicitly constructed.
- Some classes are conceptually similar to their arguments (such as std::string reflecting the same concept as the const char * it can implicitly convert from), so implicit conversion makes sense.
- Some classes become a lot more unpleasant to use if implicit conversion is disabled. (Think of having to explicitly invoke std::string every time you want to pass a string literal. Parts of Boost are similar.)
Circumstances in which implicit conversion makes less sense:
- Construction is expensive (such as your Texture example, which requires loading and parsing a graphic file).
- Classes are conceptually very dissimilar to their arguments. Consider, for example, an array-like container that takes its size as an argument:

class FlagList
{
    FlagList(int initial_size);
};

void SetFlags(const FlagList& flag_list);

int main()
{
    // Now this compiles, even though it's not at all obvious
    // what it's doing.
    SetFlags(42);
}

- Construction may have unwanted side effects. For example, an AnsiString class should not implicitly construct from a UnicodeString, since the Unicode-to-ANSI conversion may lose information.
Further reading:
- The C++ FAQ on explicit constructors
- The Google C++ Style Guide says to nearly always use explicit constructors.
- This StackOverflow question goes into more pros and cons.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190716", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53493/" ] }
190,719
What is the difference between the terms concurrent and parallel execution? I've never quite been able to grasp the distinction. The tag defines concurrency as a manner of running two processes simultaneously, but I thought parallelism was exactly the same thing, i.e.: separate threads or processes which can potentially be run on separate processors. Also, if we consider something like asynchronous I/O, are we dealing with concurrency or parallelism?
Concurrency and parallelism are two related but distinct concepts. Concurrency means, essentially, that task A and task B both need to happen independently of each other, and A starts running, and then B starts before A is finished. There are various different ways of accomplishing concurrency. One of them is parallelism--having multiple CPUs working on the different tasks at the same time. But that's not the only way. Another is by task switching, which works like this: Task A works up to a certain point, then the CPU working on it stops and switches over to task B, works on it for a while, and then switches back to task A. If the time slices are small enough, it may appear to the user that both things are being run in parallel, even though they're actually being processed in serial by a multitasking CPU.
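To make the distinction concrete, here is a small, illustrative Java sketch (the task names are made up). It demonstrates concurrency directly; whether the two tasks also run in parallel depends on the hardware and the scheduler:

public class ConcurrencyVsParallelism {
    static void work(String name) {
        for (int i = 0; i < 3; i++) {
            System.out.println(name + " step " + i + " on " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Task B starts before task A has finished, so the two tasks are concurrent.
        Thread a = new Thread(() -> work("A"));
        Thread b = new Thread(() -> work("B"));
        a.start();
        b.start();
        a.join();
        b.join();
        // Whether they were also parallel depends on the machine:
        // on a single core the scheduler time-slices between the threads,
        // on a multi-core CPU they may genuinely execute at the same instant.
    }
}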
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190719", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61277/" ] }
190,746
I find myself writing (hopefully) helpful comments in code (C++) documentation of the type: The reason we are doing this is... The reason I use "we" instead of "I" is because I do a lot of academic writing where "we" is often preferred. So here's the question. Is there a good reason to prefer one over the other in documenting code: Use "We": The reason we are doing this is... Use "I": The reason I am doing this is... Use my name: The reason [my name] did this is... Passive voice: The reason this was done is... Neither: Do this because... I choose #1 because I'm used to writing that way, but documentation is not for the writer, it's for the reader, so I'm wondering if adding the developer name is helpful or if that just adds yet another thing that needs to be changed when maintaining the code.
I'd go with: #6. Declarative: ... Rather than say "The reason this was done is because each Foo must have a Bar", just say "Each Foo must have a Bar". Make the comment into an active statement of the reason, rather than a passive one. It's generally a better writing style overall, better fits the nature of code (which does something), and the "the reason this was done" phrase adds no information whatsoever. It also avoids exactly the problem you're encountering.
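As a tiny illustration (Foo and Bar come from the question; the attach call is invented for the example), the same comment in the two styles next to the code it explains:

// Wordy:       The reason this was done is because each Foo must have a Bar.
// Declarative: Each Foo must have a Bar.
foo.attach(new Bar());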
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190746", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25589/" ] }
190,770
I am most of the way through my games programming degree. This is not a computer science degree, so a lot of the theory is eschewed in favour of practical portfolio building and what I see as JIT learning, which is apparently more important in the games industry. The first subject was "Introduction to Object-Oriented Programming". That phrase didn't bother me until I learned about the different programming paradigms (I'm getting this list from https://en.wikipedia.org/wiki/Comparison_of_programming_paradigms ): Imperative Functional Procedural Structured Event-Driven Object-Oriented Declarative Automata-Based I get that this is not an exhaustive list, and that not all of these concepts are equal, and most of them aren't even exclusive, but I don't understand why most of them get just one word - imperative; functional; declarative - but when we talk about programming with objects, we have to clarify that we are oriented around those objects. Can't we just use objects? Can't we just have objects? Why must they orient us, as our guiding star? Looking here ( https://en.wikipedia.org/wiki/Object-oriented_programming ), nowhere is the use of the term "oriented" addressed as its own term. Only "object" is explained. Also, I can see for practical reasons why Event-Driven is used, because Event Programming is already a thing that you do when you're running a conference, and Automata Programming makes it sound like you're setting up a robotic production line, so it helps to have additional clarifying words there. What makes Object Programming, as a phrase, not enough to describe what we do when we use objects in our programming? Obviously from my tone I'm not too fond of the word "oriented". It reminds me of my time as a court reporter, listening to lawyer after lawyer use the phrase "in relation to" as a kind of verbal tick. It didn't mean anything; it was just a term that they used to fill the air while they tried to think of what to say next. However, I'm not trying to advocate a change of language, I'm just asking why it is the way it is. If someone knows why it came to be known that way for purely historical, vestigial reasons, then that's the answer. It will be ammunition if I ever decide to waste my time advocating for a change of language. On the other hand, if there is actually a useful reason for why a language or piece of code must point towards objects, to the exclusion of all other directions, as opposed to merely having them in its toolbelt, as tools , I would really be interested to learn about it. I like learning useful things.
I believe you're reading way too much into a simple grammatical construct. Take a look at your list of paradigms, sorted differently for a reason we will get to shortly:
- Imperative
- Functional
- Procedural
- Structured
- Declarative
- Event-Driven
- Automata-Based
- Object-Oriented
What do the words all have in common? They're all adjectives because they're intended to modify the word "programming". Furthermore, with the exception of "imperative" they're all not "natural" adjectives, but "adjectified" nouns - nouns that actually describe the core of the paradigm: function, structure, automata, and object. And there are two different ways in which the nouns are adjectified: through a suffix like -al or -ed, or through creating a composite word using a hyphen. Now, as Doc Brown has pointed out, the suffixes that could be used to adjectify "object" result in a different meaning. Which leaves composition. And I submit that it is pure coincidence or taste that Alan Kay chose to use "oriented" for his composite adjective "object-oriented". It could just as well have been "object-driven", or "object-based", and you might read too much into those as well. Doesn't "driven" sound like some sort of unhealthy obsession?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190770", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/84511/" ] }
190,797
I've been programming for a little under a year and have some experience writing systems applications, web apps, and scripts for businesses/organizations. However, one thing I've never really done is working with a framework like Django, Rails or Zend. Looking over the Django framework, I'm a little frustrated with how much is abstracted away in frameworks. I understand the core goals of DRY and minimal code, but some of this over-reliance on different modules and heavy abstraction of core functions feels like it: Makes programs get dated really fast because of the ever-changing nature of modules/frameworks, Makes code hard to understand because of the plethora of frameworks and modules available and all of their idiosyncrasies, Makes code less logical unless you've read all of the documentation; i.e., I can read through some list comprehensions and conditional logic and figure out what a program is doing, but when you see functions that require passing in arbitrary strings and dictionaries, things get a little hard to understand unless you're already a guru in a given module; and: Makes it difficult and tedious to switch between frameworks. Switching between languages is already a challenge, but it's manageable if you have a strong enough understanding of their core functionality/philosophy. Switching between frameworks seems to be more a matter of rote memorization, which in some ways seems to encourage the very inefficiency these frameworks were designed to eliminate. Do we really need to put like 50 layers of abstraction on top of something as simple as a MySQL query? Why not use something like PHP's PDO interface, where prepared statements/input testing is handled but the universally understandable SQL query is still a part of the function? Are those abstractions really useful? Isn't feature bloat making them useless, making applications more difficult compared to similar applications written without using a framework?
Frameworks can be tricky indeed. Problems can easily arise when a framework is too "opinionated", i.e. when it really prefers one particular style of application and all parts are geared towards supporting this particular style. For instance, if the framework completely abstracts the authentication process of a user by allowing you to just add one component, add a login template somewhere and voila, you get user authentication for free. This saved you a lot of repetitive work in worrying about cookies, session storage, password hashing and whatnot. The problems begin when you realize that the default behavior of the framework's authentication code is not what you need. Maybe it's not following the latest best security practices. Maybe you need a custom hook in the process to trigger some action, but the framework doesn't offer one. Maybe you need to change the details of the cookie that is being set, but the framework offers no way to customize this. The abstraction afforded by the framework allowed you to add an important feature to your site within minutes instead of days initially, but in the end you may have to fight against the framework to make it do what you need it to do, or you'll have to reinvent the functionality from scratch again anyway to suit your needs. This is not to say framework abstractions are bad, mind you. It is to say that this is always a possibility you need to keep in mind. Some frameworks are explicitly geared towards this, they offer a way to get something up very quickly as a prototype or even production framework for a very specific, limited type of app. Other frameworks are more like loose collections of components which you can use, but which still allow you a lot of flexibility to change it around later. When using an abstraction, you should always understand at least roughly what it's abstracting away. If you don't understand cookies and authentication systems, starting from an abstraction is not a good idea. If you do understand what you're trying to do and just need code which already does it instead of having to tediously write your own, abstractions are a great time saver. Poorly written abstractions can get you into trouble later though, so it's a double edged sword. You should also distinguish between technical abstractions and "business rule" abstractions. Programming in a higher level language is an abstraction you probably don't want to miss (Python, PHP, C# vs. C vs. Assembler; Less vs. CSS), while "business rule abstractions" can be difficult if they don't meet your needs exactly (one-click authentication vs. hand-coding cookies). That's because technical abstractions rarely "leak", i.e. you will hardly have to debug machine code when writing applications in Python. Business rule abstractions work on the same technical level though and are just "code bundles" really. You probably will need to debug the cookie that is set or the password hash that is created at some point, which means you'll be diving through lots of 3rd party code.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190797", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/84520/" ] }
190,891
Although Joda is feature rich and more sophisticated than standard Java time, it may not always be the best thing to use. How do I decide if I should use Joda Time or Java Time in any Java code? Is there some kind of guideline which tells us how to pick the right one depending on our requirements?
Joda Time is such an improvement over the Java time library that it is almost always the right choice, apart from the following exceptions:
1. When it is difficult or undesirable to add third-party dependencies to your project
2. When its use in a public interface would cause issues, e.g. getting an ORM to handle both Java and Joda time fields
However, in the case of 2) it would still be better to use Joda internally if possible. The above things are worth keeping in mind, but should be rare. If in doubt, go with Joda.
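For a feel of the difference, here is a small illustrative sketch (assuming the joda-time library is on the classpath). Joda-Time values are immutable with a fluent API, whereas the pre-Java-8 standard library pushes you through a mutable Calendar:

import org.joda.time.DateTime;
import org.joda.time.LocalDate;

public class JodaVsCalendar {
    public static void main(String[] args) {
        // Joda-Time: immutable values, fluent date arithmetic
        LocalDate nextWeek = new LocalDate().plusWeeks(1);
        DateTime inTwoHours = new DateTime().plusHours(2);
        System.out.println(nextWeek + " / " + inTwoHours);

        // Standard java.util equivalent: mutable and more verbose
        java.util.Calendar cal = java.util.Calendar.getInstance();
        cal.add(java.util.Calendar.WEEK_OF_YEAR, 1);
        System.out.println(cal.getTime());
    }
}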
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190891", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/84619/" ] }
190,955
I see most immutable POJOs written like this: public class MyObject { private final String foo; private final int bar; public MyObject(String foo, int bar) { this.foo = foo; this.bar = bar; } public String getFoo() { return foo; } public int getBar() { return bar; } } Yet I tend to write them like this: public class MyObject { public final String foo; public final int bar; public MyObject(String foo, int bar) { this.foo = foo; this.bar = bar; } } Note the references are final, so the Object is still immutable. It lets me write less code and allows shorter (by 5 chars: the get and () ) access. The only disadvantage I can see is if you want to change the implementation of getFoo() down the road to do something crazy, you can't. But realistically, this never happens because the Object is immutable; you can verify during instantiation, create immutable defensive copies during instantiation (see Guava's ImmutableList for example), and get the foo or bar objects ready for the get call. Are there any disadvantages I'm missing? EDIT I suppose another disadvantage I'm missing is serialization libraries using reflection over methods starting with get or is , but that's a pretty terrible practice...
Four disadvantages that I can think of:
1. If you want to have a read-only and mutable form of the same entity, a common pattern is to have an immutable class Entity that exposes only accessors with protected member variables, then create a MutableEntity which extends it and adds setters. Your version prevents it.
2. The use of getters and setters adheres to the JavaBeans convention. If you want to use your class as a bean in property-based technologies, like JSTL or EL, you need to expose public getters.
3. If you ever want to change the implementation to derive the values or look them up in the database, you'd have to refactor client code. An accessor/mutator approach allows you to only change the implementation.
4. Least astonishment - when I see public instance variables, I immediately look for who may be mutating it and worry that I am opening pandora's box because encapsulation is lost. http://en.wikipedia.org/wiki/Principle_of_least_astonishment
That said, your version is definitely more concise. If this were a specialized class that is only used within a specific package (maybe package scope is a good idea here), then I may consider this for one-offs. But I would not expose major API's like this.
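A quick sketch of the first point, so it's clear what the public-final-field style rules out (the class names are taken from the answer, the fields are invented for the example, and in Java the two public classes would live in separate files). With protected fields and getters only, a mutable subclass can be layered on top, which is impossible once the fields are final:

// Read-only form: no setters, fields are protected rather than public final
public class Entity {
    protected String foo;
    protected int bar;

    public Entity(String foo, int bar) { this.foo = foo; this.bar = bar; }

    public String getFoo() { return foo; }
    public int getBar() { return bar; }
}

// Mutable form just adds setters; read-only callers keep depending on Entity
public class MutableEntity extends Entity {
    public MutableEntity(String foo, int bar) { super(foo, bar); }

    public void setFoo(String foo) { this.foo = foo; }
    public void setBar(int bar) { this.bar = bar; }
}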
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190955", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/76334/" ] }
190,962
I'm managing a small team of developers. Every so often we decide we're going to spend a day or two to clean up our code. Would it be a good idea to schedule regular time, say 1 week every 2 months, to just cleaning up our codebase?
No. Fix it while you're working on it:
- If you wait to refactor the bit you're working on, you'll forget a lot about it, and have to spend time to get familiar with it again.
- You won't end up "gold-plating" code that ends up never being used because requirements changed.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/190962", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/84667/" ] }
191,003
I'm creating a piece of software that will run on Windows and act as a launcher for a game, serving as an auto-updater and file verifier on the client PC. One thing I don't understand is why my antivirus software (Avast) considers my exe file dangerous and won't start it without asking to put it into a sandbox for safe use. Are there any rules that my software should obey to be treated as good, or should I pay hundreds of dollars for some sort of digital signing and other stuff? I'm using C# with MS Visual Studio 2010. VirusTotal report. No DLL injection; it works as a remote file downloader, using the WebClient() class. It's not that it warns about a virus, but it "suggests" sandboxing it. Look at the screenshot:
"File prevalence/reputation is low" means Avast uses a reputation system based on the usage of the program. Only if your program has been installed and 'marked as benevolent' by enough users will it develop a good reputation and will this suggestion go away. Avast calls this the FileRep cloud feature and says "All new unknown files are potentially dangerous. Whenever they have become widespread, there will not be a reason to AutoSandbox them anymore". This is a PITA for small software companies (and Avast is not the only one doing this, note e.g. Symantec's Suspicious Insight" ). One thing Avast suggests is "you can accelerate the process if you digitally sign the files." Locally (on your computer) you can go to autosandbox expert settings and disable autosandboxing files with a low reputation, or maybe use a self-signed certificate, but that won't help you with your end users. For those I suggest you do use a real certificate (costs money, but Windows likes it too), and update your documentation with this info. Maybe there's more suggestions at the Avast forums as well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191003", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35316/" ] }
191,010
I do not consider myself a DDD expert but, as a solution architect, do try to apply best practices whenever possible. I know there is a lot of discussion around the pro's and con's of the no (public) setter "style" in DDD and I can see both sides of the argument. My problem is that I work on a team with a wide diversity in skills, knowledge and experience meaning that I cannot trust that every developer will do things the "right" way. For instance, if our domain objects are designed so that changes to the object's internal state is performed by a method but provide public property setters, someone will inevitable set the property instead of calling the method. Use this example: public class MyClass { public Boolean IsPublished { get { return PublishDate != null; } } public DateTime? PublishDate { get; set; } public void Publish() { if (IsPublished) throw new InvalidOperationException("Already published."); PublishDate = DateTime.Today; Raise(new PublishedEvent()); } } My solution has been to make property setters private which is possible because the ORM we are using to hydrate the objects uses reflection so it is able to access private setters. However, this presents a problem when trying to write unit tests. For example, when I want to write a unit test that verifies the requirement that we can't re-publish, I need to indicate that the object has already been published. I can certainly do this by calling Publish twice, but then my test is assuming that Publish is implemented correctly for the first call. That seems a little smelly. Let's make the scenario a little more real-world with the following code: public class Document { public Document(String title) { if (String.IsNullOrWhiteSpace(title)) throw new ArgumentException("title"); Title = title; } public String ApprovedBy { get; private set; } public DateTime? ApprovedOn { get; private set; } public Boolean IsApproved { get; private set; } public Boolean IsPublished { get; private set; } public String PublishedBy { get; private set; } public DateTime? PublishedOn { get; private set; } public String Title { get; private set; } public void Approve(String by) { if (IsApproved) throw new InvalidOperationException("Already approved."); ApprovedBy = by; ApprovedOn = DateTime.Today; IsApproved = true; Raise(new ApprovedEvent(Title)); } public void Publish(String by) { if (IsPublished) throw new InvalidOperationException("Already published."); if (!IsApproved) throw new InvalidOperationException("Cannot publish until approved."); PublishedBy = by; PublishedOn = DateTime.Today; IsPublished = true; Raise(new PublishedEvent(Title)); } } I want to write unit tests that verify: I cannot publish unless the Document has been approved I cannot re-publish a Document When published, the PublishedBy and PublishedOn values are properly set When publised, the PublishedEvent is raised Without access to the setters, I cannot put the object into the state needed to perform the tests. Opening access to the setters defeats the purpose of preventing access. How do(have) you solve(d) this problem?
"I cannot put the object into the state needed to perform the tests."
If you cannot put the object into the state needed to perform a test, then you cannot put the object into that state in production code, so there's no need to test that state. Obviously, this isn't true in your case: you can put your object into the needed state, just call Approve.
I cannot publish unless the Document has been approved: write a test that calling publish before calling approve causes the right error without changing the object state.

void testPublishBeforeApprove() {
    doc = new Document("Doc");
    AssertRaises(doc.publish, ..., NotApprovedException);
}

I cannot re-publish a Document: write a test that approves an object, then calling publish once succeeds, but the second time causes the right error without changing the object state.

void testRePublish() {
    doc = new Document("Doc");
    doc.approve();
    doc.publish();
    AssertRaises(doc.publish, ..., RepublishException);
}

When published, the PublishedBy and PublishedOn values are properly set: write a test that calls approve then publish, and assert that the object state changes correctly.

void testPublish() {
    doc = new Document("Doc");
    doc.approve();
    doc.publish();
    Assert(doc.PublishedBy, ...);
    ...
}

When published, the PublishedEvent is raised: hook into the event system and set a flag to make sure it's called.
You also need to write tests for approve. In other words, don't test the relation between internal fields and IsPublished and IsApproved; your tests would be quite fragile if you did that, since changing your fields would mean changing your test code, so the test would be quite pointless. Instead you should test the relationship between calls of public methods; this way, even if you modify the fields you won't need to modify the test.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191010", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23991/" ] }
191,033
This is a question about logistics, not a technical question. My company has outsourced some embedded software work. Specifically, we have payed a contractor to develop an embedded system for us since we do not have adequate in-house knowledge to do it ourselves (we only have desktop application developers). So, the contractors have finished the software and they have asked if they may deliver it to us in a virtual machine. The VM is a Windows 8 machine containing the pre-configured CodeWarrior IDE with the source code as a CodeWarrior project. The idea is that this will allow us to make code changes within the VM that is already configured for further development of this project. Are there any drawbacks to doing this as opposed to having them walk us through how to configure our own development machines to make code changes to the project? The only problem I can foresee is the VM running slowly and it taking a long time to rebuild the project when we make code changes. But on the other hand, I like the idea of getting a pre-configured embedded system development environment so I don't have to add yet another IDE on my desktop application dev machine. I can't really think of a good reason why not to accept a VM deliverable, but I just wanted to run it by this community in case there's something I'm missing.
The problem I see is that the knowledge of setting up and configuring the virtual machine is not in-house, and if configuration is non-trivial then you'll be relying on the other company when the software needs to be configured for different versions of the OS/libraries/hardware/whatever. Accepting the VM is fine to get up and running faster, but I'd insist on getting walk-through on how to configure your own system for future maintenance.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191033", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/49209/" ] }
191,103
I am currently a professional programmer. I want to expand my skillset, but I also want to make the career jump to being a dev lead as part of a team. I know there's got to be a lot to learn (and this won't be an instant thing) but I think I'm smart enough to do it and I'm up to the challenge. I'm sure that many of the members here have probably gone through this themselves, and are now successful dev leads. Unfortunately, even though I know some personal areas I'd like to improve (depth of knowledge, breadth of knowledge, skillsets, etc), I'm not really sure how I would start something like this. As a programmer now, what steps should I take to get me to this goal? What should I prioritize?
To become a technical lead, the following are essential:
- The ability to mentor staff members at all levels of seniority, from someone who has been out of uni for 3 months to a person who has been programming for 30 years
- A good knowledge of your development domain. This includes: languages, frameworks, utilities, development environments
- A solid understanding of issue management systems, project management skills and version control
- Be the go-to bug killer
- Know how to conduct timely code reviews, what to look for and how to minimise the amount of time they take to hold and for the changes to be made
- Keep up-to-date with the developments in your development domain. For example, if you didn't learn new frameworks or technologies from .NET 2, you'd be doing things in quite a backwards way today.
- How to write unit tests and mocks, and to get your developers to write them too
- Knowledge of what design patterns are and when to use them
- Knowledge of what code smells are and how to mitigate them
- Continuous integration
- The ability to plan projects and releases
Depending on your organisation and whether you have architects on staff, you would probably need to know the following:
- The ability to componentize your projects and break them into functional parts
- A thorough understanding of security, including the correct way of handling passwords, separating systems, securing data, etc.
- Enterprise concepts such as service buses, message queues, BizTalk
- Enterprise design patterns
- Service architectures / RPC such as SOAP and REST
- ORM frameworks such as Hibernate, Entity Framework, Doctrine
- Continual deployment
- The cloud
- The ability to recommend the correct technologies to use for a project. This might be difficult if your team / shop only does .NET, or PHP, or Java.
- Design the application in such a way that future enhancements will be easily accommodated
If you are going to be a development manager then you will also need:
- Interviewing skills and how to find the right staff
- How to deal with people problems with your team members
- Managing business directives/goals and converting relevant ones to information for your developers
- The ability to estimate the time for programmers of varying skills
- The ability to allocate tasks to the correct developers based on their skills and abilities
And finally, some other recommended points:
- Learn outside of your development domain
- Learn to say NO when things aren't possible or are out of scope or conflict with constraints such as budget or time.
Managing a team is a challenging role to be in. You need to be the person that can answer any question, you need to know the right technologies to use (unless you have an architect), and you have to have people management skills and be approachable by your staff (assuming a management position). In addition to this, you need to have accurate estimating skills to ensure project profitability, and you need to be able to get your hands dirty with anyone's code to pinpoint problems and fix them quickly. You need to avoid wanting to do everything yourself and to foster a team environment that is not toxic. You need to continually stay on top of your technology stack and learn the latest developments and techniques, as well as broader industry-wide trends. You should also really know at least one database platform, and know it well. Know how to do replication, stored procedures, how the query optimiser works, how to design a schema properly, and what fields to index.
Regardless of the exact position, any senior role requires you to have the ability to communicate effectively. If you're not a confident speaker, look at doing something like Toast Masters (public speaking). Learn how to make and hold eye contact. Be confident. Dress appropriately for the position. Lead by example.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191103", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
191,309
I've just taken on a new job at a college as (the sole) Web application developer. The college has a number of disparate but all pretty badly coded legacy systems. Mostly built in PHP, they deal with things like attendance, exam results, marking etc. My first job is to build a system that incorporates a lot of this data, which is currently resting in various databases without any kind of friendly API to pull it out (the existing systems are coded in vanilla PHP with no separation of data and view), with a new platform for recording pastoral information about students that presents it to tutors and senior staff in a useful manner so they can react to issues with students quickly. In our first meeting, there were 18 people! There was no clear leader or voice that represented the majority. No identifiable client. The meeting swung from detailed implementation ideas on minor features from heads of faculty to arguments about whether we should use Excel spreadsheets or not for data input! As you can imagine my head was spinning at the end. I actually had a lot of good ideas but I couldn't get them heard. This is a very new role for me; before, I was part of a development team in a marketing agency. We had very well defined roles: Project Manager, Client, Designer, Developer. I'd like to know if any seasoned developers or managers out there can give me some pointers on how I can whip my colleagues up into something that resembles a project team. Is agile the way to go? How would you approach handling all the disparate voices? It's clear that some process needs to be put in place very quickly, I'm just not sure what that is.
I would not expect any "agile development process" here as a solution to your current problem. First thing for you should be: clear your mission . That means: clarify what your own responsibilities are clarify what the responsibilities of the other stakeholders are identify who is responsible for each of the legacy systems if there is no client (yet) for your web application, find one who is going to use it in the future and ask for permission to incorporate him as a representative user of your system (a person you can discuss the requirements with) if there are different stakeholders with different goals, collect their requirements (for example, by interviewing them one-by-one, not 18 people alltogether in one room). Write the results on a list. Afterwards, start priorizing. write down a roadmap (the big picture) and a small spec for release 0.1 and make your boss as well as the representative client agree to it formally EDIT: see GlenH7`s comment This can take a while, you will probably don't write much code at this stage of the project. In such a situation, you should do some "requirements engineering" first. But start small, think big. Once you have developed your first release, you will have something to show around, discuss requirements again with the stakeholders etc.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191309", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59642/" ] }
191,349
These are Robert C. Martin's rules for TDD : You are not allowed to write any production code unless it is to make a failing unit test pass. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures. You are not allowed to write any more production code than is sufficient to pass the one failing unit test. When I write a test that seems worthwhile but passes without changing production code: Does that mean I did something wrong? Should I avoid writing such tests in the future if it can be helped? Should I leave that test there or remove it? Note: I was trying to ask this question here: Can I start with a passing unit test? But I wasn't able to articulate the question well enough until now.
It says you can't write production code unless it's to get a failing unit test to pass, not that you can't write a test that passes from the get-go. The intent of the rule is to say "If you need to edit production code, make sure that you write or change a test for it first." Sometimes we write tests to prove a theory. The test passes and that disproves our theory. We don't then remove the test. However, we might (knowing that we have the backing of source control) break production code, to make sure that we understand why it passed when we didn't expect it to. If it turns out to be a valid and correct test, and it isn't duplicating an existing test, leave it there.
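As a concrete illustration, here is a hedged JUnit-style sketch; PriceCalculator and its behaviour are invented for the example, not taken from any real codebase. The test was written to check a theory, happened to pass immediately, and is kept as a regression guard:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Written to prove a theory: "zero-quantity orders get no discount and cost nothing".
    // It passed without touching production code, so no production change was needed,
    // but the test stays: it now pins down behaviour we care about.
    @Test
    public void zeroQuantityOrderCostsNothing() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(0, calc.totalFor(0, 100)); // quantity 0, unit price 100
    }
}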
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191349", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56945/" ] }
191,372
I've always had this struggle to get folks to update their issues, both at my company and at work. I've had a few cases where people actually do it out of the goodness of their heart, but ~70% of the time I have to be chasing people down. Being the one that generally does some form of management or other (I am first and foremost a developer), the main reason I give is that I don't want to be chasing people down and interrupting them to ask about progress, but I don't think that, in the end, people mind being asked all that much. In some rare and extreme cases I end up updating their tickets (when I need to create reports). So, have you run into this problem? How have you encouraged developers to update the issue tracker frequently? What degree of success have you had?
The reason is they don't grok why they should be updating the issue tracker, apart from the fact that you say so. Why is that? My guess is that updating the tracker doesn't affect their job in any meaningful way, so the solution is probably to implement a tracking system which actually helps them do their job better.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191372", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10869/" ] }
191,406
Java has
- int and Integer
- boolean and Boolean
This seems a bit inconsistent. Why not either
- bool vs Boolean, to use an established shorter name for the primitive type, or
- integer vs Integer, to keep type names consistent?
I think C++ decided to use bool quite a bit earlier than Java decided to use boolean, and maybe some (non-standard at the time?) C extensions did too, so there would have been historical precedent for bool. I've noticed I often instinctively try to use bool at first (good thing modern editors immediately spot this without an extra compilation round), so it'd be nice to know the rationale behind the current state of affairs. If someone remembers (a part of) the story, or can even find and link to a relevant historical discussion on the net, that would be great.
Without getting in contact with people who were actually involved in these design decisions, I think we're unlikely to find a definitive answer. However, based on the timelines of the development of both Java and C++, I would conjecture that Java's boolean was chosen before, or contemporaneously with, the introduction of bool to C++, and certainly before bool was in wide use. It is possible that boolean was chosen due to its longer history of use (as in Boolean algebra), or to match other languages (such as Pascal) which already had a boolean type.
Historical context
According to Evolving a language in and for the real world: C++ 1991-2006, the bool type was introduced to C++ in 1993. Java included boolean in its first release in 1995 (Java Language Specification 1.0). The earliest language specification I can find is the Oak 0.2 specification (Oak was later renamed to Java). That Oak specification is marked "Copyright 1994", but the project itself was started in 1991, and apparently had a working demo by the summer of 1992.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191406", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/70135/" ] }
191,443
As far as I know, most relational databases do not offer any driver-level API for queries, except a query function which takes an SQL string as an argument. I'm thinking how much easier it would be if one could do:

var result = mysql.select('article', {id: 3})

For joined tables, it would be slightly more complex, but still possible. For example:

var tables = mysql.join({tables: ['article', 'category'], on: 'categoryID'});
mysql.select(tables, {'article.id': 3}, ['article.title', 'article.body', 'category.categoryID'])

Cleaner code, no string parsing overhead, no injection problems, easier reuse of query elements... I can see a lot of advantages. Is there a specific reason why it was chosen to only provide access to queries through SQL?
Databases are out of process - they run on a different server usually. So even if you had an API, it would need to send something across the wire that represents your query and all of its projections, filters, groups, subqueries, expressions, joins, aggregate functions etc. That something could be XML or JSON or some proprietary format, but it may as well be SQL because that is tried, tested and supported. It is less common these days to build up SQL commands yourself - many people use some sort of ORM. Even though these ultimately translate into SQL statements, they may provide the API you are after.
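To make the ORM point concrete, here is a rough, hedged sketch of the "query as API" idea using JPA's Criteria API; the Article entity and its fields are invented to mirror the question's example, and error handling is omitted:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

public class ArticleRepository {
    private final EntityManager em;

    public ArticleRepository(EntityManager em) { this.em = em; }

    // Roughly equivalent to: SELECT * FROM article WHERE id = ?
    // (Article is assumed to be a mapped @Entity defined elsewhere.)
    public List<Article> findById(int id) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Article> query = cb.createQuery(Article.class);
        Root<Article> article = query.from(Article.class);
        query.select(article).where(cb.equal(article.get("id"), id));
        return em.createQuery(query).getResultList();
    }
}

Under the hood the JPA provider still translates this into an SQL string that is sent across the wire to the database.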
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191443", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55105/" ] }
191,531
What about confirming the functionality in positive tests, proving it is working - should I say it is a waste of time? What kind of concept is behind this quote? Unsuccessful tests, i.e. tests that do not find errors are a waste of time. Web Engineering: The Discipline of Systematic Development of Web Applications quoting Cem Kaner .
I wrote most of Testing Computer Software over 25 years ago. I've since pointed to several parts of the book that I consider outdated, or simply wrong. See http://www.kaner.com/pdfs/TheOngoingRevolution.pdf You can see more (current views, but without explicit pointers back to TCS) at my site for the Black Box Software Testing Course (videos and slides available for free), www.testingeducation.org/BBST The testing culture back then was largely confirmatory. In modern testing, the approach to unit testing is largely confirmatory--we write large collections of automated tests that simply verify that the software continues to perform as intended. The tests serve as change detectors--if something in other parts of the code and this part now has problems, or if data values that used to be impossible in the real world are now reaching the application, then the change detectors fire, alerting the programmer to a maintenance problem. I think the confirmatory mindset is appropriate for unit testing, but imagine a world in which all of system testing was confirmatory (for folks who make a distinction, please interpret "system integration testing" and "acceptance testing" as included in my comments on system testing.) The point of testing was to confirm that the program met its specifications and the dominant approach was to build a zillion (or at least a few hundred) system-level regression tests that mapped parts of the spec to behaviors of the program. (I think spec-to-behavior confirmation is useful, but I think it is a small portion of a larger objective.) There are still test groups that operate this way, but it is no longer the dominant view. Back then, it was. I wrote emphatically and drew sharp contrasts to make a point to people who were consistently being trained in this mindset. Today, some of the sharp contrasts (including the one quoted here) are outdated. They get misinterpreted as attacks on the wrong views. As I see it, software testing is an empirical process for learning quality-related information about a software product or service. A test should be designed to reveal useful information. Back then, by the way, no one talked about testing as a method for revealing "information". Back then, testing was either for (some version of ...) finding bugs or for (some version of ... ) verifying (checking) the program against specifications. I don't think that the assertion that tests are for revealing useful information came into the testing vocabulary until this century. Imagine rating tests in terms of their information value. A test that is very likely to teach us something we don't know about the software would have a very high information value. A test that is very likely to confirm something that we already expect and that has already been demonstrated many times before, would have a low information value. One way to prioritize tests is to run higher information value tests before lower information value tests. If I was to oversimplify this prioritization so that it would attract the attention of a programmer, project manager, or process manager who is clueless about software testing, I would say "A TEST THAT IS NOT DESIGNED TO REVEAL A BUG IS A WASTE OF TIME." It's not a perfect translation, but for readers who cannot or will not understand any subtlety or qualification, that's as close as it's going to get. 
Back then, and I see it again here, some of the people who don't understand testing would respond that a test designed to find corner cases is a waste of time compared to a test of a major use of a major function. They don't understand two things. First, by the time testers find time to check boundary values, the major uses of the major functions have already been exercised several times. (Yes, there are exceptions, and most test groups will pay careful attention to those exceptions.) Second, the reason to test with extreme values is that the program is more likely to fail with extreme values. If it doesn't fail at the extreme, you test something else. This is an efficient rule. On the other hand, if it DOES fail at an extreme value, the tester might stop and report a bug or the tester might troubleshoot further, to see whether the program fails in the same way at more normal values. Who does that troubleshooting (the tester or the programmer) is a matter of corporate culture. Some companies budget the tester's time for this, some budget the programmers, and some expect programmers to fix corner case bugs whether they are generalizable or not so that troubleshooting is not relevant. The common misunderstanding -- that testers are wasting time (rather than maximizing efficiency) by testing extreme values is another reason that "A test that is not designed to reveal a bug is a waste of time" is an appropriate message for testers. It's a counterpoint to the encouragement from some programmers to (in effect) never run tests that might challenge the program. The message is oversimplified, but the entire discussion is oversimplified. By the way, "information value" can't be the only prioritization system. It's not my rule when I design unit test suites. It's not my rule when I design build verification tests (aka sanity checks). In both of those cases, I'm more interested in types of coverage than in the power of the individual tests. There are other cases (e.g. high-volume automated tests that are cheap to set up, run and monitor) where power of individual tests is simply irrelevant to my design. I'm sure you can think of additional examples. But as a general rule, if I could state only one rule (e.g. speaking to an executive whose head explodes if he tries to process more than one sentence), it would be that a low information-value test is usually a waste of time.
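For instance, a boundary-value check of the kind described above might look like this in JUnit; QuantityField and its 1..100 range are made up purely for illustration:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class QuantityFieldTest {

    // High information value while the behaviour is unproven: failures are most
    // likely right at the edges of the accepted range.
    @Test
    public void valuesJustOutsideTheRangeAreRejected() {
        QuantityField field = new QuantityField(1, 100);
        assertFalse(field.accepts(0));
        assertTrue(field.accepts(1));
        assertTrue(field.accepts(100));
        assertFalse(field.accepts(101));
    }
}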
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191531", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60327/" ] }
191,597
Currently our company develops applications consisting, most of the time, in Ruby on Rails web servers and a bunch of different REST clients, from kiosk systems in Java to embedded devices in C/C++ (besides the interfaces for standard web browsers). We need to expand our team and, having failed at finding good senior programmers, we decided to put some effort into training junior programmers who would grow together with the company. We've already given them some Ruby and Rails books and asked them to build some toy programs, but I'm now realizing how steep the learning curve for the current state of web programming is. When I started programming 15 years ago I used only Delphi and Source Safe and was able to produce usable software right from the beginning. They were both simple tools and it was easy to delve into the inner workings of the environment. Slowly I started using third-party frameworks, have switched to CVS, SVN and finally Git, learned the pieces which make today's web, like HTTP, JavaScript, CSS, REST etc. Today, even after years of experience, I don't know as much about how Ruby on Rails works inside as I did in the past about Delphi, and for me that was important so I could connect the basic learning blocks to the tools I was using. It seems to me that the programmers I'm hiring will take a long time to integrate with the team and produce something usable, because there's so many things to learn to use a single framework (Rails): Ruby, HTML, CSS, JavaScript, REST, test-cases, database access (with SQL magically built inside the framework!), MVC, three different package managers (apt for Ubuntu, gem and bundler for Ruby), ssh, git, Apache and Phusion Passenger for deploying, etc. I'm feeling lost since it's the first time I need to deal directly with junior programmers. What is the best way to train junior programmers in today's best practices for web development when there are so many choices?
Many people won't like this idea, but I am advocating this wherever I can: regardless of the programming language and environment, if they don't have any experience and if there are maintenance tasks which come up from real world bug reports of customers of yours, try to make sure they get assigned to that kind of task at least for 30-40% (+) of their time. "Here's the bug report, have a look at it, solve it. If you don't know what it's all about, communicate with experienced colleagues, google it, whatever". Real work on real problems, no toys, at least: not only toys. Make sure, too, that someone with lots of experience has a look at what they were doing before it gets released and shipped to the customer, of course. Make sure the new colleague gets honest feedback on what he did from colleagues and customers. Choose these tasks carefully in order to not overburden them, but keep in mind that some day you want them to do their work independently. Doing bug fixing is learning on the job which lets them work on code which get's actually executed and has some relevance (otherwise there would be no bug reports) and will show them in many examples how to not do it. The focus is automatically put on pain points. They'll start learning those details which are actually causing trouble. It also puts real responsibility onto their shoulders right from the beginning, which (while maintenance as such is not really attractive) can be rather motivating if they get it done to the satisfaction of the customer/end user. Going through what they did will be taken more seriously by your seniors cause they know the impact if things go wrong, and that way it will also simplify integration into the team, cause it will make them talk to each other automatically, as well. The point is not to set them productive from the first moment (as it might look like). The point is to make sure that they know they are supposed to do something valuable right from the first moment, and to put a focus on what matters most without the need to actually create a list. I do have some years of experience working every now and then with people coming directly from college into their new developer job, and the worst results I got to see was usually when someone without at least some experience in maintenance was asked to do new application development. Just make sure they always have someone they can ask for support if they feel lost.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191597", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60458/" ] }
191,623
I have a project where I need to allow users to run arbitrary, untrusted python code ( a bit like this ) against my server. I'm fairly new to python and I'd like to avoid making any mistakes that introduce security holes or other vulnerabilities into the system. Are there any best-practices available, recommended reading, or other pointers you can give me make my service usable but not abusable? Here's what I've considered so far: Remove __builtins__ from the exec context to prohibit use of potentially dangerous packages like os . Users will only be able to use packages I provide to them. Use threads to enforce a reasonable timeout. I'd like to limit the total amount of memory that can be allocated within the exec context, but I'm not sure if it's even possible. There are some alternatives to a straight exec , but I'm not sure which of these would be helpful here: Using an ast.NodeVisitor to catch any attempt to access unsafe objects. But what objects should I prohibit? Searching for any double-underscores in the input. (less graceful than the above option). Using PyPy or something similar to sandbox the code. NOTE: I'm aware that there is at least one JavaScript-based interpreter. That will not work in my scenario.
Python sandboxing is hard . Python is inherently introspectable, at multiple levels. This also means that you can find the factory methods for specific types from those types themselves, and construct new low-level objects, which will be run directly by the interpreter without limitation. Here are some examples of finding creative ways to break out of Python sandboxes: Ned Batchelder starts with a demonstration how dangerous eval() really is ; eval() is often used to execute Python expressions; as a primitive and naive sandbox for one-liners. He then continued to try and apply the same principles to Python 3 , eventually succeeding to break out with some helpful pointers. Pierre Bourdon uses similar techniques to hack a python system at a hack-a-thon The basic idea is always to find a way to create base Python types; functions and classes and break out of the shell by getting the Python interpreter to execute arbitrary (unchecked!) bytecode. The same and more applies to the exec statement ( exec() function in Python 3). So, you want to: Strictly control the byte compilation of the Python code, or at least post-process the bytecode to remove any access to names starting with underscores. This requires intimate knowledge of how the Python interpreter works and how Python bytecode is structured. Code objects are nested; a module's bytecode only covers the top level of statements, each function and class consists of their own bytecode sequence plus metadata, containing other bytecode objects for nested functions and classes, for example. You need to whitelist modules that can be used. Carefully. A python module contains references to other modules. If you import os , there is a local name os in your module namespace that refers to the os module. This can lead a determined attacker to modules that can help them break out of the sandbox. The pickle module, for example, lets you load arbitrary code objects for example, so if any path through whitelisted modules leads to the pickle module, you have a problem still. You need to strictly limit the time quotas. Even the most neutered code can still attempt to run forever, tying up your resources. Take a look at RestrictedPython , which attempts to give you the strict bytecode control. RestrictedPython transforms Python code into something that lets you control what names, modules and objects are permissible in Python 2.3 through to 2.7. If RestrictedPython is secure enough for your purposes does depend on the policies you implement. Not allowing access to names starting with an underscore and strictly whitelisting the modules would be a start. In my opinion, the only truly robust option is to use a separate Virtual Machine, one with no network access to the outside world which you destroy after each run. Each new script is given a fresh VM instead. That way even if the code manages to break out of your Python sandbox (which is not unlikely) all the attacker gets access to is short-lived and without value.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191623", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81884/" ] }
191,637
Can anyone explain in detail how exactly the virtual table works and which pointers are involved when virtual functions are called? If they are actually slower, can you show that a virtual function call takes more time to execute than a normal class method? It is easy to lose track of what is happening without seeing some code.
Virtual methods are commonly implemented via so-called virtual method tables (vtable for short), in which function pointers are stored. This adds indirection to the actual call (gotta fetch the address of the function to call from the vtable, then call it -- as opposed to just calling it right ahead). Of course, this takes some time and some more code. However, it is not necessarily the primary cause of slowness. The real problem is that the compiler (generally/usually) cannot know which function will be called. So it can't inline it or perform any other such optimizations. This alone might add a dozen pointless instructions (preparing registers, calling, then restoring state afterwards), and might inhibit other, seemingly unrelated optimizations. Moreover, if you branch like crazy by calling many different implementations, you suffer the same hits you'd suffer from branching like crazy via other means: The cache and branch predictor won't help you, the branches will take longer than a perfectly predictable branch. Big but : These performance hits are usually too tiny to matter. They're worth considering if you want to create a high-performance code and consider adding a virtual function that would be called at alarming frequency. However, also keep in mind that replacing virtual function calls with other means of branching -- if .. else , switch , function pointers, etc. -- won't solve the fundamental issue, and may very well reduce performance. Eliminating unnecessary indirection, whether due to virtual functions or other control flow statements, improves performance. Edit: The difference in the call instructions is described in other answers. Basically, the code for a static ("normal") call is: Copy some registers on the stack, to allow the called function to use those registers. Copy the arguments into predefined locations, so that the called function can find them regardless from where it is called. Push the return address. Branch/jump to the function's code, which is a compile-time address and hence hardcoded in the binary by the compiler/linker. Get the return value from a predefined location and restore registers we want to use. A virtual call does exactly the same thing, except that the function address is not known at compile time. Instead, a couple of instructions ... Get the vtable pointer, which points to an array of function pointers (function addresses), one for each virtual function, from the object. Get the right function address from the vtable into a register (the index where the correct function address is stored is decided at compile-time). Jump to the address in that register, rather than jumping to a hardcoded address. As for branches: A branch is anything which jumps to another instruction instead of just letting the next instruction execute. This includes if , switch , parts of various loops, function calls, etc. and sometimes the compiler implements things that don't seem to branch in a way that actually needs a branch under the hood. See Why is processing a sorted array faster than an unsorted array? for why this may be slow, what CPUs do to counter this slowdown, and how this isn't a cure-all.
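To make the indirection concrete, here is a minimal C++ sketch (class names are hypothetical, not taken from the answer) contrasting a call that must go through the vtable with an ordinary call. The dynamic dispatch only happens when the call is made through a base pointer or reference, and the compiler generally cannot inline it because the target is unknown until runtime.

```cpp
#include <cstdio>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const { return 0.0; }   // dispatched via the vtable
    double cachedArea() const { return cached; }  // ordinary call, resolved at compile time
    double cached = 0.0;
};

struct Circle : Shape {
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265358979 * r * r; }
    double r;
};

double total(const Shape& s) {
    // The compiler emits: load the vtable pointer from s, load the slot for
    // area(), then call through that pointer -- inside this function it cannot
    // know whether s is a Circle or some other derived type.
    return s.area();
}

int main() {
    Circle c{2.0};
    std::printf("%f\n", total(c));       // virtual dispatch through the base reference
    std::printf("%f\n", c.cachedArea()); // non-virtual call, trivially inlinable
}
```

When the concrete type is visible (as with `c.cachedArea()` above, or when optimizers can prove the dynamic type), the call can be devirtualized, which is one reason the cost usually only matters at very hot call sites.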
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191637", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/84610/" ] }
191,738
OK, so I paraphrased. The full quote: The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. -- Alan Kay. I am trying to understand the history of the Internet and the web, and this statement is hard to understand. I have read elsewhere that the Internet is now used for very different things than it was designed for, and so perhaps that factors in. What makes the Internet so well done, and what makes the web so amateurish? (Of course, Alan Kay is fallible, and no one here is Alan Kay, so we can't know precisely why he said that, but what are some possible explanations?) *See also the original interview *.
He actually elaborates on that very topic on the second page of the interview. It's not the technical shortcomings of the protocol he's lamenting, it's the vision of web browser designers. As he put it: You want it to be a mini-operating system, and the people who did the browser mistook it as an application. He gives some specific examples, like the Wikipedia page on a programming language being unable to execute any example programs in that language, and the lack of WYSIWYG editing, even though it was available in desktop applications long before the web existed. 23 years later, and we're just barely managing to start to work around the limitations imposed by the original web browser design decisions.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191738", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58425/" ] }
191,858
First, some context (stuff that most of you know anyway): Every popular programming language has a clear evolution, most of the time marked by its version: you have Java 5, 6, 7 etc., PHP 5.1, 5.2, 5.3 etc. Releasing a new version makes new APIs available, fixes bugs, adds new features, new frameworks etc. So all in all: it's good. But what about the language's (or platform's) problems? If and when there's something wrong in a language, developers either avoid it (if they can) or they learn to live with it. Now, the developers of those languages get a lot of feedback from the programmers that use them. So it kind of makes sense that, as time (and version numbers) goes by, the problems in those languages will slowly but surely go away. Well, not really. Why? Backwards compatibility, that's why. But why is this so? Read below for a more concrete situation. The best way I can explain my question is to use PHP as an example: PHP is loved, and hated by thousands of people. All languages have flaws, but apparently PHP is special. Check out this blog post . It has a very long list of so called flaws in PHP. Now, I'm not a PHP developer (not yet), but I read through all of it and I'm sure that a big chunk of that list are indeed real issues. (Not all of it, since it's potentially subjective). Now, if I was one of the guys who actively develops PHP, I would surely want to fix those problems, one by one. However, if I do that, then code that relies on a particular behavior of the language will break if it runs on the new version. Summing it up in 2 words: backwards compatibility. What I don't understand is: why should I keep PHP backwards compatible? If I release PHP version 8 with all those problems fixed, can't I just put a big warning on it saying: "Don't run old code on this version !"? There is a thing called deprecation. We had it for years and it works. In the context of PHP: look at how these days people actively discourage the use of the mysql_* functions (and instead recommend mysqli_* and PDO). Deprecation works. We can use it. We should use it. If it works for functions, why shouldn't it work for entire languages? Let's say I (the developer of PHP) do this: Launch a new version of PHP (let's say 8) with all of those flaws fixed New projects will start using that version, since it's much better, clearer, more secure etc. However, in order not to abandon older versions of PHP, I keep releasing updates to it, fixing security issues, bugs etc. This makes sense for reasons that I'm not listing here. It's common practice: look for example at how Oracle kept updating version 5.1.x of MySQL, even though it mostly focused on version 5.5.x. After about 3 or 4 years, I stop updating old versions of PHP and leave them to die. This is fine, since in those 3 or 4 years, most projects will have switched to PHP 8 anyway. My question is: Do all these steps make sense? Would it be so hard to do? If it can be done, then why isn't it done? Yes, the downside is that you break backwards compatibility. But isn't that a price worth paying ? As an upside, in 3 or 4 years you'll have a language that has 90 % of its problems fixed.... a language much more pleasant to work with. Its name will ensure its popularity. EDIT : OK, so I didn't expressed myself correctly when I said that in 3 or 4 years people will move to the hypothetical PHP 8. What I meant was: in 3 or 4 years, people will use PHP 8 if they start a new project.
It sounds fine, but rarely works out in practice; people are extremely reluctant to change running code, and even for new, green-field projects they are very reluctant to switch way from a language/version that they already know. Changing existing, running code that "works fine" is not something that ranks high on any project's priority list. Rather than applying effort to things that the managers thought had been paid for already, just to be able to upgrade to a newer release of a language or platform, they will decree that the developers should just stay on the old release "for now". You can try to entice your users with great features only available in the new release, but it's a gamble where you risk decreasing your user base for no clear gain for the language; cool, modern features cannot easily be weighed against the price of fragmented installation base in popular opinion, and you run the risk of getting a reputation for being an "upgrade treadmill" that requires constant effort to keep running when compared to more relaxed languages/platforms. (Obviously, most of this doesn't apply to projects written by hobbyists just for their own pleasure. However (here be flamebait...) PHP is disproportionally rarely chosen by hackers because it's such a pleasure to write with in the first place.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191858", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46212/" ] }
191,913
I was recently talking with a recruiter who wants to put me at a company for a position of Developer in Test. He essentially made it sound like a position where you get to fiddle with new programming techniques and test bugs and improvements in software but where you don't need to worry about standard deadlines. You get to be very creative in your work. But that description was still kinda vague to me. I have been a Web Developer for a number of years now, mostly working in PHP. So I wanted to know if others in the community know more about what these positions typically entail. I know that this might not be a subject appropriate for this forum, but it was the best fit I could find among Stack Exchange and I would really appreciate it if this wasn't closed since there is really no where else here to ask about it. I have tried Googling it, but there isn't a lot of information out there. So what exactly is a Developer in Test?
I am a Software Development Engineer in Test, and have been at 2 separate companies. Currently I work for Microsoft. Broadly speaking, Bryan Oakley is correct: you write software that tests software. Beyond that, it depends on your level of experience, the scope of your responsibilities, and the type of software that the employer would be producing. An SDET position can include writing anything from the basics of feature level verification tests, to writing and maintaining test infrastructure to run those tests. It's also not uncommon to have SDETS that specialize in focused testing for certain types of requirements (testing security, performance/scale, usability, etc. are examples that immediately spring to mind). The description that you received from the recruiter sounds like a poor selling technique. You're not fiddling; you have n days to get automated test coverage over x features deployed in y different supported environments in z languages. Oh, btw: those tests have to run fast enough for the devs to have a quick dev/test cycle because... No standard deadlines? You're in charge of the quality of the product and the release date was set by marketing 6 months ago. The dev team is 6 weeks late delivering a stable build to your test team, and the company isn't pushing that release date (again). Is the product or service stable enough to release to a couple million (billion?) people, on the same day? ...and if ( when ) customers call in with problems... "Why (the hell) didn't you catch it first?" I hope that gives you a bit of an example of what being an SDET is like.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191913", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/23882/" ] }
191,920
I have an event(s) controller: class Event extends CI_Controller{ public function index(){ } public function foo(){ } //Shouldn't be able to use this method unless logged in public function bar(){ } } And I'm trying to organise my code so it's fairly tidy and straightforward. Just now I have a controller named MY_Controller so that only authenticated users can access the methods( edit_event() , add_event() ) of any controllers extending it. However, some of the methods in my controller need to be accessed by unauthenticated users (such as get_event() ). What is a good way of handling this? Should I make two completely separate controllers or extend from the basic event controller and add authenticated methods? Previously I've had a manager controller that handled all methods which required authentication such as add_user , delete_user , add_doc , delete_doc . But it became blotted very quickly and wasn't easy to update or modify the controller (plus it was messy and didn't seem to follow good programming etiquette).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191920", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27116/" ] }
191,921
I am currently building a website which, until quite recently, was purely PHP. However, I am now trying to have the site use more AJAX to lessen the page reloads. In PHP I had a lovely object-oriented user class with methods for updating data, logging out, and so on. When a user logs on, this object would be stored as a session variable, and then any page that needs the user could just grab it from the session and call its methods. Clearly this use of PHP objects and the session doesn't really work with AJAX. However, I don't want to have to scrap storing the user in an object (which neatens things up somewhat), so I don't just want to go down the route of defining a ton of JS functions that grab the current username and use it to do MySQL queries through AJAX. Am I being stupid here, and what route would people recommend I take?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191921", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/85474/" ] }
191,961
A bug tracker for any decent-sized project seems like a bit of a no-brainer to me - it makes it really easy to organise hundreds or thousands of issues without them colliding or getting mixed up. So when I see some really big projects, like Git, using a mailing list as the main method of coordinating maintenance and development, I get a bit blown away. Examples: Git - Community page: ...Bug reports should be sent to this mailing list. Debian bug tracking system, per Wikipedia: ...Its unique feature is that it doesn't have any form of web interface to edit bug reports - all modification is done through email. Many modern bug trackers have very good integration with email (you can receive comments or notifications about bugs you're watching, or that get assigned to you), as well as with version control systems (commits can be marked as resolving an issue, etc.). Much of this would have to be done manually with a mailing list, and you get tons of emails about bugs you're not interested in. So what are the main advantages of a mailing list over a web-based bug tracker? Why do some big projects only use a mailing list?
The preference you observe looks like a natural consequence of recommendation clearly stated in GNU Coding Standards . It suggests to report bugs by email, as you can see in below quote (I marked bold the part that directly addresses your observations): 4.7.2 --help The standard --help option should output brief documentation for how to invoke the program, on standard output, then exit successfully. Other options and arguments should be ignored once this is seen, and the program should not perform its normal function. Near the end of the ‘--help’ option’s output, please place lines giving the email address for bug reports , the package’s home page (normally ‘http://www.gnu.org/software/pkg’ , and the general page for help using GNU programs. The format should be like this: Report bugs to: mailing-address pkg home page: <http://www.gnu.org/software/pkg/> General help using GNU software: <http://www.gnu.org/gethelp/> It is ok to mention other appropriate mailing lists and web pages. Above preference, in turn, reflects universal acceptance of email as a form of electronic communication. Any user reading --help message like suggested above is supposed to easily understand what to do if they see a bug - mailing is easy. Issue tracker might be (and I think is ) better for a developer working in the project, but for a wider audience it would be harder to present and explain how to use it, especially taking into account wide variety and differences between different issue tracking systems . One project can use Bugzilla, another will stick with JIRA, third with... GNATS , etc etc, etc. There's just no way to present all this "zoo" in a way that would be as standard and uniform as Report bugs to: mailing-address Note above doesn't mean that projects shouldn't be using issue tracker internally . As explained in an excellent answer to related question , Your bug tracker is for your convenience, not your customers'. If you can't be bothered to take their phone or email issue and enter it yourself, how do you think they feel? You need to be able to enter issues and assign them manually to a client...
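As a concrete illustration of the quoted convention, a command-line tool typically prints those lines itself when invoked with --help. Here is a minimal C++ sketch; the package name and addresses are placeholders, not from the answer:

```cpp
#include <cstring>
#include <cstdio>

// Hypothetical program following the GNU --help convention quoted above.
static void printHelp(const char* prog) {
    std::printf("Usage: %s [OPTION]... [FILE]...\n", prog);
    std::printf("Do something useful with FILEs.\n\n");
    std::printf("  --help     display this help and exit\n");
    std::printf("  --version  output version information and exit\n\n");
    // The bug-reporting address goes near the end, as the standard suggests.
    std::printf("Report bugs to: <bug-example@example.org>\n");
    std::printf("pkg home page: <http://www.gnu.org/software/pkg/>\n");
    std::printf("General help using GNU software: <http://www.gnu.org/gethelp/>\n");
}

int main(int argc, char** argv) {
    for (int i = 1; i < argc; ++i) {
        if (std::strcmp(argv[i], "--help") == 0) {
            printHelp(argv[0]);
            return 0;  // ignore other options once --help is seen
        }
    }
    // ... normal program behaviour would go here ...
    return 0;
}
```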
{ "source": [ "https://softwareengineering.stackexchange.com/questions/191961", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/47367/" ] }
192,027
New to C++! So I was reading this: http://www.learncpp.com/cpp-tutorial/110-a-first-look-at-the-preprocessor/ Header guards Because header files can include other header files, it is possible to end up in the situation where a header file gets included multiple times. So we make preprocessor directives to avoid this. But I'm not sure - why can't the compiler just... not import the same thing twice? Given that header guards are optional (but apparently a good practice), it almost makes me think that there are scenarios when you do want to import something twice. Although I can't think of any such scenario at all. Any ideas?
They can, as shown by newer languages that do. But a design decision was made all those years ago (when the C compiler was multiple independent stages), and now, to maintain compatibility, the preprocessor has to act in a certain way to make sure old code compiles as expected. As C++ inherits the way it processes header files from C, it maintained the same techniques. We are supporting an old design decision, but changing the way it works is too risky: lots of code could potentially break. So now we have to teach new users of the language how to use include guards. There are a couple of tricks with header files where you deliberately include them multiple times (this does actually provide a useful feature; see the sketch below). Though if we redesigned the paradigm from scratch, we could make this the non-default way to include files.
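For readers new to C++, a minimal sketch of what an include guard looks like in practice (file and macro names are illustrative only):

```cpp
// widget.h
// A classic include guard: the first #include defines WIDGET_H, so any
// later #include of the same file skips the body instead of redefining
// the class (which would be a compile error).
#ifndef WIDGET_H
#define WIDGET_H

class Widget {
public:
    int id() const { return id_; }
private:
    int id_ = 0;
};

#endif // WIDGET_H
```

Most current compilers also accept `#pragma once` as a non-standard but widely supported alternative. The deliberate-multiple-inclusion trick mentioned above is typically the X-macro idiom: a guard-less list file is intentionally included several times with different macro definitions to generate, say, both an enum and a matching array of names.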
{ "source": [ "https://softwareengineering.stackexchange.com/questions/192027", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13833/" ] }
192,044
I have been seeing a lot of projects that have repositories that return instances of IQueryable . This allows additional filters and sorting can be performed on the IQueryable by other code, which translates to different SQL being generated. I am curious where this pattern came from and whether it is a good idea. My biggest concern is that an IQueryable is a promise to hit the database some time later, when it is enumerated. This means that an error would be thrown outside of the repository. This could mean an Entity Framework exception is thrown in a different layer of the application. I have also run into issues with Multiple Active Result Sets (MARS) in the past (especially when using transactions) and this approach sounds like it would lead to this happening more often. I have always called AsEnumerable or ToArray at the end of each of my LINQ expressions to make sure the database is hit before leaving the repository code. I am wondering if returning IQueryable could be useful as a building block for a data layer. I have seen some pretty extravagant code with one repository calling another repository to build an even bigger IQueryable .
Returning IQueryable will definitely afford more flexibility to the consumers of the repository. It puts the responsibility of narrowing results off to the client, which naturally can both be a benefit and a crutch. On the good side, you won't need to be creating tons of repository methods (at least on this layer) — GetAllActiveItems, GetAllNonActiveItems, etc — to get the data you want. Depending on your preference, again, this could be good or bad. You will (/should) need to define behavioral contracts which your implementations adhere to, but where that goes is up to you. So you could put the gritty retrieval logic outside the repository and let it be used however the user wants. So exposing IQueryable gives the most flexibility and allows for efficient querying as opposed to in-memory filtering, etc, and could reduce the need for making a ton of specific data fetching methods. On the other hand, now you have given your users a shotgun. They can do things which you may not have intended (overusing .include(), doing heavy heavy queries and doing in-memory filtering in their respective implementations, etc), which would basically side-step the layering and behavioral controls because you have given full access. So depending on the team, their experience, the size of the app, the overall layering and architecture … it depends :-\
{ "source": [ "https://softwareengineering.stackexchange.com/questions/192044", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/79611/" ] }
192,113
I'm developing a .Net application that uses google protocol buffers. Historically the application used the approach, advocated by the protobuf-net team, of decorating the classes with attributes instead of using .proto files. I am now in the process of migrating part of the application's client to another technology and there is a strong desire to start using the .proto files as the authority so that the two technologies can inter-operate. My plan is to automatically generate the C# from the .proto, however, my question is should I check the resulting files back into source control? Note I expect the process of code generation to be fast. How do I choose appropriate approach for above case? What is considered the best practice?
As a general rule, generated files do not belong in the source code repository. The biggest risk you run when you do put those files in the repository is that they become out of sync with their source and the build runs with different protocol buffer files than you would think based on the .proto files. A few reasons for deviating from the general rule are Your build environment can't handle the additional build step automatically. If you have to generate the files by hand for every build anyway, you might as well put them in the VCS. Then it actually reduces the risk of a version mismatch due to fewer people having to do the generation step. The generation step significantly slows down your build After generating them, the files are further modified by hand. In that case, they are not really generated files any more and thus belong in the VCS. The source files change very rarely (e.g. they come from a third party that only provides updates every few months or so).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/192113", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37972/" ] }
193,218
During a recent project I've been working on, I've had to use a lot of functions that kind of look like this: static bool getGPS(double plane_latitude, double plane_longitude, double plane_altitude, double plane_roll, double plane_pitch, double plane_heading, double gimbal_roll, double gimbal_pitch, double gimbal_yaw, int target_x, int target_y, double zoom, int image_width_pixels, int image_height_pixels, double & Target_Latitude, double & Target_Longitude, double & Target_Height); So I want to refactor it to look something like this: static GPSCoordinate getGPS(GPSCoordinate plane, Angle3D planeAngle, Angle3D gimbalAngle, PixelCoordinate target, ZoomLevel zoom, PixelSize imageSize) This appears to me to be significantly more readable and safe than the first method. But does it make sense to create PixelCoordinate and PixelSize classes? Or would I be better off just using std::pair<int,int> for each. And does it make sense to have a ZoomLevel class, or should I just use a double ? My intuition behind using classes for everything is based on these assumptions: If there are classes for everything, it would be impossible to pass a ZoomLevel in where a Weight object was expected, so it would be more difficult to provide the wrong arguments to a function Likewise, some illegal operations would cause compile errors, such as adding a GPSCoordinate to a ZoomLevel or another GPSCoordinate Legal operations will be easy to represent and typesafe. i.e subtracting two GPSCoordinate s would yield a GPSDisplacement However, most C++ code I've seen uses a lot of primitive types, and I imagine there must be a good reason for that. Is it a good idea to use objects for anything, or does it have downsides that I am not aware of?
Yes, definitely. Functions/methods that take too many arguments is a code smell , and indicates at least one of the following: The function/method is doing too many things at once The function/method requires access to that many things because it's asking, not telling or violating some OO design law The arguments are actually closely related If the last one is the case (and your example certainly suggests so), it's high time to do some of that fancy-pants "abstraction" that the cool kids were talking about oh, just some decades ago. In fact, I'd go further than your example and do something like: static GPSCoordinate getGPS(Plane plane, Angle3D gimbalAngle, GPSView view) (I am not familiar with GPS so this is probably not a correct abstraction; for instance, I don't understand what zoom has to do with a function called getGPS .) Please prefer classes like PixelCoordinate over std::pair<T, T> , even if the two have the same data. It adds semantic value, plus a bit of extra type safety (compiler will stop you from passing a ScreenCoordinate to a PixelCoordinate even though both are pair s.
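A rough sketch of what those small value types might look like in C++ (names and fields are guesses based on the question, not a known-correct GPS model); even trivial wrappers like these buy the compile-time checks the question hopes for:

```cpp
struct GPSCoordinate {
    double latitude;
    double longitude;
    double altitude;
};

struct Angle3D {
    double roll;
    double pitch;
    double yaw;
};

struct PixelCoordinate {
    int x;
    int y;
};

struct PixelSize {
    int width;
    int height;
};

// A thin wrapper instead of a bare double: you cannot accidentally pass a
// weight or a heading where a zoom level is expected.
struct ZoomLevel {
    explicit ZoomLevel(double value) : value(value) {}
    double value;
};

GPSCoordinate getGPS(const GPSCoordinate& plane,
                     const Angle3D& planeAngle,
                     const Angle3D& gimbalAngle,
                     const PixelCoordinate& target,
                     ZoomLevel zoom,
                     const PixelSize& imageSize);
```

The explicit constructor on ZoomLevel is the important part: it stops an unrelated double from silently converting into a zoom at a call site.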
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193218", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/26937/" ] }
193,244
I am looking for a recommendation of a best practice for XML comments in C#. When you create a property, it seems that the expected XML documentation has the following form: /// <summary> /// Gets or sets the ID that uniquely identifies this <see cref="User" /> instance. /// </summary> public int ID { get; set; } But since the signature of the property already tells you what operations are available to the external clients of the class (in this case it is both get and set ), I feel like the comments are too chatty and that perhaps the following would be sufficient: /// <summary> /// ID that uniquely identifies this <see cref="User" /> instance. /// </summary> public int ID { get; set; } Microsoft uses the first form, so it seems like it is an implied convention, but I think that the second one is better for the reasons I stated. I understand that this question is a candidate for being marked as not constructive, but the number of properties that one has to comment is huge, and so I believe that this question has its right to be here. I will appreciate any ideas or links to official recommended practices.
The signature may tell other pieces of code what operations are available; however, they are not clearly shown to the coder as he or she is working and XML documentation is meant for people to consume and not a compiler. Take this class for example: public class MyClass { /// <summary> /// The first one /// </summary> public int GetOrSet { get; set; } /// <summary> /// The second one /// </summary> public int GetOnly { get; private set; } /// <summary> /// The last one /// </summary> public int SetOnly { set; private get; } } When intellisense is pulled up to access one of these properties there is no indication which ones can be written to, read from, or both: Likewise when viewing the documentation we also aren't quite sure: As such we add the gets or sets , gets , or sets to make it easier on the programmer while writing the code. It certainly would not be write a large block of code that reads and processes some data only to find out that you cannot write that data back to the property as expected.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193244", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/85526/" ] }
193,282
The upper management at our company has laid out a goal for our software team to be “15% more productive” over the next year. Measuring productivity in a software development environment is very subjective, but we are still required to come up with a set of metrics. What sorts of data can we capture that would measure our team’s productivity?
I try very hard not to write non-answers on this site but I do believe that, in this case, I have to. It's the only right answer. But I'll try to help you out with more than a quip and a "you can't." In all seriousness, there is no valid measure of developer productivity. I know this is hard for managers to cope with, but it's a fact. Refer them to a few links from people very experienced in the field. For a couple of examples: Martin Fowler So not just is business value hard to measure, there's a time lag too. So maybe you can't measure the productivity of a team until a few years after a release of the software they were building. I can see why measuring productivity is so seductive. If we could do it we could assess software much more easily and objectively than we can now. But false measures only make things worse. This is somewhere I think we have to admit to our ignorance. Joel Spolsky Let's start with plain old productivity. It's rather hard to measure programmer productivity; almost any metric you can come up with (lines of debugged code, function points, number of command-line arguments) is trivial to game , and it's very hard to get concrete data on large projects because it's very rare for two programmers to be told to do the same thing. Also ask them who is responsible for that increase. What measures are they allowed to take ? It is my experience that managers set these goals because they have zero clue what goals to set with respect to development teams. Maybe you can help them out with that. Explain to them that you (or the team) want to take your targets seriously, but they have to be SMART or they're meaningless. Suggest to them some targets which are SMART. do you have a build/CI server ? If not, setting one up is a SMART goal. If so, do you have some way of displaying quality statistics ? If not, setting that up is also a SMART goal. If so, then you have something that's very measurable: code quality. Maybe bringing your technical-debt rating down is a SMART goal, which will in turn improve productivity, unless they're assuming that people are slacking off, in which case you have an entirely different problem to solve: visibility. Help them to give you targets you can actually achieve. There's no satisfaction in having goals that cannot be proven or disproven a year from now, or where you'll be wasting time gaming the system rather than improving it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193282", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51152/" ] }
193,332
Some programming languages like e.g. Scala have the concept of Option types (also called Maybe ), which can either contain a value or not. From what I've read about them they are considered widely to be a superior way of dealing with this issue than null , because they explicitly force the programmer to consider the cases where there might not be a value instead of just blowing up during runtime. Checked Exceptions in Java on the other hand seem to be considered a bad idea, and Java seems to be the only widely used language that implements them. But the idea behind them seems to be somewhat similar to the Option type, to explicitly force the programmer to deal with the fact that an exception might be thrown. Are there some additional problems with checked Exceptions that Option types don't have? Or are these ideas not as similar as I think, and there are good reasons for forcing explicit handling for Options and not for Exceptions?
Because Options are composable. There are a lot of useful methods on Option that allow you to write concise code while still allowing precise control of the flow: map, flatMap, toList, flatten and more. This is due to the fact that Option is a particular kind of monad, a kind of object that we know very well how to compose. If you did not have these methods and had to pattern match on Option all the time, or call isDefined often, they would not be nearly as useful. Instead, while checked exceptions do add some safety, there is not much you can do with them other than catching them or letting them bubble up the stack (with the added boilerplate in the type declaration).
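The same composability argument shows up outside Scala; for instance, C++23's std::optional gained analogous monadic operations (and_then, transform, or_else), which is what lets the "maybe missing" case thread through a pipeline without explicit branching at every step. A rough sketch, assuming a C++23 compiler:

```cpp
#include <optional>
#include <string>
#include <charconv>

// Parse an integer, returning nothing on failure instead of throwing.
std::optional<int> parseInt(const std::string& s) {
    int value = 0;
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc{} || ptr != s.data() + s.size())
        return std::nullopt;
    return value;
}

std::optional<int> halve(int n) {
    if (n % 2 != 0) return std::nullopt;
    return n / 2;
}

int describe(const std::string& input) {
    // Each step only runs if the previous one produced a value;
    // the "absent" case flows through without any if/else noise.
    return parseInt(input)
        .and_then(halve)                          // optional<int> -> optional<int>
        .transform([](int n) { return n * 10; })  // map over the contained value
        .value_or(-1);                            // collapse at the very end
}
```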
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193332", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7858/" ] }
193,415
I always try to follow the DRY principle strictly at work; every time I've repeated code out of laziness it bites back later when I need to maintain that code in two places. But often I write small methods (maybe 10 - 15 lines of code) that need to be reused across two projects that can't reference each other. The method might be something to do with networking / strings / MVVM etc. and is a generally useful method not specific to the project it originally sits in. The standard way to reuse this code would be to create an independent project for the reusable code and reference that project when you need it. The problem with this is we end up in one of two less-than-ideal scenarios: We end up with tens/hundreds of tiny projects - each to house the little classes/methods which we needed to reuse. Is it worth creating a whole new .DLL just for a tiny bit of code? We end up with a single project holding a growing collection of unrelated methods and classes. This approach is what a company I used to work for did; they had a project named base.common which had folders for things like I mentioned above: networking, string manipulation, MVVM etc. It was incredibly handy, but referencing it needlessly dragged with it all the irrelevant code you didn't need. So my question is: How does a software team best go about reusing small bits of code between projects? I'm interested particularly if anyone has worked at a company that has policies in this area, or that has come across this dilemma personally as I have. note: My use of the words "Project", "Solution" and "Reference" come from a background in .NET development in Visual Studio. But I'm sure this issue is language and platform independent.
If they really are reusable methods / classes, you could write them into a small number of 'Swiss Army Knife' libraries. We do this quite often at my company; we call them framework libraries: Framework.Data - Utilities for working with database queries. Framework.ESB - Standard methods for interacting with our enterprise service bus Framework.Logging - Unified loging system Framework.Services - Utilities for interacting with web services Framework.Strings - Utilities for advanced string manipulation / fuzzy string searching etc. ... In all, there are about a dozen or so libraries. You can really distribute the code however you see fit, so you don't have to end up with hundreds or dump everything into one giant assembly. I find this approach fits because only some of our projects will need Framework.Data and only a few will ever need Framework.Strings , so consumers can select only those parts of the framework that are relevant to their particular project. If they're really just snippets, and not actual methods / classes that can be easily reused, you could try just distributing them as code snippets into the IDE (e.g. Visual Studio Code Snippets ). Teams I've worked with in the past had a common snippet library that made it easier for everyone to follow our standard coding practices with internal code as well.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193415", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81106/" ] }
193,507
Is the following guaranteed to return true for all numerical and non-zero values of x? bool IsRoundTrip(double x) { double y = 1 / (1 / x); return x == y; } What conditions would cause a discrepancy?
To simplify things by defining a concrete implementation, I will assume (as other answers do) that we're talking about IEEE 754 64-bit floating point. Each floating point number has three parts: a sign, an exponent, and a mantissa. (Technical details about hidden bits are irrelevant to this discussion). Reciprocation doesn't affect the sign 1 / (2**e * m) = (1 / 2**e) * (1 / m) = 2**-e * (1 / m) , so there are two ways in which the double-reciprocation can fail to provide a fixpoint. The easy one is that the exponent can be an extreme value such that we move from a denormalised number to one which overflows. The second is that the mantissa can be a non-fixpoint of the double-reciprocation. I wrote a simple program to test random mantissas: import java.util.Random; strictfp class RoundTrip { public static void main(String[] args) { long one = Double.doubleToLongBits(1.0); Random rnd = new Random(); for (int i = 1; i < 1<<30; i++) { long mantissa = rnd.nextLong() & 0xfffffffffffffL; double x = Double.longBitsToDouble(one + mantissa); double y = 1 / (1 / x); if (x != y) { System.out.println(Long.toHexString(one + mantissa)); System.out.println(x); System.out.println(y); break; } } } } It quickly gave some output: 3ffeca41c09ebb2b 1.9243791126461456 1.9243791126461458 The program can be expected to find an answer if as few as 1 in 2**30 mantissas fail. With a slight modification, I found that about 17.15% of mantissas fail. Slightly handwavy analysis: There are 2**52-1 mantissas covering the open range (1, 2) , and they're uniformly spaced. The same uniformly spaced mantissas cover the open range (0.5, 1) , which contains the reciprocals. Note that in this range one unit in the last place (1ulp), i.e. the difference between consecutive values, has an absolute value half that of the ulp in the range (1, 2) . But reciprocation isn't a linear operation, so in some parts of the range the density of values required is higher than in others. Therefore we expect that the reciprocation will not be injective. Suppose values x and x+dx , both in (1, 2) , differ by 1ulp. If they map to the same reciprocal mantissa, at most one of them can round-trip. What is the probability of this collision? x^-1 differentiates to -x^-2 , so the difference between 1/x and 1/(x+dx) is approximately -dx/x^2 , or -2dx/x^2 ulps, so a difference of one ulp before reciprocation gives a difference of -2/x^2 ulps after reciprocation. Given that the separation between two exactly representable values is 1ulp (by definition), and assuming (for simplification) no particular alignment between mantissas and reciprocal mantissas, we can estimate the probability of a collision as max(0, 1 - 2/x^2) , and we can approximate the proportion of collisions as \int_1^2 max(0, 1 - 2/x^2) dx = \int_{\sqrt 2}^2 (1 - 2/x^2) dx = 3 - 2\sqrt 2 is approximately 0.1716. This is in very good agreement with my empirical results for the proportion of mantissas that don't round-trip, so it seems reasonable to hypothesise that a mantissa will round-trip unless its reciprocal collides with that of another mantissa, in which case only one of the two will round-trip.
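Restating the closing computation in clean notation (same numbers, just formatted), the estimated proportion of colliding mantissas is:

```latex
\int_1^2 \max\!\left(0,\; 1 - \frac{2}{x^2}\right) dx
  = \int_{\sqrt{2}}^{2} \left(1 - \frac{2}{x^2}\right) dx
  = \left[\, x + \frac{2}{x} \,\right]_{\sqrt{2}}^{2}
  = 3 - 2\sqrt{2}
  \approx 0.1716
```

which agrees with the empirically measured failure rate of roughly 17.15% quoted above.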
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193507", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33490/" ] }
193,563
I am considering using sourceforge, bitbucket or github for managing source control for my business. I have open projects and I participate in open projects such as gcc. But I also have a business where I develop closed-source software for my living. How trustworthy are sourceforge, github or bitbucket in terms of keeping software secure from prying eyes? How stable is the hosting in terms of data loss prevention? Has anyone out there based their business logic with such an outfit? Has anyone out there surveyed several of the hosting solutions?
There's no good, standard, way to evaluate the security of providers like this. Stability you can see, somewhat, but security is pretty much impossible to evaluate from the outside. I'd actually talk to the providers you are considering about their security guarantees, and look at their contracts - if they don't make any guarantees, or if their contracts are riddled with 'we can not be held responsible' clauses, then that tells you how seriously they take security, and how much help you can expect if something goes pear shaped. Also, don't evaluate this in a vacuum - think about what it would take for you to run your OWN servers, and how much effort and overhead that would take, and how likely you are to screw it up (leaving a massive security hole) by doing it in house.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193563", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/29882/" ] }
193,638
Every competent Java programmer knows that you need to use String.equals() to compare a string, rather than == because == checks for reference equality. When I'm dealing with strings, most of the time I'm checking for value equality rather than reference equality. It seems to me that it would be more intuitive if the language allowed string values to be compared by just using ==. As a comparison, C#'s == operator checks for value equality for string s. And if you really needed to check for reference equality, you can use String.ReferenceEquals. Another important point is that Strings are immutable, so there is no harm to be done by allowing this feature. Is there any particular reason why this isn't implemented in Java?
I guess it's just consistency, or "principle of least astonishment". String is an object, so it would be surprising if was treated differently than other objects. At the time when Java came out (~1995), merely having something like String was total luxury to most programmers who were accustomed to representing strings as null-terminated arrays. String 's behavior is now what it was back then, and that's good; subtly changing the behavior later on could have surprising, undesired effects in working programs. As a side note, you could use String.intern() to get a canonical (interned) representation of the string, after which comparisons could be made with == . Interning takes some time, but after that, comparisons will be really fast. Addition: unlike some answers suggest, it's not about supporting operator overloading . The + operator (concatenation) works on String s even though Java doesn't support operator overloading; it's simply handled as a special case in the compiler, resolving to StringBuilder.append() . Similarly, == could have been handled as a special case. Then why astonish with special case + but not with == ? Because, + simply doesn't compile when applied to non- String objects so that's quickly apparent. The different behavior of == would be much less apparent and thus much more astonishing when it hits you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/193638", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/62049/" ] }