source_id (int64, 1–4.64M) | question (string, length 0–28.4k) | response (string, length 0–28.8k) | metadata (dict)
---|---|---|---|
418,483 | For my use case I need to create a microservice that receives log messages through HTTP. I'm wondering about the reasons to pick either of these: Method: POST
URI: /{logLevel}/
Body: {
"message": "..."
} Method: POST
URI: /
Body: {
"level": {either string or enum},
"message": "..."
} I feel like with the first method I can map each log level to an entirely different flow with ease and avoid validating the values by myself (the library does this using the predefined URIs). But are there reasons to prefer the second method? What do you guys think? | A user is someone who is registered and able to use the system. A chat room is a place people can chat. What happens when a user joins a chat room? What is that thing that represents a user who has joined a chat room? That is the abstraction you are missing. Other answers are hinting at this. You could say a user participates in a chat. You need a class that represents a user participating in a chat room. We can call it ChatRoomParticipant . What does participating in a chat require? A User and a ChatRoom . var participant = new ChatRoomParticipant(chatRoom, user);
participant.SendMessage("Hey, everyone. I'm new here!");
participant.SendImage(File.Open(@"C:\Cat Pictures\Fluffy playing with catnip.jpg"));
participant.SendMessage("Oops. Gotta go. Someone's at my door.");
participant.LeaveChatRoom(); Now you are commanding an object to do something. Send a message. Upload a (cat) picture. Leave the chat. Sometimes you need objects to collaborate, and it is the collaboration of those objects that needs to be modeled in its own class. Don't get too hung up on "classes must be things." You'll miss opportunities like this where the best OO design is to model the collaboration of two or more objects. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/418483",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/259441/"
]
} |
418,518 | I am comparing C++ with Python. It is clear that C++ is much more efficient and that the C++ code compiles directly to machine code whereas in Python it is interpreted. I do understand that Python is a higher-level language. But what difference does it make? I intuitively understand that C++ offers more "control", but what does it mean concretely? Can you give an example of things you can do with C++, but not with Python? | Can you give an example of things you can do with C++ but not with python. Sure. For instance, C++ gives you control over where objects are placed in memory. The programmer decides whether an object is stored on the stack or the heap - and can even control where on the heap by using a custom allocator. This can be helpful when exploiting memory locality effects to improve memory access performance. Also, in C++, you control when an object is destroyed, which allows side effects to be attached to that destruction. For instance, if you have a C++ object for an open file, you control when this object is destroyed, allowing the destructor of that object to automatically and promptly release the native file handle. In Python, you have no control when the object is freed, and therefore have to close the file manually. In C++, you can also perform crazy optimizations by manipulating pointers. I remember one memorable case where a program had to store a great many object references representing boolean functions, some of which were negated. Rather than storing the negation in a separate variable, they stored it in the least significant bit of the pointer, which was known to always be 0 due to memory alignment. This allowed them to cut their memory use in half. They couldn't have done this in Python. It is clear that C++ is much more efficient Not necessarily: That the programmer has this control doesn't necessarily imply he will use that control better than Python does. After all, the guys that write Python runtimes are quite skilled software developers, and probably know more about low level performance optimization than the average C++ programmer. So if you are choosing between C++ and Python, it is true that C++ gives you more control - but it is also true that C++ demands that control. You must manage memory. You must ensure you never use after free. And so on. Are the benefits of having that control worth spending the time to exercise it? Or would you rather have the language runtime take care of these details, so you can focus on other things? The answer will depend on what kind of software you are writing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/418518",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/378615/"
]
} |
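A small Python sketch (not taken from the answer above) illustrating its point that Python gives you no control over when an object is destroyed, so prompt resource release has to be requested explicitly with a context manager rather than attached to a destructor as it would be in C++:

```python
import tempfile

# In C++ the destructor of a stack-allocated file object releases the OS handle
# at a precisely known point. In Python there is no such guarantee, so prompt
# cleanup is requested explicitly instead of being tied to object destruction.

def careless_write(path, text):
    f = open(path, "w")
    f.write(text)
    # No close(): the handle is released only whenever the runtime happens to
    # get rid of the object, which the programmer does not control.

def careful_write(path, text):
    with open(path, "w") as f:   # closed deterministically at the end of the block
        f.write(text)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        careful_write(f"{d}/example.txt", "hello")
```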
419,025 | While going through the README file of a GitHub repository I am not a contributor of, I noticed a few minor typos and wondered if I should submit a pull request to correct them or if reviewing the request would take the maintainer too much time to be worth it. I considered the three courses of action: Correct the typos and submit a pull request with the position of each correction in the summary field. Send the maintainer an email with the proposed corrections. Do nothing. Which of these (or any other) options is most appropriate? To give some context, the repository is actively maintained and has about 10 contributors. The typos I have noticed do not make the README misleading nor ambiguous. I have used GitHub for some time for small personal projects, but have little experience with pull requests and how much time and efforts they take to review (hence my question). | Just fix all the typos you noticed and create a pull request with a comment along the lines of 'Fix typos'. Then it's one button to click for a person with the correct access. You don't need to explain each and every typo; it will be clear in the diff itself. Getting information about the typos from the e-mail will be harder to apply for the developers (and may be even harder for you). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/419025",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/379469/"
]
} |
419,570 | Let me explain what I mean. I have made a complex, highly polished over years PHP framework/library for my own use. I very aggressively log the smallest notice and immediately deal with it as soon as it pops ups, always trying to predict potential errors in my code as to never have them occur even in rare situations, but rather handling them automatically before they get logged. However, in spite of all my efforts, inevitably, I wake up (such as today) to find that some third-party service has fiddles around with their file format for one of their CSV files of data that they provide on their website and which my system fetches and imports every day. Then I get a flood of ugly PHP errors. Ouch. Even though it looks scary at first, it's typically just a pretty simple fix, and it's typically really just ONE error, which cascades into tons of apparent errors because the chain of function calls "fall apart" as each one expects something that they no longer get. I fix the issue, clear the errors, re-run the logic, verify that it no longer causes any errors, and then it's fixed. For now. Until the same thing happens again, with some other part of the system. I can personally "deal with" this, but it really bothers me in terms of giving away my system to somebody else to run on their machines. If/when the same kind of thing happens for them, they will doubtlessly blame me and think I'm incompetent (which may be true). But even for myself, this is quite annoying and makes me feel as if my system is very fragile and a house of cards waiting to fall apart, in spite of there normally being not a single little notice or warning logged during "normal operation". Short of predicting every possible change and writing enormous amounts of extra "checking" code to verify that all data is always exactly what is expected, is there anything I can do to fix this general problem? Or is this like asking for a pill that cures any disease instantly? Please don't get hung up on the fact that I mentioned PHP. I'd say that this question goes completely regardless of the programming language or environment. It's really more of a philosophical question than a technical one IMO. I fear that the answer will be: "There is no way. You have to bite the bullet and verify, verify and verify everything all the time!" | An improvement would be to design your system to fail gracefully. If the first step of parsing a file fails, then stop with an error. Don't carry on passing bad data from one step to the next. The other thing to check is that you are implementing the file handling correctly and robustly. CSV is quite complicated when you encounter quoted strings with embedded commas in them. If the supplier has actually changed the file format, then you should stop processing. If they have used a feature of CSV that you haven't implemented right, you need to fix that robustly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/419570",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/379490/"
]
} |
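A minimal Python sketch of the "stop at the first step" advice in the answer above; the column names and file layout are invented for illustration:

```python
import csv

# Hypothetical expected layout of the third-party CSV; not taken from the question.
EXPECTED_COLUMNS = ["date", "product_id", "price"]

def import_rows(path):
    """Fail fast: refuse to hand data to the rest of the pipeline
    if the file no longer looks like what we expect."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames != EXPECTED_COLUMNS:
            raise ValueError(
                f"{path}: columns changed to {reader.fieldnames!r}, "
                f"expected {EXPECTED_COLUMNS!r}; aborting the import"
            )
        for line_no, row in enumerate(reader, start=2):
            try:
                yield row["date"], row["product_id"], float(row["price"])
            except (KeyError, ValueError) as exc:
                raise ValueError(f"{path}, line {line_no}: malformed row {row!r}") from exc
```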
419,917 | While I am reading The Pragmatic Programmer e2 , I came across Tip 38: Crash Early . Basically, the author, at least to my understanding, advises to avoid catching exceptions and let the program crash. He goes on saying: One of the benefits of detecting problems as soon as you can is that
you can crash earlier, and crashing is often the best thing you can do.
The alternative may be to continue, writing corrupted data to some
vital database or commanding the washing machine into its twentieth
consecutive spin cycle. Later he says: In these environments, programs are designed to fail, but that failure
is managed with supervisors. A supervisor is responsible for running
code and knows what to do in case the code fails, which could include
cleaning up after it, restarting it, and so on. I am struggling to reflect that into real code. What could be the supervisor the author is referring to? In Java, I am used to using a lot of try/catch. Do I need to stop doing that? And replace that with what? Do I simply let the program restart every time there is an exception? Here is the example the author used (Elixir): try do
add_score_to_board(score);
rescue
InvalidScore
Logger.error("Can't add invalid score. Exiting");
raise
rescue
BoardServerDown
Logger.error("Can't add score: Board is down. Existing");
raise
rescue
StaleTransaction
Logger.error("Can't add score: stale transaction. Existing");
raise
end This is how Pragmatic Programmers would write this: add_score_to_board(score); | Basically, the author, [...] advises to avoid catching exceptions and let the program crash No, that is a misunderstanding. The recommendation is to let a program terminate its execution ASAP when there is an indication that it cannot safely continue (the term "crash" can also be replaced by "end gracefully", if one prefers this). The important word here is not "crash", but " early " - as soon as such an indication becomes aware in a certain part of the code, the program should not "hope" that later executed parts in the code might still work, but simply end execution, ideally with a full error report. And a common way of ending execution is using a specific exception for this, transport the information where the problem occurred to the outermost scope, where the program should be terminated. Moreover, the recommendation is not against catching exceptions in general. The recommendation is against the abuse of catching unexpected exceptions to prevent the end of a program. Continuing a program though it is unclear whether this is safe or not can mask severe errors, makes it hard to find the root cause of a problem and has the risk of causing more damage than when the program suddenly stops. Your example shows how to catch some severe exceptions, for logging. But it does not just continue the execution, it rethrows those exceptions , which will probably end the program. That is exactly in line with the "crash early" idea. And to your question What could be the supervisor the author is referring to? Such a supervisor is either a person , which will deal with the failure of a program, or another program running in a separate process, which monitors the activity of other, more complex programs, and can take appropriate actions when one of them "fails". What this is precisely depends heavily on the kind of program, and the potential costs of a failure. Imagine the failure scenarios for a desktop application with some GUI for managing address data in a database a malware scanner on your PC the software which makes the regular backups for the Stack Exchange sites software which does automatic high speed stock trading software which runs your favorite search engine or social network the software in your newest smart TV or your smartphone controller software for an insulin pump controller software for steering of an airplane monitoring software for a nuclear power plant I think you can imagine by yourself for which of these examples a human supervisor is enough, or where an "automatic" supervisor is required to keep the system stable even when one of its components fail. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/419917",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/176425/"
]
} |
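A toy Python sketch of the "supervisor" idea referred to in the answer above, loosely inspired by Erlang/Elixir supervisors and not taken from the book: a small parent process runs a worker and restarts it a limited number of times when it exits abnormally.

```python
import subprocess
import sys
import time

def supervise(cmd, max_restarts=3, backoff_seconds=2):
    """Run a worker command; restart it a few times if it exits abnormally."""
    restarts = 0
    while True:
        returncode = subprocess.run(cmd).returncode
        if returncode == 0:
            return 0                      # worker finished normally
        restarts += 1
        if restarts > max_restarts:
            print("worker keeps failing, giving up", file=sys.stderr)
            return returncode
        print(f"worker exited with {returncode}, restart {restarts}/{max_restarts}",
              file=sys.stderr)
        time.sleep(backoff_seconds)

if __name__ == "__main__":
    # Placeholder worker that always crashes, to show the supervisor at work.
    sys.exit(supervise([sys.executable, "-c", "raise SystemExit(1)"]))
```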
420,063 | This is a bit of an invented example but I think it best illustrates my question: Say I'm creating a chess replay event API. Say I have a lot of different "events" I want to keep track of, in a specified order. Here might be some examples: A move event — this contains the previous and new square. A timer event — this contains the timestamp that the timer was toggled between players A chat message event — this contains the player ID , the message and time sent ...etc. The point is that the data model for each event is very different — there isn't much of a common interface. I want to design an API that can store and expose essentially a List<Event> to a client who can choose to process these different events as they wish. We don't know what clients will do with this information: Perhaps one client may need to do text analysis on the ChatMessageEvent s, and one may consume and replays these events in the UI. The challenge is that ordering between events must be preserved, so I can't separate by methods like getMoveEvents and getTimerEvents since a TimerEvent can happen between move events and the client may need that information. I could expose a visitor to allow clients to handle each event type differently in the list, but I'm wondering if there's a better way to handle a situation like this. Edit: I want to design this with one main priority: provide clients with an easy and flexible way to iterate through these events. In an ideal scenario, I would envision the end user writing handlers to the event types they care about, and then be able to iterate through without casting based on the runtime type. | I am under the strong impression you are overthinking this. The challenge is that ordering between events must be preserved, so I can't separate by methods like getMoveEvents and getTimerEvents Then simply don't offer such methods in your API. Let the client filter out the events they need, and do not implement anything in your API which could become error prone. I could expose a visitor to allow clients to handle each event type differently in the list This sounds overengineered. You described the requirement as getting something like a List<Event> , containing recorded events. For this, a simple method List<Event> getEvents() would be totally sufficient (maybe an IEnumerable<Event> would be enough). For reasons of efficiency, it may be necessary to offer some methods for restricting the result set to certain conditions. but I'm wondering if there's a better way to handle a situation like this Asking for a "better" (or "best", or "correct") approach is way too unspecific when you don't know any criteria for what you actually mean by "better". But how do find criteria for what is "better"? The only reliable way I know for this problem is: Define some typical use cases for your API! Do this in code. Write down a short function which tries to use your API, solving a real problem you know for sure the clients will encounter (even if the API does not exists or is not implemented yet). It may turn out the client will need something like a property to distinguish event types. It may turn out the client needs something to get only the events from the last hour, or the last 100 events, since providing him always a full copy of all former events may not be effcient enough. It may turn out the client needs to get a notification whenever a new event is created. You will only be able to decide this when you develop a clear idea of the context in which your API will be used. 
If you add some code to this function which verifies the API's result, and place this code into a the context of a unit testing framework, then you are doing "Test Driven Development" But even if you don't want to use TDD or don't like TDD, it is best to approach this from the client's perspective . Don't add anything to your API where you have doubts if there will ever be a use case for. Chances are high noone will ever need that kind of function. If you don't know enough about the use cases of the API to use this approach, you will probably do some more requirements analysis first - and that is something we cannot do for you. Let me write something to your final edit, where you wrote and then be able to iterate through without casting based on the runtime type. Casting based on the runtime type isn't necessarily an issue. It becomes only a problem when it makes extensions to the Event class hierarchy harder, because existing Client code would be forced to change with each extension. For example, let's say there is client code handling all chat events by a type test plus a cast for ChatEvent . If a new event type is added which is not a chat event, existing code will still work. If a new chat-like event is added, as a derivation of ChatEvent , existing code will also still work as long as the ChatEvent type conforms to the LSP. For specific chat events, polymorphism can be used inside the ChatEvent part of the inheritance tree. So instead of avoiding type tests and casts superstitiously under all circumstances, because you have read in a text book "this is generally bad", reflect why and when this really causes any problems . And as I wrote above, writing some client code for some real use cases will help you to get a better understanding for this. This will allow you also to validate what will happen when your list of events get extended afterwards. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/420063",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/209523/"
]
} |
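A minimal Python sketch of the plain getEvents()-style API suggested in the answer above; the class and field names are invented for illustration. The service exposes one ordered stream, and each client filters for the event types it cares about:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MoveEvent:
    from_square: str
    to_square: str

@dataclass
class TimerEvent:
    toggled_at: datetime

@dataclass
class ChatMessageEvent:
    player_id: int
    message: str
    sent_at: datetime

class GameReplay:
    def __init__(self, events):
        self._events = list(events)      # original ordering is kept as-is

    def get_events(self):
        return list(self._events)        # one ordered stream, no per-type getters

# A client that only cares about chat does its own filtering:
def chat_transcript(replay):
    return [e.message for e in replay.get_events()
            if isinstance(e, ChatMessageEvent)]
```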
420,076 | I want to lead an Angular + NodeJS project. As this is my first experience, I thought about using UML diagram as both designing/architecting the project and also the project's documentation. But I am not sure if using UML diagrams is a good idea? I mean it takes longer time and many details that I don't know if is it necessary to writing them or not? I say this because in previous projects that we worked on, we were just talking about things we want and each member could do the task by his/her understanding, but at this time I decided to do the next project more organized with more developers, but don't know how much of detail or abstraction is enough and from where it's redundancy and not necessary? I also though if I use UML diagrams in designing/architecting stage, we can also use this design as our project's documentation too and we won't read much documentations in the future, but I don't know if I think true or not? I should mention we are a startup like company with 4-5 guys looking for extending up to 10 or more developers. | I am under the strong impression you are overthinking this. The challenge is that ordering between events must be preserved, so I can't separate by methods like getMoveEvents and getTimerEvents Then simply don't offer such methods in your API. Let the client filter out the events they need, and do not implement anything in your API which could become error prone. I could expose a visitor to allow clients to handle each event type differently in the list This sounds overengineered. You described the requirement as getting something like a List<Event> , containing recorded events. For this, a simple method List<Event> getEvents() would be totally sufficient (maybe an IEnumerable<Event> would be enough). For reasons of efficiency, it may be necessary to offer some methods for restricting the result set to certain conditions. but I'm wondering if there's a better way to handle a situation like this Asking for a "better" (or "best", or "correct") approach is way too unspecific when you don't know any criteria for what you actually mean by "better". But how do find criteria for what is "better"? The only reliable way I know for this problem is: Define some typical use cases for your API! Do this in code. Write down a short function which tries to use your API, solving a real problem you know for sure the clients will encounter (even if the API does not exists or is not implemented yet). It may turn out the client will need something like a property to distinguish event types. It may turn out the client needs something to get only the events from the last hour, or the last 100 events, since providing him always a full copy of all former events may not be effcient enough. It may turn out the client needs to get a notification whenever a new event is created. You will only be able to decide this when you develop a clear idea of the context in which your API will be used. If you add some code to this function which verifies the API's result, and place this code into a the context of a unit testing framework, then you are doing "Test Driven Development" But even if you don't want to use TDD or don't like TDD, it is best to approach this from the client's perspective . Don't add anything to your API where you have doubts if there will ever be a use case for. Chances are high noone will ever need that kind of function. 
If you don't know enough about the use cases of the API to use this approach, you will probably do some more requirements analysis first - and that is something we cannot do for you. Let me write something to your final edit, where you wrote and then be able to iterate through without casting based on the runtime type. Casting based on the runtime type isn't necessarily an issue. It becomes only a problem when it makes extensions to the Event class hierarchy harder, because existing Client code would be forced to change with each extension. For example, let's say there is client code handling all chat events by a type test plus a cast for ChatEvent . If a new event type is added which is not a chat event, existing code will still work. If a new chat-like event is added, as a derivation of ChatEvent , existing code will also still work as long as the ChatEvent type conforms to the LSP. For specific chat events, polymorphism can be used inside the ChatEvent part of the inheritance tree. So instead of avoiding type tests and casts superstitiously under all circumstances, because you have read in a text book "this is generally bad", reflect why and when this really causes any problems . And as I wrote above, writing some client code for some real use cases will help you to get a better understanding for this. This will allow you also to validate what will happen when your list of events get extended afterwards. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/420076",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/335816/"
]
} |
420,520 | I've quite frequently seen benchmarks where the tester discarded the highest and the lowest time out of N runs. Discarding the highest time I understand; it's probably high because of some other processes running suddenly demanding more CPU. But doesn't the lowest time indicate the best possible performance when the benchmark was running full tilt without interruptions? | The lowest timing might indeed represent the "true" timing without outside interference, or it might be a measurement error. E.g. the boosting behaviour of a CPU might speed up the first run in a larger benchmarking suite, or a less congested network might speed up a net-dependent benchmark during certain times of day. Similarly, the highest timing might represent the true worst case, or non-representative interference. E.g. a background service might have started during the benchmark, or a SMR hard drive cache is being flushed during an IO-based benchmark. Such interference indicates a flawed experimental design that fails to control for these influences, but it's not always possible or economical to design the perfect experiment. So we have to deal with the messy real-world data that we have. Statistics like the mean (average) of some values is very sensitive to outliers. It is thus common to use a trimmed mean where we remove outliers, in the hopes of getting closer to the "true" mean. Various methods for determining outliers exist, with the simplest approach being to remove the top/bottom p%, for some value p. Another option is to use techniques like bootstrapping that let us estimate how reliable the estimate is: instead of removing top/bottom observations, we remove random observations and repeat the calculations multiple times. However, it is not generally necessary to calculate the mean run time when doing benchmarking. For comparing the typical behaviour, we can use measures like the median or other quantiles. Especially when measuring latencies, quantiles like the 95%-percentile are often used (meaning: 95% of measurements were this fast or faster). It is also unnecessary to calculate the mean when trying to determine whether one program is significantly faster than another. Instead of parametric statistical tests that require such parameters to be estimated from the sample, it is possible to use non-parametric tests. E.g. the Mann–Whitney Rank Sum Test only considers the order of values from two samples, not their actual values. While this is more flexible and more rigorous, this does lose some statistical power (you might not be able to detect a significant difference even if it exists). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/420520",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27426/"
]
} |
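A short Python sketch of the statistics mentioned in the answer above (trimmed mean, median, 95th percentile); the timing values are made up:

```python
import statistics

# Invented timings in seconds; one run was disturbed by a background process.
timings = [10.2, 10.4, 10.3, 10.5, 10.3, 14.9, 10.2, 10.4, 10.6, 10.3]

def trimmed_mean(values, proportion=0.1):
    """Drop the lowest and highest `proportion` of observations, then average."""
    ordered = sorted(values)
    k = int(len(ordered) * proportion)
    return statistics.mean(ordered[k:len(ordered) - k] if k else ordered)

print("mean:        ", statistics.mean(timings))    # pulled up by the 14.9 outlier
print("trimmed mean:", trimmed_mean(timings))
print("median:      ", statistics.median(timings))
print("95th pctile: ", statistics.quantiles(timings, n=20)[-1])
# For a non-parametric comparison of two programs' timings,
# scipy.stats.mannwhitneyu(sample_a, sample_b) could be used instead.
```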
420,872 | How does a functional programming language, such as Elm , achieve "No runtime exceptions"? Coming from an OOP background, runtime exceptions have been part of whatever framework that is based on OOP, both browser-based frameworks based on JavaScript and also Java (e.g., Google Web Toolkit , TeaVM , etc. - correct me if I'm wrong though ), so learning that functional programming paradigm eliminates this is big. Here's a screen grab from NoRedInk's presentation on Elm, showing the runtime exceptions from their previous JavaScript-based code to the new Elm codebase: How does the functional paradigm or programming approach eliminate runtime exceptions? Are runtime exceptions a great disadvantage of OOP over functional programming? If it is such a disadvantage, why have the OOP paradigm, programming approach, and frameworks been the industry standard? What is the technical reason? And what's the history behind this? | How does a Function Programming, such as Elm, achieve "No runtime exceptions"? That's easy. You simply don't write functions that fail. That might sound simplistic, but that's the gist of it. Take division, for example. We can simply define that anything divided by 0 is 42. Boom. Now, division no longer throws a runtime exception, it just sometimes returns a wrong result. Elm's choice are a little bit more intelligent than that, however: 1 / 0
--=> Inf
-1 / 0
--=> -Inf
0 / 0
--=> NaN Note that this is only one possibility. Another possibility would be to introduce a "nonzero number" type that is distinct from "number", and then the type of / would be (/) : Float -> NonZeroFloat -> Float instead of (/) : Float -> Float -> Float as it is now. A third possibility would be change the return type, for example like this: (/) : Float -> Float -> Maybe Float This means that the function returns "maybe a float" . More precisely, it will either return Just Float or Nothing . Or, if you want some more information , the type could be (/) : Float -> Float -> Result String Float This will return either an Ok Float with the value wrapped into the Ok data constructor or an Err String with a description of the problem wrapped into the Err data constructor. Another example is retrieving a value from a dictionary: what if the key does not exist? Or indexing into an array: what if the index does not exist? Well, both the get function for arrays and the get function for dicts return a Maybe . In some other languages, there is also an additional function called getOrElse which takes an additional argument, and returns that argument if the key is not found. The key point is simply to write your functions in such a way that they never throw an exception and always return a value. Note that this has nothing to do with Functional Programming. You can do this in any language. For example, C also has no runtime exceptions. In C, you use "magic" return values or error codes to signal errors. You could do this in Java as well. In fact, Java ships with an implementation of that Maybe type called java.util.Optional and you can write a similar Result type as well. Go has multiple return values, and it is customary to return an additional error code value from a function. For example, a hypothetical get function for a dictionary would not return item and then maybe return null or crash if the item cannot be found, but rather it would return item, found , where found is a boolean value telling the caller whether the item was found, and you would use it something like this: item, found := dict.get("key")
if (found) {
// do something with `item`
} Coming from a OOP background, runtime exceptions have been part of whatever framework that is based on OOP, both browser based frameworks based on Javascript and also Java (e.g. GWT, TeaVM, etc), correct me if I'm wrong though , so learning that Functional Programming paradigm eliminates this is big. It has nothing to do with Functional Programming. FP certainly helps but is not a requirement. If you write your Java code in such a way that you never return null , never have un-initialized fields, and never throw exceptions, then you can achieve the same thing for your own code . The problem is, of course, that everybody else's code , including the Java SE standard library, still returns null and throws exceptions. So, it is as much about the standard libraries and the discipline of the community as it is about the type system and the language. Of course, there are things the type system and the language can do to help you. For example, it can do exhaustiveness checking , i.e. it can make sure that you always check for both Ok and Err in your code. This is not possible in Java, for example. But again, this has nothing to do with Functional Programming or Object-Oriented Programming. Haskell is actually a very good example: Haskell is a Functional Language and it does have runtime exceptions, but the community simply chooses to never use them, but use Maybe , Error , etc. types instead. How the Functional Paradigm or programming approach eliminates runtime exceptions? It doesn't. Writing code such that it never throws exceptions eliminates exceptions. Are runtime exceptions a great disadvantage of OOP over Functional Programming? They have nothing to do with OOP or FP. If it is such a disadvantage, why is OOP paradigm, programming approach and frameworks have been the industry standard? I would argue they are not the industry standard. While a lot of code is written in Java, C#, etc., the overwhelming majority of that code is not Object-Oriented but rather Structured / Procedural / Modular with Abstract Data Types. What is the technical reason? Most "popular" technologies are not popular for technical reasons. At no point in history were DOS and/or Windows technically superior. They just had brilliant marketing and business-savvy managers. And what's the history behind this? Unix becomes popular, with Unix comes C, C becomes popular even outside of Unix, C++ adds a misunderstood mangled castrated idea of OOP to C, C++ becomes popular, Java kinda-sorta looks like C++ even though it is actually much closer to Objective-C and Smalltalk, IBM goes all-in on Java, and there is the universal truth of IT: "Nobody ever got fired for buying IBM." Is this cynical? Yes, but that doesn't make it untrue. More often than not, the people who decide whether or not to buy some technology do not have the technical expertise to judge whether the technology is actually good or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/420872",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/382819/"
]
} |
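A small Python sketch of the central idea in the answer above, writing the function so it always returns a value instead of raising, with Optional standing in for Elm's Maybe:

```python
from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    """Return the quotient, or None instead of raising ZeroDivisionError."""
    if b == 0:
        return None
    return a / b

result = safe_div(10, 0)
if result is None:
    print("division by zero, falling back to a default")
else:
    print(f"quotient: {result}")
```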
421,079 | I am developing code mainly using Bash, C, Python and Fortran and recently also HTML/CSS+JavaScript. My OS is Ubuntu. Maybe I am exaggerating, but I figured that I kind of spend more time getting software (Debian and Python packages mainly, sometimes also from source) to be installed properly than actually developing code. And I am not talking about coding vs. debugging, debugging is part of coding for me. It happens so often to me that I update my Linux packages and then my Python packages and my software does not work anymore, because some .so files have another name now, and Python does not find them anymore. Or I setup a totally clean Ubuntu VM, install a package with pip and get two screens of error message, because some debian package was not installed. I am not a system administrator, I enjoy developing software. But this just annoys me. I do not want to inform myself on all the 157 Python packages and thousands of Debian packages I have on my system and know what their dependancies are. I want to write code and implement new functionality into my code. What am I doing wrong? | What am I doing wrong? You're trying to develop in an environment where you're also the sysadmin, devops and the local technical product owner for every pip package you use - and you're assuming that the sysadmin, devops and TPO roles should be no effort just because they're not what you're interested in. Those are paid full-time jobs (ok, maybe not TPO) that people have because they are not trivial. Maintaining up-to-date development environments can be a lot of work. The usual approaches are to work for a large enough organization that it's someone else's job, or to somehow automate it (which is why things like conda and docker exist - although this is still a non-trivial amount of work you'd prefer the person from #1 to do instead) to just update infrequently Specifically, you have two different package managers (apt and pip) that don't know much about each other and aren't co-ordinated. I'd recommend you: get a working initial development environment choose some way to be able to clone that environment when you want a new VM (or docker or other) container starting at a working baseline don't update it at all unless there's a specific feature or security update you want don't update it when you actually want to be developing, because you'll get frustrated whenever it doesn't work instantly ideally perform updates in a clone, so you can give up and get back to developing in a working environment if it is more broken than you can face fixing right away | {
"source": [
"https://softwareengineering.stackexchange.com/questions/421079",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/383168/"
]
} |
421,566 | I'm trying to understand whether the Haskell and C++ communities mean different things by the word "functor", or if there's some underlying concept that unifies the two meanings. My understanding is that: For Haskell, a functor is a structure/container that can be mapped over, i.e. a function may be applied to the values held within the structure/container without changing the (uh!) structure of the structure/container. For C++, a functor is simply a class supporting operator() ; what one might refer to as a callable in Python. Are those definitions correct? If so, these seem opposite (but related) definitions to me; one is something a function is applied to, and the other is the function being applied. | The two meanings are unrelated. The Haskell community (and really the Functional Programming community in general, and even the general programming community beyond FP) uses the term Functor in the sense of Category Theory . That's the same branch of mathematics where also the concepts of Monads , Duality , Arrows , and many other come from. The C++ community is one of only a few programming communities that does not use this meaning. There, as you correctly identified, it simply means an object that has an operator () , and as such is comparable to a callable in Python , an object with an apply method in Scala , or an object which responds to a call message in Ruby. Another example is Standard ML, where a Functor is a concept in its Module System representing the concept of Parametric Modules: SML's module system consists of Structures (which other module systems would call "module implementations"), Signatures (which is just what it sounds like: the public module interface specification), and Functors (kind of like module-level functions from structures to structures, similar to how a type constructor ("generic" if you speak Java or C#) is kind of like a type-level function from types-to-types). So, Functors are how SML does Parametric Modules which is something that not many module systems have. They are sorta related to the Category Theoretical notion . The term also exists in Prolog, but I don't know much about Prolog, so I will just leave you with this: In Prolog, the word functor is used to refer to the atom at the start of a structure, along with its arity, that is, the number of arguments it takes. […] The term functor is used in a different sense in mathematics and in functional programming, and a different way again in philosophy. [Source: http://www.cse.unsw.edu.au/~billw/dictionaries/prolog/functor.html] | {
"source": [
"https://softwareengineering.stackexchange.com/questions/421566",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/383963/"
]
} |
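A tiny Python sketch contrasting the two meanings described in the answer above: the C++ sense of "functor" corresponds to an object with __call__, while the Haskell sense corresponds to a structure that a function can be mapped over.

```python
# C++ sense: an object that can be called like a function (Python: __call__).
class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        return x + self.n

add_three = Adder(3)
print(add_three(4))          # 7 -- the object itself acts as the function

# Haskell sense: a structure a function can be mapped over without
# changing the shape of the structure.
class Box:
    def __init__(self, value):
        self.value = value

    def fmap(self, f):
        return Box(f(self.value))    # apply f inside, keep the Box around it

print(Box(4).fmap(add_three).value)  # 7 -- the function is applied *to* the structure
```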
421,683 | Presuming that I have written some sequential code where it can be broken down into multiple isolated tasks, it seems it might be efficient for concurrency to be introduced. For example print(expensive_calc_one() + expensive_calc_two()) Asuming expensive_calc_one and expensive_calc_two are pure functions but also computationally quite expensive, it would seem sensible for a compiler to optimise the code by introducing concurrency, and allowing the two functions to run in parallel. I know that this also has its downsides (context switching adds overhead, and some computers still only have one logical core). Are there any compilers which would introduce concurrency into previously non-concurrent code, and are there any general patterns for it (or reasons not to do it)? | Asuming expensive_calc_one and expensive_calc_two are pure functions Unfortunately, determining whether a function is pure is equivalent to solving the Halting Problem in the general case. So, you cannot have an Ahead-of-Time compiler which can in the general case decide whether a function is pure or not. You have to help the compiler by explicitly designing the language in such a way that the compiler has a chance to decide purity. You have to force the programmer to explicitly annotate such functions, for example, or do something like Haskell or Clean, where you clearly isolate side effects using the type system. but also computationally quite expensive Unfortunately, determining in an Ahead-of-Time compiler whether a function is "computationally quite expensive" is also equivalent to solving the Halting Problem. So, you would need to force the programmer to explicitly annotate computationally expensive functions for the compiler to parallelize. Now, if you have to force the programmer to explicitly annotate pure and computationally expensive functions as candidates for parallelization, then is it really automatic parallelization? Where is the difference to simply annotating functions for parallelization? Note that some of those problems could be addressed by performing the automatic parallelization at runtime. At runtime, you can simply benchmark a function and see how long it runs, for example. Then, the next time it is called, you evaluate it in parallel. (Of course, if the function performs memoization, then your guess will be wrong.) Are there any compilers which would introduce concurrency into previously non-concurrent code Not really. Auto-parallelization has been (one of) the holy grail(s) of compiler research for over half a century, and is still as far away today as it was 50–70 years ago. Some compilers perform parallelization at a very small scale, by auto-vectorization, e.g. performing multiple arithmetic operations in parallel by compiling them to vector instructions (MMX/SSE on AMD64, for example). However, this is generally done on a scale of only a handful of instructions, not entire functions. There are , however, languages where the language constructs themselves have been designed for parallelism. For example, in Fortress, a for loop executes all its iterations in parallel. That means, of course, that you are not allowed to write for loops where different iterations depend on each other. Another example is Go, which has the go keyword for spawning a goroutine. However, in this case, you either have the programmer explicitly telling the compiler "execute this in parallel", or you have the language explicitly telling the programmer "this language construct will be executed in parallel". 
So, it's really the same as, say, Java, except it is much better integrated into the language. But doing it fully automatically, is near impossible, unless the language has been specifically designed with it in mind. And even if the language is designed for it, you often have the opposite problem now: you have so much parallelism that the scheduling overhead completely dominates the execution time. As an example: in Excel, (conceptually) all cells are evaluated in parallel. Or more precisely, they are evaluated based on their data dependencies. However, if you were to actually evaluate all formulae in parallel, you would have a massive amount of extremely simple parallel "codelets". There was, apparently, an experiment in having a Haskell implementation evaluate expressions in parallel. Even though the concurrent abstraction in Haskell (a "spark") is quite lightweight (it is just a pointer to a "thunk", which in turn is just a piece of un-evaluated code), this generated so many sparks that the overhead of managing the sparks overwhelmed the runtime. When you do something like this, you essentially end up with the opposite problem compared to an imperative language: instead of having a hard time breaking up huge sequential code into smaller parallel bits, you have a hard time combining tiny parallel bits into reasonably-sized sequential bits. While this is semantically easier, because you cannot break code by serializing pure parallel functions, it is still quite hard to get the degree of parallelism and the size of the sequential bits right. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/421683",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/384179/"
]
} |
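A short Python sketch of doing explicitly what the question above hoped a compiler would do automatically; the two functions are placeholders for genuinely expensive pure calculations:

```python
from concurrent.futures import ProcessPoolExecutor

def expensive_calc_one():
    return sum(i * i for i in range(10_000_000))        # placeholder workload

def expensive_calc_two():
    return sum(i * i * i for i in range(10_000_000))    # placeholder workload

if __name__ == "__main__":
    # The programmer, not the compiler, asserts that the two calls are pure,
    # independent, and expensive enough to be worth the scheduling overhead.
    with ProcessPoolExecutor(max_workers=2) as pool:
        one = pool.submit(expensive_calc_one)
        two = pool.submit(expensive_calc_two)
        print(one.result() + two.result())
```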
421,756 | I have trouble reconciling "best practices" and real-world approaches to handling exceptions. In my day to day routine, I find myself running into the following examples: try:
    do_something()
except Exception:
    log_error()
and even
try:
    do_one_thing()
    try:
        do_another_thing()
    except Exception:
        repair_stuff()
        log_error()
    do_more_risky_stuff()
except:
log_error() The most obvious thing is catching the generic Exception type, which is a recurring theme in all "don't do this" programming books/articles on the subject. Furthermore, the nested example - I find it unreadable (or at least "could-be-more-readable"). Finally, having try..except blocks littered everywhere seems... plain wrong. I'm aware that I could be just beating a dead horse here. I have brought up my concerns to my lead and they haven't exactly been welcomed. Not that they've been unreasonably dismissed (to my perception), it's rather that I can't offer any better approach. So I have several questions on the matter: Is catching generic exceptions that wrong an approach? Had a lot of cases (been burnt trying to catch specific ones) where we did not know what to anticipate, while the behavior would be the same for all, e.g. log and continue with execution. Wrapping everything in try..except: log blocks is code repetition. Or is it? Solving this in any way that would not seem like over-engineering is beyond me. Handling nested try blocks could maybe be solved by separating them to their own scope (e.g. a function), however this hasn't proved itself as a reliable solution as oftentimes the caller might be desiring a different result on exception (empty result, alternative result, the exception itself etc...) | we did not know what to anticipate, while the behavior would be the same for all, e.g. log and continue with execution. To me, this is your major problem. If an exception occurs and you don't know how to handle it, you should not continue with execution because you don't know what state your system is in. Just pass it up to the next level and let that handle it; at the very top level, you may want to log out the exception, but that should be at the top level, having aborted any other work that was in progress. At this point, you lose most of your repeated exception handling because it's all just handled at the top layer. If there are specific exceptions you do know how to recover from, then absolutely catch the specific exception and have appropriate recovery logic. But don't try and do it for Exception because you don't actually know what went wrong in that case, so you can't correctly recover from it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/421756",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/214705/"
]
} |
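A small Python sketch of the approach in the answer above: catch only the specific exceptions you know how to recover from, and let everything else propagate to a single top-level handler that logs and aborts.

```python
import logging
import sys

log = logging.getLogger(__name__)

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # A specific, anticipated failure we know how to recover from.
        log.warning("no config at %s, using defaults", path)
        return ""

def main():
    config = load_config("app.conf")
    log.info("loaded %d bytes of config", len(config))
    # ... do the real work; unexpected exceptions simply propagate ...
    return 0

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    try:
        sys.exit(main())
    except Exception:
        # One top-level handler: record the unexpected failure and stop,
        # rather than continuing in an unknown state.
        log.exception("unhandled error, aborting")
        sys.exit(1)
```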
421,991 | I'm new to PHP and JS and I'm currently working on a gym management web app for our project in school. I'm adding a bit of QR functionality that sees if the user is eligible to enter the gym or not by checking if the user has paid. I want to know if it is good to escape PHP code in inline JS, or would it be better to get the id via ajax and store it into a js variable? <script>
// ...
new QRCode(document.getElementById("qr-code"),
"./functions/check_if_paid_qr.php?id=<?php echo $_SESSION["member_id"] ?>");
// ...
</script> I'm using qrcode.js btw. | No, it's usually a bad practice. The problem Any time you echo something from PHP into JavaScript what's happening is that you're trying to generate valid JavaScript code. There is no generic guarantee that you do produce syntactically and semantically valid JavaScript code that also does what you want. It might be easier to guess whether individual instances of code are going to work but it's definitely not a guarantee. Take for example the following code var someVariable = '<?php echo $someOtherVariable ?>'; This seems like it should work. And indeed it will if $someOtherVariable doesn't contain a newline or a single apostrophe. However, if it contains O'Brian or Hello\nWorld the generated JavaScript code would be invalid in either case: Early terminating of a string literal leads to invalid code after it: var someVariable = 'O'Brian'; Invalid multiline string: var someVariable = 'Hello
World'; Looking at the code and determining whether the code is correct right now and will remain correct becomes very hard. What if the format of the data you're echoing changes? What if you get some data you didn't expect? To generalise, the issue is that you don't have a complete JavaScript source code. The source code is only complete when a user visits the page and the backend produces it for them. Until then it's in limbo and it's unknown whether it will work. Impeded code analysis Not only is it hard for humans to determine how a code would behave, but automated tools that are there to help you might also suffer. Some examples Syntax highlighters may break because of the mix of the two languages. This is often the first line of defence against defective code. For example, look at the line that says 'O'Brian' - you'd see that the highlighting is inconsistent between 'O' and Brian'; . Tools that analyse code for correctness like ESLint or Tern.js among others will not be able to analyse code that's not there. Is var someVariable = '<?php echo $someOtherVariable ?>")'; syntactically correct JavaScript? We, as humans, cannot say, an automated tool that merely follows some rules is completely unable to guess what the generated code would be. Tools that extract code metrics would similarly have a problem as they may not be able to parse the real JavaScript code produced. Hard to test code Automatic testing also suffers when you mix the two languages. You can test the code but you need to first need to boot up a PHP environment with enough data in order to generate you JavaScript code and then run tests on the JavaScript code. This is a full integration test with a lot of edge cases to cover and situations to account for. Unit test that focuses on only JavaScript and only PHP would be vastly simpler and you can make sure each fulfils their part of the contract first before checking how they work together. Hard to debug What all the above means is that when something happens that breaks JavaScript, you wouldn't be likely to know or even suspect. It's only going to break for some users only some of the time. How many would report it and how accurate the reports would be would vary but in my experience - don't expect much. So, if you'd know that something doesn't work is questionable to begin with. Moreover, even if you do find out that it doesn't work, you'd now have to track down which mixed JavaScript+PHP line is it. Unless there is a single one, you'd need to spend a non-zero time of investigation to find where it goes wrong. And another non-zero amount of time to find why . All that would likely happen after you've developed the application. Maybe a week, maybe a year. In the best case scenario it was you who wrote the code, so while it's still going to be quite hard, you might have some idea about where to start. However, you might have inherited this code. Or somebody else could have inherited it from you. Bundling Modern JavaScript is often passed through tools to produce a compact set of files from it. The bundling process will read the JavaScript source and produce a minified version of it. This can suffer if the JavaScript source is incomplete as the compilation happens before any user has ever interacted with the site. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/421991",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/384719/"
]
} |
422,047 | Simply put, I have date and time attributes on an Orders table. The data type for these attributes is MySQL's DATE type. But every time I echo the date and time in PHP it just gives me a string, not a date object. I don't understand as to why the DATE data type exists if you can casually store date objects as VARCHAR in the db. $date = date("M d, Y");
$time = date("H:i A"); | I don't understand as to why the DATE data type exists Always store Date values in Date Fields. What PHP gives you back when you retrieve those values is a Character Representation of the Date value but, within the database, you can (and should) perform Date operations on Date fields that you simply cannot do effectively with Strings. For example, let's look at some similar-looking entries: select
char_date
, date_date
from table1
order by char_date ;
+-------------+-------------+
| char_date | date_date |
+-------------+-------------+
| 01-Mar-2021 | 01-Mar-2021 |
| 12-Feb-2021 | 12-Feb-2021 |
| 23-Jan-2021 | 23-Jan-2021 |
+-------------+-------------+
select
char_date
, date_date
from table1
order by date_date ;
+-------------+-------------+
| char_date | date_date |
+-------------+-------------+
| 23-Jan-2021 | 23-Jan-2021 |
| 12-Feb-2021 | 12-Feb-2021 |
| 01-Mar-2021 | 01-Mar-2021 |
+-------------+-------------+ See the difference? And because applications tend to apply far less control over how Character data is entered, your char_date data very quickly gets awfully messed up and even more difficult to interpret. For example, when is '01/04/07'? January 4th? April 1st? April 7th (2001)?
Depends where you are in the world! OK, you could say that your application will convert the entered, Character Representation of a Date value into a proper Date and store a consistently-formatted [Character] version of that but, as soon as you start doing Date operation on those values, your database has to start doing Date conversions on the fly , which can be horribly slow. Use the Right Tool for the Right Job ... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/384812/"
]
} |
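The ordering problem from the answer's SQL example, reproduced as a quick Python check on the same sample values:

```python
from datetime import datetime

values = ["01-Mar-2021", "12-Feb-2021", "23-Jan-2021"]

print(sorted(values))
# ['01-Mar-2021', '12-Feb-2021', '23-Jan-2021']  -- character order, wrong

print(sorted(values, key=lambda s: datetime.strptime(s, "%d-%b-%Y")))
# ['23-Jan-2021', '12-Feb-2021', '01-Mar-2021']  -- real date order
```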
422,110 | I am working on a feature with a system that I am unfamiliar with. The feature is not ready, but I want to show the code to my team (who is familiar with the system) so they can give me early feedback. We are a fully remote team. Making a pull request on GitHub so they can see the differences seems like the easiest way to do this. But then I will be creating a pull request that is not ready to be merged. This sounds dirty to me for various reasons, such as it muddies up the PRs, and someone may accidentally merge it. I could just point them to a branch, but then they would have to find the diffs themselves, which not everyone knows how to do. They are also much less likely to review the code at all. I am hoping to let them review the code on their own time, instead of setting up yet another meeting. Is there a standard for getting early feedback on work-in-progress features? | GitHub allows for PR to be in a "draft" state. Your team can see the differences, and even comment on it, but it's still obviously a work-in-progress, and cannot be merged until you click a "ready for review" button, which makes it mergeable. I'd also say that if it's a work-in-progress, give them a clue as to what you want them to focus on, such as saying "I'm most concerned about the payment processing algorithm in SomeClass". That way they don't spend time reviewing other parts of the code that are subject to change even as they review. See Draft Pull Requests and Convert Pull Request to Draft . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422110",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/112298/"
]
} |
422,210 | In The Pragmatic Programmer , the authors write: One of the benefits of detecting problems as soon as you can is that you can crash earlier, and crashing is often the best thing you can do. The alternative may be to continue, writing corrupted data to some vital database or commanding the washing machine into its twentieth consecutive spin cycle. ...when your code discovers that something that was supposed to be impossible just happened, your program is no longer viable. Anything it does from this point forward becomes suspect, so terminate it as soon as possible. To what extent does this principle apply in the context of GUI applications? That is, is the best course of action when faced with an unanticipated exception or an assertion failure to terminate the GUI program (possibly with an appropriate error messages to the user). What are the trade offs involved in applying it or not applying it? What about single-page javascript applications? For example, terminating the page (or perhaps prompting to refresh?) when an uncaught promise rejection is detected. | Quoting the same passage from the book (emphasis mine): One of the benefits of detecting problems as soon as you can is that
you can crash earlier, and crashing is often the best thing you can
do. The alternative may be to continue, writing corrupted data to some
vital database or commanding the washing machine into its twentieth
consecutive spin cycle. ...when your code discovers that something that was supposed to be impossible just happened, your program is no longer viable. Anything
it does from this point forward becomes suspect, so terminate it as
soon as possible. When a programmer uses an assertion, they're saying "This should never happen." Normally, terminating the program under these conditions is an appropriate response, especially since the programmer's assertion has been violated for unknown reasons. This is as true of a program with a GUI as it is for a console program or service. For normal exceptions, the question becomes the same as it's always been: can we meaningfully recover from this exception? That depends; did the exception occur during a write to a critical database, or did the user simply give us a file name that does not exist? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422210",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/69004/"
]
} |
422,378 | I think many people with even a small experience in designing UI/UX to handle user data will be familiar with the perils of putting in input field/database limits for personal data, such as names. However, when it comes to storing biometric data, such as in medical/patient management software, I might've assumed that there was some validation on input given the intended use-case! That seems as though it might not always be the case, having recently seen this tweet , in which someone was invited for their COVID-19 vaccine prematurely, apparently due to his GP surgery storing his height as 6.2cm, giving a BMI of 28,000. Questions: Is this just a flaw in their particular software? Is it possibly just the case that many of these systems were never intended for the mass selection of patient groups? Or are there valid reasons that you might not want to introduce input ranges and sanity checks to biometric data? Colour me only mildly concerned, given the AI-based future of medical decision making! | Is it possibly just the case that many of these systems were never intended for the mass selection of patient groups? This has absolutely nothing to do with it. Even if this software was only used in scope of retaining patient information when the patient visits their GP, the calculated BMI would've been incorrect. The issue with mass harvesting of data is that people don't invest any time in looking at specific entries anymore, and therefore they don't see obviously wrong data. In comparison, that doctor who looks at the patient info for the patient who is in front of them will notice that 28000 number. Is this just a flaw in their particular software? If the software was never required to put boundaries on data input, then not having boundaries isn't a flaw in the software. At best, it's a flaw in the requirements. The 28000 also wasn't a bad calculation either. It was a correct calculation based on the data that was input. You cannot blame a calculation for the correctness of its input, or what I like to refer to as "shit goes in, shit comes out". So you want to limit the height input then (and weight, but let's focus on height for now). What should the minimum limit be? Well, the shortest person recorded is about 62 cm. But what about when that record is broken? Because most records tend to get broken once in a while. Also, babies are generally 50cm, so maybe that's where the limit should be. But what about premature babies? Even only accounting for the viable range of premature births who have a reasonable chance at survival (which is 24 weeks), they can be as small as 22cm. So if you want to account for all humans, we could argue that 22cm is a reasonable minimum boundary. You should already notice that 22cm is still close to the 6.2cm figure we started with. I reverse engineered your example. For a 28000 BMI and a height of 6.2cm, you'd need to weigh about 108kg. But even if you disallow this height, yet still allow a height of 22cm, that still leads to a BMI of 2231.4. The BMI data is still nonsensical, even though both input values are within their individual normal ranges . We established that a height of 22cm is possible, and a weight of 108kg is also realistic. Your question is built on the assumption that such data validation would be trivial to implement without fault. The above calculation shows you that this assumption is incorrect. Or are there valid reasons that you might not want to introduce input ranges and sanity checks to biometric data? 
While people's height and weight isn't going to change overnight, it's generally inadvisable to add more restrictive validation to data than what was asked, based on nothing more than what a developer thinks might be a possible reasonable restriction. For example, my country's license plates used to be of the format AAA-000 (and initially, vanity plates weren't legal). Should software have only allowed this format? Well, it seems like you would have forced that. But when those license plates ran out, we started using 000-AAA . And when that ran out, we've started using 0-AAA-000 . If you had written those validation checks, you would've had to change and redeploy your application every time the format changed. And this is a relevant topic, because that is precisely what happened in my country. They had to manually update thousands of devices (speed cams, parking lot cameras, police vehicle cameras, ...) because they were unable to register these new license plates. Had they not bothered with this format validation, they wouldn't have had to update their software. Given that in this case it was embedded software on devices, having to redeploy is a cumbersome and expensive task. Similar issues could be encountered with: Landlines are 9 digits here, whereas cell phones are 10 digits Postal codes here are 4 digits, but they've introduced 5 digit codes recently House numbers are numeric, but there is a fringe case whereby a property that is split into two properties will get a "A/B/C/..." suffix. So what once was number 1 becomes numbers 1 and 1A. This is not the same as a box (i.e. number 1 box A). For example, we live at address Redacted Street 14A, but the building next door (Redacted street 14) is an apartment building, and labels their apartments A/B/C/... 14A is my house number. 14 box A is the nextdoor apartment on the first floor. You can imagine my frustration whenever I fill out a form and notice that the developers needlessly decided to enforce a numeric format in the number textbox. Colour me only mildly concerned, given the AI-based future of medical decision making! You're putting the cart before the horse here. Even if the patient info registration tool allows for inputting nonsensical data, that doesn't inherently mean that the interpretor of this data must blindly believe anything it is told. If you only could implement one validation, you'd put the validation on the AI , not on the data collection tool. If you blame any mistakes your AI makes on the input data rather than the AI, then your AI isn't an AI, it's just an algorithm. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422378",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/385311/"
]
} |
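A quick sanity check of the arithmetic in the answer above, as a plain Python sketch (the 108 kg weight is the value the answer reverse-engineered; heights are in metres):
def bmi(weight_kg, height_m):
    # Formula used in the answer above: weight divided by height squared.
    return weight_kg / height_m ** 2
print(round(bmi(108, 0.062)))    # 28096 -- the "about 28,000" figure from the tweet
print(round(bmi(108, 0.22), 1))  # 2231.4 -- still nonsense, even though 22 cm is a plausible height
print(round(bmi(108, 1.80), 1))  # 33.3 -- an unremarkable adult value
The point stands: both inputs can be individually "valid" while their combination is absurd, so the sensible place for the check is wherever the combined value is interpreted.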
422,479 | I have started in a new team. I have 20 years experience as a developer, and I have been in the role of a team lead in several projects. Normally I am very much pro code reviews, but I ended up in a team that use TDD up to religious fundamentalism. Mostly this is led by a single senior resource, me being the second senior. The result is that they have implemented a code review process that requires approval for merge. Not only that it requires approval, but it also requires each individual comment being responded. All that is very nice until you start getting pull requests that can not be approved in days with tens of comments each. In addition, when requests are done, the team does not focus on what is IMHO important (patterns, interfaces, encapsulation, layering, and method signatures), but on small details. Example: There is a code convention that methods doing things logically connected should be in close proximity to each other. But then if you actually require that the methods must be ordered by their chronological execution, that goes a bit further than the general rule. No one is just reasoning that if we, in the first place, did not have 50 methods in the same class, the positioning of the methods would possibly not matter that much. Code is just full of examples where developers just go in the nitty picky details instead of focusing on the general problem. Such a heavy code review process in my opinion creates a hostile atmosphere where a newcomer feels simply bad. How can I justify and defend the thesis that: The merge button should be enabled by default. IMO after some iterations, if the team is conscious about quality code and if someone is non-cooperative, the team will kick him/her out. The code review should be a recommendation, but not mandatory. I believe we are all grownups and it is natural to follow good advice. Again, if someone is stupid enough to not follow, in time the team will kick him/her out. The code author should have the right to merge the code within six hours, let’s say, of the pull request creation no matter if there is approval or not. | How can I justify and defend the thesis that: The merge button should be enabled by default The code review should be a recommendation , but not mandatory The code author should have the right to merge the code within 6 hours lets say of the pull request creation no matter if there is aproval or not. I don't think you should try and justify any of those, because they are almost certainly bad ideas. Code review is just about the one thing that has been consistently shown to significantly improve code quality, and you're effectively proposing to stop doing it. Instead, put your efforts into improving your code review process: the team does not focus on what is important(patterns, interfaces, encapsulation, layering, method signature) but on small details. This is the problem you need to fix. Work with your team to improve their abilities, both in writing code and reviewing it. Then you'll have changed a bad process into a good one. Oh, and never, ever use language like "religious fundamentalism" when discussing this. I hope I don't need to explain why. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422479",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/352126/"
]
} |
422,560 | I am currently designing a communication protocol for an embedded system. I've decided that authentication (but not encryption) is important, so I decided to add 4 bytes of truncated HMAC signature to every packet. At the same time, I also want to be able to do data verification to ensure that the data arrives correctly and hasn't been corrupted in transit. So I was thinking of appending a CRC32 to every packet. However, wouldn't the HMAC signature also be good enough for verifying the integrity of the packet? Is there any point in adding both an HMAC signature and a CRC32? Or is an HMAC enough? | The CRC32 does not give you any guarantees that the HMAC does not also give you. Put another way, the HMAC gives you all the guarantees the CRC32 gives you and more: the CRC32 protects against unintentional alteration due to common transmission problems such as noise and interference, the HMAC also protects against intentional alteration. The CRC32, however, may be less compute-intensive to verify. So, if your communication channel is very noisy and your receiver device is CPU-constrained, it may make sense to use the additional CRC32 to quickly throw away corrupted packets without having to verify the more expensive HMAC and only do the expensive HMAC verification on packets you know were at least not corrupted during transmission. This balance may tip, however, if your chosen CPU has built-in acceleration for the cryptographic primitives used in the HMAC. In the particular case of a noisy channel, it would probably make even more sense to use an Error Correction Code or some other mechanism for Forward Error Correction like a Hamming or Reed-Solomon code instead of only a mere Error Detection Code like CRC32. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422560",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124792/"
]
} |
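As a sketch of the scheme discussed in the answer above (Python standard library; the key and packet contents are placeholders, and HMAC-SHA256 is assumed since the question does not fix a hash): the CRC is only a cheap pre-filter, while the truncated HMAC is what actually authenticates the packet.
import hmac, hashlib, zlib
KEY = b"shared-secret"  # placeholder -- a real device would use a provisioned key
def tag(payload):
    # 4-byte truncated HMAC-SHA256, as described in the question (hash choice is an assumption)
    return hmac.new(KEY, payload, hashlib.sha256).digest()[:4]
def crc(payload):
    # optional fast pre-check before the more expensive HMAC verification
    return zlib.crc32(payload)
packet = b"\x01\x02temperature=21.5"
print(tag(packet).hex(), hex(crc(packet)))
# On receipt, recompute and compare the tag in constant time:
# hmac.compare_digest(received_tag, tag(payload))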
422,637 | I was recently part of a TDD development team. At a certain point I realized that there was a design mistake: instead of object-oriented design and structural design, the problems were solved via statuses. Tens of statuses. For a brief moment I thought - Hmm... maybe I can actually try to do some modeling and reduce the statuses. But I realized that the amount of generated unit tests that rely on these statuses is tremendous. At that point I lost all motivation to actually perform the change. What if there is a fundamental weakness in the solution, but the TDD cycle is already too far gone in the development? How do you make a design change in TDD when you have such huge amounts of tests that may be dependent on the existing implementation? | How do you make a design change in TDD when you have such huge amounts of tests that may be dependent on the existing implementation? First, the best approach may be to avoid running into this situation in the first place, and refactor earlier. Often people forget that refactoring should be done rigorously in each TDD cycle, and design flaws should be removed as soon as they become apparent. Given there are so many tests, as you said, I am pretty sure the design flaw could have been spotted much earlier, at a point in time when there existed a lot fewer tests. But what can you do if you have already painted yourself into that corner? The problem here is usually the amount of test code which uses the public-facing API of the Subject-Under-Test, which is what you want to change. So as an approach to resolve that problem, try to reduce the number of direct test calls to the public API: refactor the tests themselves first and make them more DRY, and build an anti-corruption layer, or a facade, or a proxy between your tests and the SUT, so you can change the API of the SUT without having to change too many parts of your tests. That will allow you to keep the tests as they are for now. Later, when you have some time for cleaning up, you may decide to migrate the tests to the new API one by one. The latter approach is also known as the strangler pattern and can often be used to gradually replace legacy components with components of a new design, not only for tests. As an example, if you have 50 tests calling the same public method of a class (and maybe only one or two places in your production code calling the same method), then the tests seem to hinder you from changing the signature of that method. But if most of the direct calls inside the test code are rerouted through a single helper method, maybe one which also does parts of the arrange-act-assert work, it will become a lot simpler to change the method signature of the SUT. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422637",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/314234/"
]
} |
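The "facade/proxy between your tests and the SUT" idea from the answer above can look like this minimal, hypothetical Python sketch (all names are invented; the real subject under test would differ):
class OrderService:  # invented stand-in for the real subject under test
    def submit(self, item, qty):
        return {"accepted": True, "item": item, "qty": qty}
class OrderServiceDriver:
    # Tests talk to this helper, not to OrderService directly, so a later
    # signature change on the SUT is absorbed in one place instead of 50 tests.
    def __init__(self, service):
        self._service = service
    def place_order(self, item, quantity=1):
        return self._service.submit(item=item, qty=quantity)
def test_placing_an_order():
    driver = OrderServiceDriver(OrderService())
    assert driver.place_order("book")["accepted"]
test_placing_an_order()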
422,907 | Is it an anti-pattern or code smell to put "general use" functions (examples below) into a catch-all file named "helpers" or "utils"? It's a pattern I've seen quite a lot in the JavaScript world, which I've also seen with other languages. By "general use" functions, I mean functions which abstract away some common, shared functionality used in an application/library and make them available for use in a general way. Some examples I've seen include returning a copy of an object with some keys omitted, transforming all null-ish values in a JavaScript object (eg. "" , null , {} , [] ) to undefined , constructing a URL from a struct of parameters, transforming strings in some fashion, etc, etc. I often come across applications or libraries with one (or more!) util.js or helpers.ts files which just seem to be, in my opinion, a dumping ground for unrelated functions. In my opinion, code is more readable and discoverable if it's named semantically. If working with the examples above, I'd place them all in their own files ( omit.js , null-to-undefined.js , url-builder.ts , etc), or if they are related functionality, group them (eg, deepClone , withoutKeys , shallowClone , in clone.js or similar). I struggle to articulate why this seems like a code smell or anti-pattern to me. I think there are various things at play: creating the wrong abstraction or abstracting at the wrong time (YAGNI) meaning code is just dumped "somewhere", lack of foresight for future maintainability. On the other hand, I've seen proponents of this pattern argue that it's a well-used pattern, and thus it has merit simply because it's well-used. Does this argument stand? Interesting in anyone's thoughts on this, whether you think it's good or bad, and if you can explain the pros/cons better than I can! | Both Java and C# have the existence proof that this works: the Math class . The danger is the "kitchen drawer" problem, where a class simply gathers new kitchen tools and implements, many of which never get used, or only get used once. So as long as you can keep such classes tightly focused on a theme (like the Math class is), I don't see a problem. In fact, static methods in classes of this kind tend to be easier to write, maintain and test, so long as they're always "pure" methods (i.e. referentially transparent, and you don't store static state in them). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/422907",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/381194/"
]
} |
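One way to keep such a module from becoming the "kitchen drawer" the answer warns about is to group by theme and keep everything pure. A small hypothetical Python sketch, mirroring the question's clone.js example (names invented):
# clone.py -- one theme (copying), pure functions, no shared state
import copy
def shallow_clone(obj):
    return copy.copy(obj)
def deep_clone(obj):
    return copy.deepcopy(obj)
def without_keys(mapping, keys):
    # copy of `mapping` with the given keys omitted
    drop = set(keys)
    return {k: v for k, v in mapping.items() if k not in drop}
print(without_keys({"a": 1, "b": 2}, ["b"]))  # {'a': 1}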
423,254 | A few days ago I had a conversation with a Civil Engineer with a background in Pascal and BASIC, and we talked about programming in Python. When I was talking, I used the term "code" to refer to a Python program, and he told me that he didn't know what "code" was, that the correct term was "algorithm", and that "code" was something else. I really didn't know how to refute him because the way I see it is that a program can be an "algorithm". I used that term only when I was starting to program simple programs. The word I use most is "code", and on the internet, almost everyone else uses that term. When does something go from being an algorithm to being code, if it can change. Maybe it can be both at the same time? | In short, while there are differences in the specific meaning of the words, that civil engineer was being needlessly pedantic and balking at you not using his preferred word. There was no justifiable reason to disrupt the flow of conversation other than them wanting to be a clever know-it-all. Arguing over the "algorithm" vs "code" moniker is like arguing whether what I'm sitting on right now is "furniture" or a "chair". These are not exact synonyms of one another and in some cases it can be one without being the other, but the specific designation really doesn't matter in scope of the current conversation. An algorithm is defined as : In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning. All code is essentially an algorithm. It's a sequence of well-defined instructions to get the computer to do the thing you want it to do. Can you have code that is not an algorithm? Pedants might argue that declarations (e.g. public class Foo {} ) are not algorithms and only operations (e.g. int c = b + a; can be considered algorithms. I don't quite agree, as the declarations are essential to the well-defined nature of the instructions (as they define the data used in the operations). In essence, if your language's native definition of int is acceptable, then my custom definition of class Foo is as well. I see no reason to distinguish between the two in this regard. Can you have an algorithm that is not code? Yes. Any set of calculation instructions is an algorithm. This could be a handwritten list of steps on how to e.g. calculate the length of the hypotenuse of a right triangle (i.e. Pythagoras' theorem): Square the length of each leg. Add them together. Take the square root. This is not code, but it is an algorithm. The furthest stretch I could give in favor of that civil engineer's argument is that you could argue that a compiled application is still an algorithm but has ceased to be code. But I doubt you were specifically talking about a compiled file, given that Python is an interpreted language, at which point this argument doesn't even apply in the civil engineer's favor. As an aside, while most definitions tend to restrict algorithms to the fields of mathematics and computer science, I personally see no reason why we couldn't consider e.g. a cooking recipe as an algorithm as well. It's still a sequence of well-defined instructions to achieve a specific predetermined outcome. But this is maybe a subjective argument and you might feel differently. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/423254",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/386643/"
]
} |
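To illustrate the distinction drawn above: the three handwritten steps are an algorithm, and the same algorithm expressed in Python is also code (a trivial sketch):
import math
def hypotenuse(leg_a, leg_b):
    # the three handwritten steps from the answer, now expressed as code:
    # 1. square the length of each leg, 2. add them together, 3. take the square root
    return math.sqrt(leg_a ** 2 + leg_b ** 2)
print(hypotenuse(3, 4))  # 5.0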
423,341 | I have been a home, amateur developer for 25 years and I just had a bright idea regarding comments. Like all such novel bright ideas, someone has probably already done it and there is probably a consensus on its brightness. I wrote a complex (to me at least) line of code and wanted to clearly comment it so that I can still understand it in a few days: // get all tags for the line, or empty array
// all tags available
// filtered for tags that are set
// do they include the current tag of the line?
// add the result of include to the container
(note.tags || []).forEach(tag => inc.push(Object.keys(this.allTags).filter(x => this.allTags[x]).includes(tag))) Each comment is vertically aligned with the piece of the line it refers to. Is this something acceptable? The obvious pro is that the comments actually relate to the piece that is being commented. The cons include apocalyptic line reformatting (losing the match between the indentation and the piece being commented) and probably the surprise to the person reading the code if this is not typical. | No, such aligned comments are not a good practice. It is not clear that the comments relate to specific positions on the line. It just looks like very wildly formatted code. The comment's indents will also be removed by an auto-formatter. Therefore, if you want to make comments about a specific part of the line, I'd put the spacing/offset within the comment: (note.tags || []).forEach(tag => inc.push(Object.keys(this.allTags).filter(x => this.allTags[x]).includes(tag)))
//^^^^^^^^^^^^^^^ ^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^ ^^^^^^^^^^^^^
// | | | | do they incude the current tag of the line?
// | | | filtered for tags that are set
// | | all tags available
// | add the result of include to the container
// get all tags for the line, or empty array However, this is still fairly unreadable because it's a very long line. It is also difficult to maintain, since tiny changes to the line would cause the ASCII art to get out of sync. I sometimes use this strategy for very tricky code,
in particular for documenting complex regular expressions. A better solution is to split this expression over multiple lines, and to potentially use intermediate variables to make the code more self-documenting. The variable names can describe the intent of the value they're holding: const allAvailableTags = Object.keys(this.allTags);
const tagsThatAreSet = allAvailableTags.filter(x => this.allTags[x]);
const tagsOnTheLine = note.tags || [];
// Check if the line tags are set.
// Add the result (true/false) to the `inc` container.
tagsOnTheLine.forEach(tag => {
inc.push(tagsThatAreSet.includes(tag));
}); Note that extracting the constant expressions also happens to avoid the nested forEach/filter loop. WoJ's answer also suggests splitting the expression over multiple lines,
but without introducing extra variables. But personally, I'd write it like this: // Check if this line's tags are set.
for (const tag of (note.tags || [])) {
inc.push(!!this.allTags[tag]);
} This uses the !! boolification pseudo-operator that converts a value to true/false. Equivalently, you could use the Boolean(...) function. Your allTag object also allows easy checks to see whether a tag is set, without having to filter the keyset first. Seeing such connections can be easier when the code is well-formatted. And as a general point, you might re-consider what is worth commenting. Every language has its libraries and idioms (small patterns). It is usually wasted effort to comment things that are very normal in the language, for example the object || default idiom, what the .filter() method does, or how !! works.
Therefore: Avoid writing very clever code – often a for-loop is clearer than functions like filter/map/reduce. Focus comments on the intent, not on the implementation. The why , not the how , because implementation details are easy to look up later. Bookmark a JavaScript reference like MDN to quickly look up language details you're not sure about. Use an editor/IDE such as VS Code that can show a documentation popup when hovering over a function/variable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/423341",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35682/"
]
} |
423,430 | Say I have a request payload PUT /user
{
email: "invalid"
...
} In the backend there is an email regex, which I cannot modify. Currently the behavior is to output: {
"error": "'email' fails to pass regex '<some_regex_here>'`
} Should I go with existing behavior or change the output response to {
"error": "'email' is invalid"
} | For any error message (and mostly for any message at all), you need to ask yourself: Who is the audience of the message? What can they do about the problem? What information do they need to solve the problem? I would argue that knowing the regex is pretty much useless to the end user, because even if they know what a regex is, it doesn't help them fix the problem: They made a typo; the fact that the email is wrong is enough information for them to take a second look at it. The email is correct; that means the regex is probably wrong. Doesn't help them (the end user) to fix the problem, because they don't have a problem. It is you (the developer) that has the problem. Knowing the regex would allow me to tweak the email address so that it passes the regex, but that makes no sense; if I tweak the email address just so that it passes the regex, it will no longer work for the intended purpose. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/423430",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/277973/"
]
} |
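A minimal sketch of the answer's point in Python (the regex here is an invented stand-in, not the real backend pattern): keep the technical detail where developers can see it, and give the user only the part they can act on.
import re
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # invented stand-in, not the real backend regex
def validate_email(value):
    if EMAIL_RE.fullmatch(value):
        return None
    # log the technical detail for developers...
    print(f"email {value!r} rejected by pattern {EMAIL_RE.pattern!r}")
    # ...but tell the user only what they can act on
    return {"error": "'email' is invalid"}
print(validate_email("invalid"))  # {'error': "'email' is invalid"}
print(validate_email("a@b.co"))   # None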
423,621 | Can I get some thoughts on the entity design? Let's say I have an entity called Book. Let's say I create a specific instance of that book. It has a title A and an author B. Since there can be multiple books, should the amount be included in the entity OR should I instead create a separate entry for each of the same book in my database ? So in each case, if the amount of books were 30, database would look respectively: id = 1, title = A, author = B, amount = 30; vs id = 1, title = A, author = B;
id = 2, title = A, author = B;
...
id = 30, title = A, author = B; Which method (if any) is considered a good practice? Since in this case there may be multiple identical books and I may want to update them, having one entry in database which would include the amount seems easier to update than n amount of db entries which only differ by id (every other data for that specific book being identical). I am developing using Java and Spring Boot if that matters. EDIT: all the answers I have received were very helpful to me. Too bad I can't accept all of them as an answer so I will have to go with the seniority! | This is not a question of "good practice", but a question of the requirements of the system. For example: Let's say your system is for a library. If the library has several instances of the same book, each copy will have a individual library id and individual attributes like its age, who borrowed it at what date, which condition the copy has, and maybe some more. Let's say your system is for an online book shop, mainly selling new books. Hundreds of the same copy of the book are sold daily, and the copies have all the same price and are exchangeable. Then the system will probably not give each book an own identity, and it is not important to keep track of each individual item. So storing an amount for each book will probably way more useful within the system. Often people run into the common misconception that there are "best practices" which can replace a thorough requirements analysis - do yourself a favor and don't fall into that trap. Data models are not living in "thin air", one needs context to make the right modeling decisions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/423621",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/387438/"
]
} |
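The two requirements sketched in the answer above lead to two different models. A hypothetical Python illustration (field names invented):
from dataclasses import dataclass
from typing import Optional
@dataclass
class StockItem:  # online shop: copies are interchangeable, so track a quantity
    book_id: int
    title: str
    author: str
    quantity: int
@dataclass
class LibraryCopy:  # library: every physical copy has its own identity and state
    copy_id: int
    book_id: int
    condition: str
    borrowed_by: Optional[int] = None
print(StockItem(1, "A", "B", quantity=30))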
423,680 | I'm reading about software architectures like Hexagonal architecture , Onion architecture etc. They put a big emphasis on decoupling. The business logic sits at the centre and the UI sits on the outside. The idea is that the UI should not touch the business rules at all. It should be totally dumb and should just relay commands and display any updated output. The problem I have is that I find this quite difficult to imagine in practice. The UI will likely implement conditional rendering, and much of that conditional rendering is a business rule in itself. Imagine a shopping cart system. For whatever reason the client decides they do not want promo codes to be added to an empty basket (this maybe isn't a great example but run with me). In your conditional render in the UI you would have to check if items.Count == 0 - and boom, you've just implemented business logic rules in the UI. Or would you have this rule in your DTO with a property called CanUserInputPromoCode ? Even then, the DTO isn't part of the domain logic, is it? Update : this is getting quite a bit of attention. A better use case regarding the promo code would be that users cannot enter any promo codes unless the value of the basket exceeds $50. That's a bit more clear rather than this being solely a UI issue. | When different people talk about decoupling the UI from the business logic, they sometimes mean different things: They can mean not to implement any UI independent logic inside an UI layer - all logic which can be useful outside an UI should be placed somewhere else. Your example shows such a case. CanUserInputPromoCode may be useful out of the UI, or at least not restricted to a specific UI design. The most natural place for CanUserInputPromoCode is probably not a DTO, but a business object Basket . That will allow to reuse it in case the Basket object might get used inside a non-UI process (for example, in an automated test). Or they mean to decouple the system from the specific UI technology . This can be realized by introducing architectures like MVC, or MVP, or MVVM, where there is an extra view model layer or presenter layer, which contains the UI controlling logic , but communicates with the UI through an interface (and so keeps the UI technology exchangeable). Note MVC, MVP or MVVM are not necessarily used for a UI technology exchange. For example in Web applications, designers often want or need the UI controlling logic on the server, whilst the UI uses HTML or Javascript on the client. Or they want UI controlling logic to become subject of an automated test. And yes, UI controlling logic is also "business logic". Your shopping cart example may require a clearly visible "Pay Now" button of a certain size and color at the check out, before any financial transaction takes place. That kind of business logic cannot be decoupled from the UI itself, but that is not meant in Hexagonal architecture or Onion architecture when they speak about decoupling from UI. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/423680",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/217506/"
]
} |
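A minimal sketch of the answer's suggestion to put CanUserInputPromoCode on a Basket business object, in Python (names hypothetical; the $50 threshold is the rule from the question's update):
class Basket:
    PROMO_THRESHOLD = 50  # dollars -- hypothetical constant for the client's rule
    def __init__(self, prices):
        self.prices = list(prices)
    @property
    def total(self):
        return sum(self.prices)
    @property
    def can_apply_promo_code(self):
        # the business rule lives here, not in the UI
        return self.total >= self.PROMO_THRESHOLD
print(Basket([19.99, 12.50]).can_apply_promo_code)  # False
print(Basket([45.00, 15.00]).can_apply_promo_code)  # True
The UI then only reads the flag and decides how (not whether) to render the promo-code input, which keeps the conditional rendering dumb.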
424,964 | I am at my company for half a year now and think that I have gotten a fair idea of their codebase. Initially I didn't dare to form strong opinions, but now I start to feel that the code could benefit from a more structure and more software engineering. My coworkers do a great job of adding new features, creating a cool product and so on. I just feel that there is no second refactoring and cleaning step, just the initial “making it work”. Given that we in the team are all scientists, and it is a growing company with a relatively new and rapidly evolving product, I can see that a flexible prototyping mindset was appropriate. At the university I have seen many projects start and eventually collapse into themselves as the grad student effectively was the project owner, they spend the time during their thesis on “making it work” and cut more and more corners until graduation. The next student would look at their code and usually throw it away. I don't want to see the company plateau in speed, so I believe that continuous refactoring needs to be part of the process. I've just read “Clean Code” (R.C. Martin), and had years ago read “Code Complete” and “Rapid Development” (S. McConnell). In a small side project I have recently performed refactoring via abstraction. I added dependency inversion on an external library and then exchanged that for a different one. It didn't took long and the result feels amazing. Also I have tried to refactor everything I can and I could directly sense the increase in speed going forward. When I cook, I try to clean as I go to have countertops usable. Some of my team members have a different perception on this, just like I did years ago. A decade ago I would laugh about Java, how people specified interface IWidget , class Widget and abstract class AbstractWidgetFactory and class FrobnicatingWidgetFactory when they could just have a class FrobnicatedWidget to start with. I thought that having less lines of code would be more readable in every case. But over the time I have changed a bit, I feel that if there are class TextLogger and class BinaryLogger there may is an unwritten structure with an interface Logger wanting to be made explicit. So in our code (which is Python and C++), I see abstractions and patterns which are implicitly present. And I would like to make them explicit. The co-workers find that adding any encapsulation or standard design patterns to the code explicitly makes it more complicated. Adding another virtual class as the parent supposedly only adds a new class, even though the current classes already have an (implicit) interface. And I want to modularize classes further, they say that increasing the number of classes only increases the complexity. I say that a single class that does too many things is more complicated than the same logic in multiple decoupled classes. But I don't seem to get them to see the things like I do. I am a scientist programmer myself. I have spent years writing code that works and just left it like that. Only over years of being annoyed with not understanding my own code I came to read about actual software engineering. I still don't dare to fully call myself “software engineer”, but I aspire to get there. And from the books I read I have the impression that I am on the right track. But of course I could be wrong. I would really just force everyone to read “Clean Code” and to start think exactly like I do and do as I think would be correct. But of course it doesn't work like that. 
And it doesn't make sense either; my judgement isn't perfect, my experience limited and so on. Rather, I would like to have a professional discussion but would need to have more convincing arguments for the people who feel that refactoring would slow us down and would make the code more complex. So my questions are: Am I generally on the right track with my perception of conception and mid-term maintenance costs? How can I get buy-in for and cleaner coding from coworkers who have a different perception of software development and mostly focus on getting things working? | The 2 years of experience me was you, but more extreme. I'd always create interfaces for every class, I'd apply any design pattern where I was able to, I'd never inject any concrete implementation, trying to write code that could adapt to anything ever. "You're using a database ? Create a repository, and don't forget the interface !!" I'd say. Now I found things are a lot more complex that I thought. Not every class need an interface. Mostly I like to wait for a good reason to actually do any refactoring. I don't apply design patterns, I just reduce duplication and poof, they "appear". Am I generally on the right track with my perception of conception and mid-term maintenance costs? I'm pretty sure you're not. Clean code and those kinds of books are tools to use, not rules of life to follow. Forcing to refactor after every single code written is a mistake. Refactor when you need to, when you feel like it'll actually help your next you to write that next feature about the same concept a lot faster. "Make the change easy, then make the easy change". Well if you coded something which is kind of the same thing twice already, and you know for sure you'll have one or two other occurrences, refactor this to make the next change easy. More code === more knowledge === better decisions. How can I get buy-in for refactoring and clean coding from coworkers who have a different perception of software development? Find concrete examples that would make their life easier. That's the point after all. You're not showing your code in a Victoria's Secret private show hoping to seduce some people. The point of clean code is to make your life easier. Also, you're talking to scientists. So prove your point. Use logic. That'll also prove to yourself that some "improvements" are actually useless now, and might be tragic later. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/424964",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22584/"
]
} |
424,966 | Assume a device using Raspberry Pi to control some hardware. This diagram tries to clarify the components: SOA concept feasibility is being explored right now. The motivations are: Components in different programming languages Need for maintainable components Need to support desktop, phone, etc. ... Would the SOA concept be a proper option? I'm just curious does anyone has any other architecture style in mind which might be suitable but I'm missing? Thanks. | The 2 years of experience me was you, but more extreme. I'd always create interfaces for every class, I'd apply any design pattern where I was able to, I'd never inject any concrete implementation, trying to write code that could adapt to anything ever. "You're using a database ? Create a repository, and don't forget the interface !!" I'd say. Now I found things are a lot more complex that I thought. Not every class need an interface. Mostly I like to wait for a good reason to actually do any refactoring. I don't apply design patterns, I just reduce duplication and poof, they "appear". Am I generally on the right track with my perception of conception and mid-term maintenance costs? I'm pretty sure you're not. Clean code and those kinds of books are tools to use, not rules of life to follow. Forcing to refactor after every single code written is a mistake. Refactor when you need to, when you feel like it'll actually help your next you to write that next feature about the same concept a lot faster. "Make the change easy, then make the easy change". Well if you coded something which is kind of the same thing twice already, and you know for sure you'll have one or two other occurrences, refactor this to make the next change easy. More code === more knowledge === better decisions. How can I get buy-in for refactoring and clean coding from coworkers who have a different perception of software development? Find concrete examples that would make their life easier. That's the point after all. You're not showing your code in a Victoria's Secret private show hoping to seduce some people. The point of clean code is to make your life easier. Also, you're talking to scientists. So prove your point. Use logic. That'll also prove to yourself that some "improvements" are actually useless now, and might be tragic later. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/424966",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/176220/"
]
} |
426,203 | Companies like Google and Microsoft use identifier-first screens: where you provide your identifier (like an email) before providing the password. Why is this done, is this somehow more secure? I'm setting up a login with Auth0 and identifier-first is one of the options; should I use it? | This is common with federated identity systems where a service authenticates users from many identity providers. Your email address is used to look up which identity provider can authenticate you. This could be a work, school, or personal account. Upon entering your work email, you would be redirected to a URL from your workplace where you enter your credentials before being redirected back to the service. This is also how services allow you to log in via Facebook, Google, and other popular social media networks. Which solution you choose as a service provider depends on your needs. You will need to evaluate each type and weigh the benefits and drawbacks. No system is perfect. You will need to learn how they work and what their vulnerabilities are. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/426203",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/391828/"
]
} |
429,041 | This doubt is about a function doing one thing, from Chapter 3 (Functions) of the book Clean Code. Here Uncle Bob is talking about this function: public static String renderPageWithSetupsAndTeardowns(
PageData pageData, boolean isSuite) throws Exception{
if (isTestPage(pageData))
includeSetupAndTeardownPages(pageData, isSuite);
return pageData.getHtml();
} In this function I can see, it's doing three different things: Determining whether the page is a test page. If so, including setups and teardowns. Rendering the page in HTML. But Uncle Bob says: If a function does only those steps that are one level below the stated name of function, then the function is doing one thing. What does that mean? How does this statement prove that the above function is doing one thing? | The fact that you were able to so easily make that list of bullet points is of relevance here :) This question might be considered opinion based, but this is about levels of abstraction. The concept may appear too academic, almost intimidating, but it just refers to the level of detail with which you express things, and the choice of particular detail you include - and this depends on who you are talking to and why. E.g., in an informal scenario, if someone asks you "When will you be available?", there's a difference between you answering "I'll get back to you in a week." and "Oh, man, I'm traveling, I'm going to this rock concert, the stage is within this medieval fortress, and I've heard so many good things from people, it's going to be such a blast, you're never going to believe who's playing [etc., etc.]!". In the second case, the person who asked the question might be able to extract the information relevant to them eventually, but the first level of abstraction is likely preferred - e.g., in a business scenario (where the person is busy and they just want to understand when you'll be available). Similarly, if someone asks you "Where are you going?", but from the context, it's clear to you that they aren't interested in all the detail, you could just say "To a rock concert in Hungary" - thus making a different choice of the relevant details. In software, you're expressing behavior using functions, and you're faced with similar choices, creating conceptual levels of abstraction. This is in part for the benefit of the readers of your code (other programmers, or future you), and in part for organization and maintainability. So, the exact natures of "level of abstraction", and "one thing", or "single responsibility" are necessarily somewhat up to you and depend on the particular problem you're trying to solve with your program. You try to identify different axes of change and divide responsibilities according to that, on multiple hierarchical levels of abstraction (you may need to continue to refactor towards this as you work on the project - this is not something you're going to get completely right from the very start). Now to the core of your question: In this function I can see, it's doing three different things: Determining whether the page is a test page. If so, including setups and teardowns. Rendering the page in HTML. What this function is doing is orchestrating those other functions which are one level of abstraction below. Its job is to put them together and decide what gets called when. Those other functions are doing the actual, individual things. The same idea applies recursively, within those lower-level functions as well. But Uncle Bob says: "If a function does only those steps that are one level below the stated name of function, then the function is doing one thing." I'm pretty sure that's what he means here. As a rule of thumb, if a function is only orchestrating functions that are one abstraction level below it, then there's a good chance it's doing a single thing in this sense.
Also, the code is more declarative - you can almost read it as a sentence, as you've demonstrated by creating your list of bullet points to which it maps almost 1-to-1. And it doesn't dig down into the lower level concerns that are not its business. Otherwise you'd have to mentally parse a bunch of if-s or loops and strangely named variables to even figure out what is the high-level thing the function is trying to do - this scenario corresponds to the situation described at the start, where a blabbering answer is given to a higher-level question. For the most part, when someone looks inside a function, they generally want to know what's it for, what its job. It's a high level question. Be deliberate about levels of abstraction and choose good names, and refactor to make sure that they can just read the answer off of your implementation . Don't settle for blabbering functions; your readers should be able to make a sensible bulleted list, just like you have done. If they need more detail, they can dig in, and if your names are good, they'll know there to look. On the next page, Uncle Bob says: "We want the code to read like a top-down narrative". This is, more or less, the idea he's describing there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429041",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/386019/"
]
} |
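To illustrate the rule of thumb from the answer above with fresh names, here is a hypothetical sketch (Python for brevity; every name is invented, not code from the book): the top-level function reads like the bulleted list, and everything it calls sits one abstraction level below its own stated name.
# hypothetical sketch, names invented -- the top function only orchestrates
def publish_monthly_report(month):
    data = load_sales_data(month)
    summary = summarize(data)
    send_to_subscribers(render_as_html(summary))
# each step hides its own lower-level detail behind an intention-revealing name
def load_sales_data(month): ...
def summarize(data): ...
def render_as_html(summary): ...
def send_to_subscribers(html): ...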
429,055 | I have a single-page application with an Angular frontend and a Spring REST backend. What is the proper way to handle a list of string-based options in the frontend that the end user can select from, which shall all be shown and may slightly change over time? i.e. gender ('Male', 'Female', 'Unisex') or jobs ('Student', 'Teacher', 'Engineer', ...) My approach: Store the lists as a key-value pair, i.e. gender ('Male' 0, 'Female'
1, 'Unisex' 2) and jobs ('Student' 0, 'Teacher' 1, 'Engineer' 2, ...) have an own GET-endpoint for every list (gender, jobs, ...) to fetch the key/value mapping backend only stores the id, mapping from id to value will be done in the frontend with the fetched list Questions: is there a best-practise for this? should I use key/value pairs at all or just store the strings itself as they are presented in the frontend? should I use a GET-endpoint to fetch the lists at all, or should I just hardcode them in the frontend? maybe forget about all this and do it differently? Thanks in advance! | The fact that you were able to so easily make that list of bullet points is of relevance here :) This question might be considered opinion based, but this is about levels of abstraction . The concept may appear too academic, almost intimidating, but it just refers to the level of detail with which you express things, and the choice of particular detail you include - and this depends on who are you talking to and why. E.g., in an informal scenario, if someone asks you "When will you be available?", there's a difference between you answering "I'll get back to you in a week." and "Oh, man, I'm traveling, I'm going to this rock concert, the stage is within this medieval fortress, and I've heard so many good things from people, it's going to be such a blast, you're never going to believe who's playing [etc., etc.]!". In the second case, the person who asked the question might be able to extract the information relevant to them eventually, but the first level of abstraction is likely preferred - e.g., in a business scenario (where the person is busy and they just want to understand when you'll be available). Similarly, if someone asks you "Where are you going?", but from the context, it's clear to you that they aren't interested in all the detail, you could just say "To a rock concert in Hungary" - thus making a different choice of the relevant details. In software, you're expressing behavior using functions, and you're faced with similar choices, creating conceptual levels of abstraction. This is in part for the benefit of the readers of your code (other programmers, or future you), and in part for organization and maintainability. So, the exact natures of "level of abstraction", and "one thing", or "single responsibility" are necessarily somewhat up to you and depend on the particular problem you're trying to solve with your program. You try to identify different axes of change and divide responsibilities according to that, on a multiple hierarchical levels of abstraction (you may need to continue to refactor towards this as you work on the project - this is not something you're going to get completely right from the very start). Now to the core of your question: In this function I can see, it doing three different things: Determining whether the page is a test page. If so, including setups and teardowns. Rendering the page in HTML. What this function is doing is orchestrating those other functions which are one level of abstraction below. Its job is to put them together and decide what gets called when. Those other functions are doing the actual, individual things. The same idea applies recursively, within those lower-level functions as well. But Uncle Bob says: "If a function does only those steps that are one level below the stated name of function, then the function is doing one thing." I'm pretty sure that's what he means here. 
As a rule of thumb, if a function is only orchestrating functions that are one abstraction level below it, then there's a good chance it's doing a single thing in this sense. Also, the code is more declarative - you can almost read it as a sentence, as you've demonstrated by creating your list of bullet points to which it maps almost 1-to-1. And it doesn't dig down into the lower level concerns that are not its business. Otherwise you'd have to mentally parse a bunch of if-s or loops and strangely named variables to even figure out what is the high-level thing the function is trying to do - this scenario corresponds to the situation described at the start, where a blabbering answer is given to a higher-level question. For the most part, when someone looks inside a function, they generally want to know what's it for, what its job. It's a high level question. Be deliberate about levels of abstraction and choose good names, and refactor to make sure that they can just read the answer off of your implementation . Don't settle for blabbering functions; your readers should be able to make a sensible bulleted list, just like you have done. If they need more detail, they can dig in, and if your names are good, they'll know there to look. On the next page, Uncle Bob says: "We want the code to read like a top-down narrative". This is, more or less, the idea he's describing there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/395568/"
]
} |
429,081 | The commonly endorsed, and considered the most reliable, way of evaluating the security of a program is through examining its source code. That is, this method is based on the fundamental assumption: "what you see is what is run". But if the program's memory has both 'writeable' and 'executable' attributes and the program can compile at run-time and execute arbitrary code, including the code that is not present in the sources - does this commonly used method of evaluating the program's security still hold? Don't we have a tradeoff here - between security and performance? If we could achieve without compilation to machine code, say, 75% of performance that is achieved with JIT - would JIT still be considered a good option? | JIT compilation is risky because of the W^X violation : at runtime, it is possible to generate new code, similar to an eval() in dynamic languages. But being able to dynamically generate executable machine code is not only essential to many high-performance runtimes including OpenJDK, .NET Runtime, and V8 – it's also super useful for malware. However, such a risk doesn't mean a JIT compiler is inherently insecure. The important insight is that just because a JIT compiler can produce native code doesn't mean that this code can now do anything. The JIT compiler can introduce restrictions of its own (e.g. ensuring memory safety), and the code is still limited by the security models of the CPU and the operating system. Thus, JIT compilation is at the heart of many highly secure sandboxes such as V8 or BPF. Also, not every W^X violation is equal. In a security-conscious program, all memory pages are either writeable or executable at any given time, but not both at the same time. A user-space JIT compiler will need to issue syscalls such as mprotect() to change the flags on a page, and these syscalls can be audited and possibly denied. A malware would either need to exploit a bug that introduces a page that is both writeable and executable, or would have to inject code into a writeable page that will later become executable. If the JIT compiler is written carefully – and the mentioned runtimes are incredibly robust and well-tested – such exploitable vulnerabilities will be quite rare. There is definitely a tradeoff between security and performance. However, a security-conscious JIT compiler will not lead to a large loss of security. JIT can however lead to a large gain of performance. In my experience, interpreters are often 10× to 100× slower than native code, but this is highly dependent on the use case and on the granularity of the interpreter. It is correct that JIT compilation makes static analysis on the level of machine code less useful. This might be unacceptable in some settings, for example in a certain app store that wants to review all the code. However, static analysis is inherently limited and often not suitable to provide strong security guarantees. Runtime checks that allowlist permissible operations and deny anything else are much more suitable to limit the behaviour of real-world programs. For example, a browser might sandbox untrusted code in a separate process in which JIT is allowed, but no interaction with the outside world except by sending messages to a supervisor process (e.g. enforcible by seccomp on Linux). Even if the sandbox runs malware, it will not be able to do anything that ordinary non-JIT code wasn't already able to do. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429081",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/395599/"
]
} |
429,092 | In my doctor's appointment booking system, I identified the following entities: Doctor Patient Appointment I also identified an aggregate, which is Doctor (aggregate root) and Appointment. It's an aggregate, as it has to hold an invariant of making sure that appointments assigned to the doctor do not overlap. In pseudocode the model looks as follows: class Patient(id, name, lastname) class Doctor(id, name, lastname, upcomingAppointments: List) class Appointment(id, patient_id, start_date_time, duration) Now, the requirements for my system are: schedule an appointment retrieve all appointments for the given patient Question 1: scheduling an appointment I see 2 ways to model scheduling an appointment: a) Have a Doctor#schedule method that would return a copy (I strive for immutability) of the Doctor with a new valid (non-overlapping) appointment. Then I'd have a DoctorRepository.update method that would store this aggregate. Pseudocode: transaction boundary start;
doctor = DoctorRepository.get(doctorId);
doctorWithNewAppointment = doctor.schedule;
doctorRepository.update(doctorWithNewAppointment);
transaction boundary end; but in this way I'd have to update the whole aggregate and also the other appointments that did not change. This would be bad performance-wise. b) Have a Doctor#schedule method that would return just a new valid (non-overlapping) appointment (I strive for immutability in my system). Then I'd have AppointmentRepository#insert to insert this new appointment. Pseudocode: transaction boundary start;
doctor = DoctorRepository.get(doctorId);
newAppointment = doctor.schedule;
appointmentRepository.insert(newAppointment);
transaction boundary end; Which one should I choose? Question 2: retrieving all appointments for the given patient I have a problem, because I've read, that entities, that are referenced from an aggregate cannot be referenced from the outside of the aggregate by other entities. That means (if I understand it correctly), that I cannot retrieve appointments outside of the Doctor aggregate. My requirement says I need to retrieve all appointments for a given patient. Now I have 2 options: a) Have a findAllAppointments(PatientId patientId) method inside DoctorRepository, but is it OK to retrieve entities, that belong to different instances of the same aggregate? b) Have a separate AppointmentRepository with findAll(PatientId patientId) method, but given I have an aggregate, is it fine to have a separate repository for an entity, that is a part of an aggregate? Which one should I choose? | JIT compilation is risky because of the W^X violation : at runtime, it is possible to generate new code, similar to an eval() in dynamic languages. But being able to dynamically generate executable machine code is not only essential to many high-performance runtimes including OpenJDK, .NET Runtime, and V8 – it's also super useful for malware. However, such a risk doesn't mean a JIT compiler is inherently insecure. The important insight is that just because a JIT compiler can produce native code doesn't mean that this code can now do anything. The JIT compiler can introduce restrictions of its own (e.g. ensuring memory safety), and the code is still limited by the security models of the CPU and the operating system. Thus, JIT compilation is at the heart of many highly secure sandboxes such as V8 or BPF. Also, not every W^X violation is equal. In a security-conscious program, all memory pages are either writeable or executable at any given time, but not both at the same time. A user-space JIT compiler will need to issue syscalls such as mprotect() to change the flags on a page, and these syscalls can be audited and possibly denied. A malware would either need to exploit a bug that introduces a page that is both writeable and executable, or would have to inject code into a writeable page that will later become executable. If the JIT compiler is written carefully – and the mentioned runtimes are incredibly robust and well-tested – such exploitable vulnerabilities will be quite rare. There is definitely a tradeoff between security and performance. However, a security-conscious JIT compiler will not lead to a large loss of security. JIT can however lead to a large gain of performance. In my experience, interpreters are often 10× to 100× slower than native code, but this is highly dependent on the use case and on the granularity of the interpreter. It is correct that JIT compilation makes static analysis on the level of machine code less useful. This might be unacceptable in some settings, for example in a certain app store that wants to review all the code. However, static analysis is inherently limited and often not suitable to provide strong security guarantees. Runtime checks that allowlist permissible operations and deny anything else are much more suitable to limit the behaviour of real-world programs. For example, a browser might sandbox untrusted code in a separate process in which JIT is allowed, but no interaction with the outside world except by sending messages to a supervisor process (e.g. enforcible by seccomp on Linux). 
Even if the sandbox runs malware, it will not be able to do anything that ordinary non-JIT code wasn't already able to do. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429092",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/381109/"
]
} |
429,196 | This doubt is about Switch Statements from Chapter 3: Functions of the book named Clean Code Here we have a function: public Money calculatePay(Employee e)
throws InvalidEmployeeType {
switch (e.type) {
case COMMISSIONED:
return calculateCommissionedPay(e);
case HOURLY:
return calculateHourlyPay(e);
case SALARIED:
return calculateSalariedPay(e);
default:
throw new InvalidEmployeeType(e.type);
}
} } From my inexperienced point of view, I can see that the switch statement inside the function calculatePay is only returning things based on the Employee object. Isn't it doing the "One Thing" mentioned by Uncle Bob? But Uncle Bob says: There are several problems with this function. First, it's large, and when new employee types are added, it will grow. Second, it very clearly does more than one thing. Third, it violates the Single Responsibility Principle (SRP) because there is more than one reason for it to change. Fourth, it violates the Open Closed Principle (OCP) because it must change whenever new types are added. But possibly the worst problem with this function is that there are an unlimited number of other functions that will have the same structure. How does the switch statement do more than one thing? | Martin's concept of "do one thing" is overly ambiguous, to the point I believe it does more harm than good. In the passage Martin states that a switch does one thing per case and therefore by definition does N things. This implies that any single method call is "one thing". If you follow this thinking, you will quickly realize a program will never be able to do anything! Martin has a different definition, which is that a method does "one thing" when all operations are on the same abstraction level. But the cases here, calculateCommissionedPay() and calculateHourlyPay(), do seem to be on the same abstraction level, so this contradicts his general criticism of switch statements as always doing N things. That said, there are reasons to avoid this particular use of a switch. The switch checks a type field and then selects the method to call based on that. The idiomatic way to do this in object-oriented programming is to use subclasses and overriding. Employee could have subclasses HourlyEmployee, SalariedEmployee etc., which override a calculatePay() method. That way you could avoid the switch altogether and just call e.calculatePay(). But if the input really is an enum value as in the example, then you need a way to get the appropriate Employee subclass given this enum value. How do you do that? A switch of course! So you end up with code something like this: public Employee createEmployee(int employeeType)
throws InvalidEmployeeType {
switch (employeeType) {
case COMMISSIONED:
return new CommissionEmployee();
case HOURLY:
return new HourlyEmployee();
case SALARIED:
return new SalariedEmployee();
default:
throw new InvalidEmployeeType(employeeType);
}
}
public Money calculatePay(int employeeType)
throws InvalidEmployeeType {
Employee e = createEmployee(employeeType);
return e.calculatePay();
} You will notice a few things: We still have a switch which allegedly does "N things". The switch will still have to grow when new employee types are added. We still have an Open/Closed violation, since adding a new subclass will require us to modify the switch, just as before. But if there are multiple places in the code where we switch on the employee type as in the first example, this refactoring is a clear improvement, since we now only need one single switch and can rely on overriding in the other places. If it is not clear from the above, I reject the premise that switch statements are bad and should be avoided in general. Sometimes it is the right tool for the job. But certain uses of switch are an anti-pattern, for example using a switch as a substitute for overriding, when polymorphism would be more appropriate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429196",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/386019/"
]
} |
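To make the refactoring described in the answer above concrete, here is a minimal Python sketch (the class and enum names are illustrative, not code from the book or the original post): subclasses override a single method, and the one remaining type switch shrinks to a small factory lookup.

from abc import ABC, abstractmethod
from enum import Enum

class EmployeeType(Enum):
    COMMISSIONED = 1
    HOURLY = 2
    SALARIED = 3

class Employee(ABC):
    @abstractmethod
    def calculate_pay(self) -> float:
        ...

class CommissionedEmployee(Employee):
    def calculate_pay(self) -> float:
        return 0.0  # placeholder for the commission-based calculation

class HourlyEmployee(Employee):
    def calculate_pay(self) -> float:
        return 0.0  # placeholder for the hourly calculation

class SalariedEmployee(Employee):
    def calculate_pay(self) -> float:
        return 0.0  # placeholder for the salaried calculation

# The single remaining "switch": a lookup table used only by the factory.
_EMPLOYEE_CLASSES = {
    EmployeeType.COMMISSIONED: CommissionedEmployee,
    EmployeeType.HOURLY: HourlyEmployee,
    EmployeeType.SALARIED: SalariedEmployee,
}

def create_employee(employee_type: EmployeeType) -> Employee:
    try:
        return _EMPLOYEE_CLASSES[employee_type]()
    except KeyError:
        raise ValueError(f"invalid employee type: {employee_type}")

def calculate_pay(employee_type: EmployeeType) -> float:
    # Dynamic dispatch does the work; no switch needed at the call site.
    return create_employee(employee_type).calculate_pay()

Every other place that needs type-specific behaviour now calls the overridden method instead of repeating the switch.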
429,331 | I have a database with a 1:m relationship. I have to display a list of parents to the user rapidly on a home screen at startup. The parent shows a single piece of information that is a sum of a particular child field for that parent. I don’t want the expense of making large queries to the child table (essentially doing lists of calculations on the entire child table) - in order to show the parents. I also feel it's important to show the sum to the user on the home screen. I have therefore denormalised the sum of all children for a particular parent and added it as a field within the parent table. Each time a CRUD operation is done within the child table (with an ID that matches a particular parent), I recalculate and reinsert the new value into the parent field. Is this an anti-pattern and therefore 'bad' practice? I felt I was doing the right thing by prioritising performance for the UI. | Is denormalisation for performance reasons an anti-pattern? Not of itself - if something is required, then you have to find a way to do it, and it may well be better to denormalise your data than spend $$$ on a SuperHugeDatabaseInstance to get you the speed you need. I've certainly done this in the past when we were calculating summary data for billions of data points. Is denormalisation for performance reasons before you've measured things and found out if you actually need to do it an anti-pattern? Yes, every time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/396123/"
]
} |
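For illustration only, a small sketch of the pattern the question describes: a denormalised total kept on the parent row and refreshed whenever a child row changes. The parent/child schema and the in-memory SQLite database are assumptions made up for the example, not taken from the question.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, child_total REAL NOT NULL DEFAULT 0);
    CREATE TABLE child  (id INTEGER PRIMARY KEY, parent_id INTEGER NOT NULL, amount REAL NOT NULL);
""")

def refresh_parent_total(parent_id: int) -> None:
    # Called after any INSERT/UPDATE/DELETE on child rows for this parent.
    conn.execute(
        "UPDATE parent SET child_total ="
        " (SELECT COALESCE(SUM(amount), 0) FROM child WHERE parent_id = ?)"
        " WHERE id = ?",
        (parent_id, parent_id),
    )

conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (parent_id, amount) VALUES (1, 2.5), (1, 4.0)")
refresh_parent_total(1)
print(conn.execute("SELECT child_total FROM parent WHERE id = 1").fetchone())  # (6.5,)

Whether the extra bookkeeping is worth it is exactly the answer's point about measuring first; an indexed SUM ... GROUP BY query may already be fast enough for the home screen.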
429,425 | Our company have a support team and a dev team. We require reproduction steps on every bug report, however sometimes the support team submit reports without and become frustrated when dev close them as "cannot reproduce / cannot fix". Support then make the argument "if I take my car to the garage, the mechanic will look at the engine for me to diagnose and fix the problem, I don't need to know how my engine works, and I shouldn't have to spell out to the mechanic that my car makes odd noises". What's a good way to explain WHY reproduction steps are important in response to this sort of challenge? Searching for the topic finds lots of links with information on HOW to write good reproduction steps, but I cannot find any links that answer WHY they're important from the point of view of someone who is not a developer. Edit: similar questions / sites / blogs the advise seems to amount "Mark it as cannot reproduce and they'll have to fix their work and resubmit the bug. Eventually they'll learn to do it right the first time", but I'd much prefer a constructive discussion on why I'm saying I can't fix it so that there's genuine understanding of what sorts of things the support team should be doing rather than completing a template by rote and being upset when their work is rejected. | To follow on from the car analogy, I've used the following in other contexts: Say you took your car into the mechanic and said the battery keeps dying. He runs thorough tests on the battery and the electrical system and finds nothing wrong. The mechanic asks you how to reproduce the problem and you get annoyed at the question and say it just happens randomly for no reason. The mechanic digs deeper and eventually discovers you've been frequently leaving your headlights on all night, which as a brand new car owner, you didn't realize would drain the battery. Now you might say that the car should prevent the battery from dying if you leave the lights on. That's a reasonable claim, and many modern cars do just that. However, it was still impossible to make the diagnosis without knowing the steps to reproduce the problem. And at least in this case, they mentioned the battery. Some bug reports are like, "my car is broken sometimes but it's fine now" and they don't know why you want more detail. Also, the best mechanics will have the conversation first and not even charge you. It's not a perfect analogy, because in a computer application, "leaving the lights on" might not have been a user error, but an unintended side effect of another feature, or subtle interaction between two features. You also can add preventative measures and logging even if you don't precisely know what the cause is. And I would consider a mechanic very poor if they just returned your car without at least having a discussion about possible root causes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429425",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/396309/"
]
} |
429,601 | Having worked in complex solutions that had Unit Tests and Integration Test in the CI/CD pipeline, I recall having a tough time with tests that failed randomly (either due to random values being injected or because of the async nature of the process being tested - that from time to time resulted in some weird racing condition). Anyway, having this random behavior in the CI pipeline was not a good experience, we could never say for sure the change a developer committed was really causing the build issue. I was recently introduced to AutoFixture, which helps in the creation of tests, by randomly generating values - surprisingly I was the only one who did not feel it was a great idea to introduce it in all tests of our CI pipeline. I mean, I understand fuzz testing, monkey testing, etc but I believe this should be done out of the CI/CD pipeline - which is the place I want to ensure my business requirements are being met by having sturdy, solid and strict to the point tests. Non linear behavior tests like this (and load testing, black box, penetration, etc) should be done outside of the build pipeline - or at least should not be directly linked to code changes. If these side tests ever find a behavior that is not expected, a fix should be created and a new concrete and repeatable test case should be added to avoid going back to the previous state. Am I missing something? | Yes, I agree that randomness shouldn't be part of a testing suite. What you want is to mock any real randomness, to create deterministic tests. Even if you genuinely need bulk random data, more than you can be bothered generating by hand, you should generate it randomly once, and then use that (now set in stone) data as the "random" input for your tests. The source of the data may have been random but because the same data is reused for each run, the test is deterministic. However, what I said so far applies to running the tests you knowingly wrote and want to run. Fuzz testing, and therefore Autofixture, has a separate value: bug discovery . Here, randomization is actually desirable because it can help you find edge cases that you hadn't even anticipated. This does not replace deterministic testing, it adds an additional layer to your test suite. Some bugs are discovered through serendipity rather than by intentional design. Autofixture can help cycle through all possible values for a given input, in order to find edge cases that you likely wouldn't have stumbled on with limited hand-crafted simple test data. If and when fuzz tests discover a bug, you should use that as the inspiration to write a new deterministic test to now account for this new edge case. In short, think of fuzz tests as a dedicated QA engineer who comes up with the craziest inputs to stress test your code, instead of just testing with expected or sensible data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429601",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/389668/"
]
} |
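A small sketch of the "generate once, then freeze" approach implied by the answer above: bulk random-looking data comes from a fixed seed, so every CI run sees exactly the same inputs. The normalize function is a stand-in for whatever is really under test.

import random
import unittest

def normalize(values):
    # Stand-in for the real code under test.
    total = sum(values)
    return [v / total for v in values]

# Deterministic "random" fixture: the fixed seed makes the data identical on every run.
_rng = random.Random(20210901)
FIXTURE = [_rng.randint(1, 1000) for _ in range(500)]

class NormalizeTests(unittest.TestCase):
    def test_normalized_values_sum_to_one(self):
        self.assertAlmostEqual(sum(normalize(FIXTURE)), 1.0, places=9)

    def test_largest_input_stays_largest(self):
        result = normalize(FIXTURE)
        self.assertEqual(result.index(max(result)), FIXTURE.index(max(FIXTURE)))

if __name__ == "__main__":
    unittest.main()

Exploratory fuzzing with a fresh seed can still run as a separate, non-gating job; when it finds a failure, the offending input gets promoted into a fixed fixture like this one.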
429,855 | I usually write my code in a test driven style. I write tests as specifications and then my code. It's great and useful. I always try to ignore implementation when testing and only test behaviour. I don't care how it gets done, just that it got done. I find this especially easy for functional programming. Now here is the problem I have found. I have an app that is written in a functional style. All of the unit tests are nice, clean, behavioural tests. I only ever check output and don't do things like "did you call this function?". At some point however, I start needing "glue" functions. I'll consider these functions that don't introduce a lot of functionality and largely just call my other existing functions. Perhaps chaining a bunch together or whatever feature it may be. How do I test these glue functions? I ask because I want to avoid two main things as much as possible: I don't want to test them by simply mocking what they do and seeing if specific functions were called. They have a desired output I want, but usually this output is just a series of outputs from other functions that are already tested. I don't want to repeat myself and just "re-test" those inner functions to see if my glue function called them. Hopefully that makes sense. Here is an example (written in pseudo code): func1 (x) => x + 1;
func2 (x) => x * 2;
glue (x) => [func1(x), func2(x)]; Here would be a simple way of testing these functions. testFunc1 () => expect func1(2) == 3;
testFunc2 () => expect func2(2) == 4;
testGlue () => expect glue(2) == [3, 4]; So obviously, glue has an expected and predictable behaviour I want to model. I know that in this example these tests might be ok. So consider instead that the outputs of func1 and func2 are not simple numbers but much more complicated objects. In such a case, implementing the checks that glue output the correct objects would be tedious AND it would be totally duplicated from the individual tests of func1 and func2 . This also leads into the next issue. Instead consider: testGlue () => expect glue(2) == [func1(2), func2(2)]; This certainly seems better. But I think it is still flawed. While this means I am not repeating my test code it now instead "tests that the code you wrote is the code you wrote" (as opposed to what the behaviour is). Again, in such a small example it's not an issue so pretend that within glue a few variables are swapped around and yada yada is done so that to test it in this way would require my test to also set up the variables like such. Then we would be basically copying the code from the function to check if func1 and func2 were called with the correct variables leading to repetition and testing that "it's the code you wrote". If a larger example is needed to showcase such results just let me know and I will get one. Hopefully there is some good discussion to be had here. I anticipate someone to answer "don't use glue functions" and to that I preemptively ask, "what's the alternative method?". EDIT: So I am beginning to think that an alternate question that would also give me the answer I want is this. Consider that the output of func1 and func2 is something too big to feasibly have as a hardcoded value in the test. Maybe it's an object or something. Does writing my test of glue as: testGlue () => expect glue(2) == [func1(2), func2(2)]; No hold on, I must clarify something. Obviously the above test is absolutely stupid. It is just "the code I wrote is the code I wrote".
We MUST imagine that the function does more than this. In a real world scenario glue would do some processing of x before passing it around. The order of the array might matter. And however many other options. So maybe I'm checking that glue(2)[0] = func1(3) instead (pretend there is further processing to it). In such a case, is it still considered bad practice to use the output of a function as something to test against (even though that function is tested somewhere else)? | Boring structural code doesn’t need isolated testing. Test interesting code. That code has a behavior. Nail down the behavior you expect and not only will your code likely be correct, it’ll be easier to read. But keep that interesting code away from the boring structural code. Do that and I’ll be able to read it and trust it without a test that isolates it. Now if the boring structural code is part of a chain of integrated peripherals and behavior objects then fine, throw an integration test at it. If you’d like to test if I’m full of it, break the structural code and see how long it takes someone to find the problem. Don’t waste time solving non problems. At best you’re only amusing yourself. At worst you’re actually making it harder to refactor the code. Remember: it’s not that every function needs a test. It’s every behavior. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429855",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/359821/"
]
} |
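To make the discussion above concrete, here is the question's pseudocode turned into runnable Python with the kind of behaviour-level test the answer argues for: the glue is pinned down by one literal expectation (or left to an integration test), not by mocking which functions it called.

def func1(x):
    return x + 1

def func2(x):
    return x * 2

def glue(x):
    # Boring structural code: it only wires the two behaviours together.
    return [func1(x), func2(x)]

def test_func1():
    assert func1(2) == 3

def test_func2():
    assert func2(2) == 4

def test_glue_combines_both_results():
    # Behaviour check with literal expected values; no mocks, and no
    # re-deriving the expectation from func1/func2 inside the test.
    assert glue(2) == [3, 4]

If glue ever grows interesting logic of its own (pre-processing x, ordering, filtering), that logic becomes a behaviour worth pinning down with its own literal expectations.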
429,860 | I want to send two different json response for the same API based on a flag. I have a final response json as below for /api/v1/student/, which is created by fetching responses from few other REST apis. {
"students": [{
"name": "string",
"courses": [{
"courseid": "string",
"coursename": "string"
}],
"studentId": "string",
"enrollments": [{
"key1": "string",
"key2": "string",
"key3": "string",
"curricular": [{
"date": "string",
"item1": "string",
"item2": "string",
"fees": "string",
"details": [{
"item1": "string",
"item2": "string"
}]
}]
}]
}]
} Consider a scenario where I have to support this response and another similar response where the "details" block will change based on a flag. Is creating two versions of api as /api/v1/student/ and /api/v2/student/ the only solution or there is other better approach(es)? A follow up question for going with v2 approach:
I am structuring the response as these model objects - Details, Curricular, Enrollment, Students ex: Students : {
Enrollment : {
Curricular: {
Details
}
}
} and now since the nested block - Details is changing I will have to repeat all model classes and mark them as V2 and also duplicate the code for mapping the fields. Does that look right? Thanks for your help! | Boring structural code doesn’t need isolated testing. Test interesting code. That code has a behavior. Nail down the behavior you expect and not only will your code likely be correct, it’ll be easier to read. But keep that interesting code away from the boring structural code. Do that and I’ll be able to read it and trust it without a test that isolates it. Now if the boring structural code is part of a chain of integrated peripherals and behavior objects then fine, throw an integration test at it. If you’d like to test if I’m full of it, break the structural code and see how long it takes someone to find the problem. Don’t waste time solving non problems. At best you’re only amusing yourself. At worst you’re actually making it harder to refactor the code. Remember: it’s not that every function needs a test. It’s every behavior. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/429860",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/397060/"
]
} |
430,396 | I am designing an AWS web service which is going to get 1000 TPS from devices (Android) and it has dependencies on multiple downstream services. The use case is to hit this service periodically from the device, get a piece of data and cache it in the device memory. Since the device does not require the data immediately, I designed it this way: Device: puts a request in a queue (SQS)
Service: polls messages from SQS, process the requests and publish the result to devices via FCM The Problem is service takes at max 2 seconds to process the request and downstream services would not scale as much as service. In short, I can only process fraction of incoming requests per second (Lets say 200 requests, 20% of actual TPS). This leads to backpressure build up in the request queue. Reading through Internet, I found that general strategies to handle backpressure are make queue bounded, throttle the producer when it exceeds its size and make producer retry after some delay Increase number of consumers. (In this case, This is not an option due to downstream bottleneck) Questions: How throttling helps to solve backpressure problem? If the TPS is consistent, Wouldn't it create the same problem even when producer retry after delay? At the end some producers will exhaust retries and requests go unprocessed. Initially I wasn't aware of backpressure and was thinking storing messages in queue will aid asynchronous processing but now I am starting to feel queue is creating more problems than It helps. Is queue even relevant for this usecase ? What are the real benefits of having a queue in front of service? Appreciate any help!! | A good analogy here is to think of a dam on a river. The river corresponds to the incoming data, and the dam to your consumer. There are three possibilities at any given point: The incoming river's flow is greater than the dam's outflow The incoming river's flow is less than the dam's outflow The incoming river's flow is the same as the dam's outflow In situation 1, a lake grows behind the dam. This corresponds to your queue. In scenario 2, the lake shrinks. In scenario 3, the lake's volume doesn't change. A big part of why we have dams is to make the downstream flow more consistent. That is, when there's a heavy rain, the lake will get larger but the flow out can be limited. When there is a drought, the lake's reserves are drawn down to keep the outflow higher than it would be otherwise. So the volume of the lake is equivalent to the total inflow minus the outflow. There is a limit, however, to the amount of water the lake can hold. When the lake is full, you either need to release more water or somehow divert the incoming water. This corresponds to the iron law of queuing: the depth of the queue is the number of messages received minus the number of messages processed (or removed.) There's no magic. If you don't, on average, pull as many messages as you are putting on the queue, it will grow and eventually hit some sort of size limitation. Queues don't allow you to process more messages; they act as a buffer to help even out the flow and prevent failures when the incoming volume spikes. They also help with distributing the messages to multiple consumers efficiently. Alternately, they can be used as a 'holding area' for batch processing as noted by 'supercat' in the comments. But the law still holds: your overall processing rate must accommodate the incoming rate or your queue will grow. The upshot: to resolve this issue, you need to either send less to the queue or process them faster. There is no other solution. Backpressure is actually a good thing in a lot of scenarios. It allows the producers to know when the queue is filling up so they can react. It sounds to me that your issue is the 'downstream bottleneck'. You will never be able to process the volumes that you have coming in until you resolve that. 
A queue will simply delay how long it takes until you can no longer accept the incoming data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/430396",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/392343/"
]
} |
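The "iron law of queuing" from the answer above can be sanity-checked with a few lines of arithmetic; the 1000 TPS inflow and roughly 200 TPS of downstream capacity are the figures given in the question.

# Queue depth = messages received - messages processed, whenever inflow exceeds outflow.
inflow_tps = 1000    # requests arriving per second (from the question)
outflow_tps = 200    # requests the downstream dependencies can absorb per second

def queue_depth_after(seconds: int) -> int:
    return max(0, (inflow_tps - outflow_tps) * seconds)

for t in (1, 60, 3600):
    print(f"after {t:>5} s the queue holds ~{queue_depth_after(t):,} messages")

# after     1 s the queue holds ~800 messages
# after    60 s the queue holds ~48,000 messages
# after  3600 s the queue holds ~2,880,000 messages

No buffer size makes that sustainable: either the producers must send less (throttling, batching, sampling on the device) or the consuming side must somehow catch up.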
430,404 | I am working on research that analyzes dependency injection (DI) in Java projects. The more I read, the more I get confused by DI in relation to other frameworks and even software quality metrics. I have been recently reading about afferent couplings (Ca), efferent couplings (Ce), and instability (I) with the formula below proposed by Martin Fowler: I = (Ce/(Ce + Ca)) I noticed that the definition of efferent couplings is that it counts the number of classes the current class depends on. Is that essentially the same as dependency injection definition-wise, or are there more nuances to what is considered a dependency that has the DI framework as opposed to simply a class that the current class depends on? | A good analogy here is to think of a dam on a river. The river corresponds to the incoming data, and the dam to your consumer. There are three possibilities at any given point: The incoming river's flow is greater than the dam's outflow The incoming river's flow is less than the dam's outflow The incoming river's flow is the same as the dam's outflow In situation 1, a lake grows behind the dam. This corresponds to your queue. In scenario 2, the lake shrinks. In scenario 3, the lake's volume doesn't change. A big part of why we have dams is to make the downstream flow more consistent. That is, when there's a heavy rain, the lake will get larger but the flow out can be limited. When there is a drought, the lake's reserves are drawn down to keep the outflow higher than it would be otherwise. So the volume of the lake is equivalent to the total inflow minus the outflow. There is a limit, however, to the amount of water the lake can hold. When the lake is full, you either need to release more water or somehow divert the incoming water. This corresponds to the iron law of queuing: the depth of the queue is the number of messages received minus the number of messages processed (or removed.) There's no magic. If you don't, on average, pull as many messages as you are putting on the queue, it will grow and eventually hit some sort of size limitation. Queues don't allow you to process more messages; they act as a buffer to help even out the flow and prevent failures when the incoming volume spikes. They also help with distributing the messages to multiple consumers efficiently. Alternately, they can be used as a 'holding area' for batch processing as noted by 'supercat' in the comments. But the law still holds: your overall processing rate must accommodate the incoming rate or your queue will grow. The upshot: to resolve this issue, you need to either send less to the queue or process them faster. There is no other solution. Backpressure is actually a good thing in a lot of scenarios. It allows the producers to know when the queue is filling up so they can react. It sounds to me that your issue is the 'downstream bottleneck'. You will never be able to process the volumes that you have coming in until you resolve that. A queue will simply delay how long it takes until you can no longer accept the incoming data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/430404",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/398018/"
]
} |
430,582 | I can't seem to find a good enough answer, nor can I reason it out myself. Why deploy to the development environment? In most examples I've seen, automated unit tests and integration tests run as part of CI when we merge a feature into the develop branch. As far as I'm aware, there is no need to deploy when conducting these types of tests. I can see why deploying to a staging environment is necessary, but not the development environment. So what are we supposed to do in a deployed development environment that I fail to see? | You deploy to dev because you can't actually do real integration testing without actually really running the code. You can call the environment something else like Staging , but if it's the lowest non-production environment, it's just dev by another name. There are plenty of things that can work locally, but break when deployed, firewalls, missed dependencies, bad configuration, services that don't actually match contracts, and more. Having a real dev environment helps find these sort of issues, and fix them faster without compromising the staging environment that should be dedicated for user acceptance testing (UAT) . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/430582",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/398328/"
]
} |
431,102 | I was invited to review some code.
I've never done that before and barely know the project.
Thus I'm following the README step by step.
I log everything. The README says in its "Quickstart" section: docker-compose up -d This line raised "..denied..access forbidden.." which led me to invest some reasonable time into researching it. A friend later told me to add build: . and remove image: docker.git... from the docker-compose.yml . This fixed it. I'm tempted to change the README accordingly, though I hesitate as maybe I'm simply lacking the skills my role demands. https://readmetips.github.io/#essential-tips-for-a-nice-readme | If you think of this from a business perspective: the more time developers (whether they are new or experienced) need to find things out, the more expensive it is for the client. So: The more information in the README to help people get up and running quickly, the better and cheaper for the client. In other words: By all means add information to the README to help people get it working without wasting time and money. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431102",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/400434/"
]
} |
431,240 | PEP 8 states the following about using anonymous functions (lambdas) Always use a def statement instead of an assignment statement that
binds a lambda expression directly to an identifier: # Correct: def f(x): return 2*x
# Wrong: f = lambda x: 2*x The first form means that the name of the resulting function object is
specifically f instead of the generic <lambda> . This is more
useful for tracebacks and string representations in general. The use
of the assignment statement eliminates the sole benefit a lambda
expression can offer over an explicit def statement (i.e. that it can
be embedded inside a larger expression) However, I often find myself being able to produce clearer and more readable code using lambdas with names. Consider the following small code snippet (which is part of a larger function) divisors = proper(divisors)
total, sign = 0, 1
for i in range(len(divisors)):
for perm in itertools.combinations(divisors, i + 1):
total += sign * sum_multiplies_of(lcm_of(perm), start, stop - 1)
sign = -sign
return total There is nothing wrong with the code above from a technical perspective. It does precisely what it intends to do. But what does it intend to do? Doing some digging one figures out that oh right, this is just using the inclusion-exclusion principle on the powerset of the divisors. While I could write a long comment explaining this, I prefer that my code tells me this. I might do it as follows powerset_of = lambda x: (
itertools.combinations(x, r) for r in range(start, len(x) + 1)
)
sign = lambda x: 1 if x % 2 == 0 else -1
alternating_sum = lambda xs: sum(sign(i) * sum(e) for (i, e) in enumerate(xs))
nums_divisible_by = lambda xs: sum_multiplies_of(lcm(xs), start, stop - 1)
def inclusion_exclusion_principle(nums_divisible_by, divisors):
return alternating_sum(
map(nums_divisible_by, divisor_subsets_w_same_len)
for divisor_subsets_w_same_len in powerset_of(proper(divisors))
)
return inclusion_exclusion_principle(nums_divisible_by, divisors) Where lcm_of was renamed to lcm (computes the lcm of a list, not included here). Two keypoints 1) The lambdas above will never be used elsewhere in the code 2) I can read all the lambdas and where they are used on a single screen . Contrast this with a PEP 8 compliant version using def s def powerset_of(x):
return (itertools.combinations(x, r) for r in range(start, len(x) + 1))
def sign(x):
return 1 if x % 2 == 0 else -1
def alternating_sum(x):
return (sign(i) * sum(element) for (i, element) in enumerate(x))
def nums_divisible_by(xs):
return sum_multiplies_of(lcm(xs), start, stop - 1)
def inclusion_exclusion_principle(nums_divisible_by, divisors):
return alternating_sum(
map(nums_divisible_by, divisor_subsets_w_same_len)
for divisor_subsets_w_same_len in powerset_of(proper(divisors))
)
return inclusion_exclusion_principle(nums_divisible_by, divisors) Now the last code is far from unreasonable, but it feels wrong using def for simple one-liners. In addition the code length quickly grows if one wants to stay PEP 8 compliant. Should I switch over to using def statements and reserve lambdas for truly anonymous functions, or is it okay to throw in a few named lambdas to more clearly express the intent of the code? | You're sort of approaching it like a mathematician, where the purpose of writing the supporting functions is to "prove your work." Software isn't generally read that way. The goal is usually to choose good enough names that you don't have to read the helper functions. You likely know what a powerset or alternating sum is without reading the code. If you're writing a lot of code like this, those sorts of helper functions are even likely to end up grouped in a common module in a completely separate file. And yes, defining a named function feels a little verbose for a short function, but it's expected and reasonable for the language. You're not trying to minimize the overall code length. You're trying to minimize the length of code a future maintainer actually has to read. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431240",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/238738/"
]
} |
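The traceback and naming point quoted from PEP 8 in the question is easy to demonstrate, and it is the concrete cost that the answer's readability argument sits on top of.

import traceback

def double(x):
    return 2 * x

increment = lambda x: x + 1

print(double.__name__)      # 'double'   -> shows up in tracebacks, repr() and profiler output
print(increment.__name__)   # '<lambda>' -> every such function looks identical in a traceback

try:
    increment("oops")       # TypeError: can only concatenate str (not "int") to str
except TypeError:
    traceback.print_exc()   # the offending frame is labelled '<lambda>', not 'increment'

You can patch increment.__name__ up by hand, but at that point a def is both shorter and clearer.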
431,355 | I was looking at the C++ library <complex> , and noticed that functions such as std::conj and std::norm are free functions i.e. static functions not placed inside the std::complex class. Why is this the case? I would've thought that, from a C++ OOP design perspective, it would've made more sense to have e.g. complex<T> complex<T>::conj() and complex<T> complex<T>::norm() as methods so that I can call auto norm = z.norm() instead of auto norm = std::norm(z) . Am I missing something about how the standard library is designed which justifies why these functions are free? | Am I missing something about how the standard library is designed which justifies why these functions are free? The C++ standard library does not exclusively follow the OO design paradigm. Free functions, when combined with parameter overloading, play much nicer when you are writing templated code that should work with both class types and primitive types. For example, suppose I have a list of values (either complex, as std::complex<float> , or real, as float ) and I want to compare them on magnitude. Then I can write a comparison function like template <class T>
bool magnitude_less(const T& lhs, const T& rhs)
{
using std::abs;
return abs(lhs) < abs(rhs);
} Writing such a function would not be possible if abs for std::complex<T> had been a member function, as OO design principles would suggest. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431355",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/399798/"
]
} |
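The same idea shows up outside C++: Python's abs() is a free function that dispatches through __abs__, so one generic comparison works for float, complex and user-defined types alike. This is only an analogy to the answer's template example, not a translation of the C++ code.

from math import isclose

def magnitude_less(lhs, rhs):
    # abs() is a free function: it works for int, float, complex,
    # and any user-defined type that implements __abs__.
    return abs(lhs) < abs(rhs)

print(magnitude_less(3.0, -4.5))        # True  (3.0 < 4.5)
print(magnitude_less(3 + 4j, 1 + 1j))   # False (5.0 vs about 1.41)

class Meters:
    def __init__(self, value):
        self.value = value
    def __abs__(self):
        return abs(self.value)

print(magnitude_less(Meters(-2), Meters(10)))   # True
print(isclose(abs(1 + 1j), 2 ** 0.5))           # True: abs() of a complex is its magnitude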
431,427 | So we work in a structure with three week sprints, and we want to give testers time to test. Currently our process is, write functionality > write unit tests > code review > deploy to testing environment. What I thought of is that when we have functional code, why are we not deploying it to give testers more time to test things, of course still have a code review, but often writing unit tests can take another day or a few if it's something complex. So the idea being, write functionality > code review > deploy > write unit tests (and fix bugs) > code review > deploy to testing environment. The only thing here being that you've got two separate code reviews which takes more time from the code reviewer. Additionally, there is the risk a developer just doesn't write the unit tests afterwards. Perhaps the answer to this question is not take in work that has the potential to run over, or get better at estimating etc. but when this situation does occur, what's the problem with using this as a backstop? | I cannot imagine you really meant to shift any kind of unit testing to the point in time after the time of a deployment. I can imagine, however, the Sprint process you really had in mind looks more like this: Loop several times per day: write functionality maybe do some manual testing write and run unit tests (not necessarily in that order, especially not when doing TDD) Deploy intermediate release to the testers Loop again Write additional automated tests fix bugs reports from the testers, or from your own tests refactor Deploy release candidate to the testers That can be a reasonable approach (I left the code review phases out for simplicity, add them where you think they work best in your team). Maybe that is what you meant, not sure, the process description in your question is a little bit terse. Note also in step 3, when those tests take a day to write, they are most probably not unit tests. Passing intermediate releases to testers can be a good idea, to get earlier feedback, but you have to make sure you know what you are doing: For example, I would not pass anything completely untested to the testers. You burden the testers with issues you could have identified way more easily and way more quickly by yourself beforehand. Even worse, there is a certain risk the code might so broken in certain areas testers cannot even start with testing the functionality they want to see, or the bugs in the code will shadow other bugs. When you hand too many half-baked features over to the testers, there is a certain risk that the testers have to invest a lot of extra work into things which will never make it into the final product. (However, sometimes it can be the right thing to stop delivering a new feature just because the testers found too many issues during intermediate tests). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431427",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/319741/"
]
} |
431,499 | As I got to know there are 256 possible combinations to get for 1 byte. If I understand it correctly, it should mean that you can display any number out of numbers 0-255 and this very number would use only 1 byte of your storage on your computer. Displaying a number out of the numbers from 256 to 65535 would however cost you 2 bytes of your storage (if I understand it the right way). However after typing in the number 65535 into a text document and looking up its file size (the "information" about the text doc) I got to see that the used storage for this document was actually "5 bytes". Does anyone know what the reason for this could be? | TL;DR The key takeaway here is that there is a world of difference between the number 65535 and a piece of text which represents the digits '6', '5', '5', '3' and '5'. It may look the same to you when rendered on a computer screen, but to a computer internally, they are completely unrelated to one another. The rest of this answer is a lengthy elaboration on what that means and how it is used in everyday computing. Data has a type. For example, this answer is a text type, known to developers as a string . A number is a different type. There are several number types, based on what it is you're storing ( int for integers, uint for positive integers, long for bigger integers, float or double for decimal values, ...). Why do data types matter? Well, it's sort of similar to why we have file extensions. Based on the file extension, the file data gets read in a different way. You can easily test this by taking an .mp3 file, changing its extension to .txt, and opening it. You'll see that your OS opens your text editor, not your media player, because it bases its choice on the file extension of the file you're trying to open. In that text editor, you'll see characters. It's not readable, but as far as your text editor is concerned, this is all valid. You can remove some characters, add some more, and save the file. If you now change the extension back to .mp3 and play it, you'll notice that the file is damaged. Maybe it's totally broken, maybe it only has a small glitch in it; this depends on what you changed and how you changed it. Cool trick : Take a Microsoft Office file (.docx or .xlsx, not the older versions), change its file extension to .zip, and open it. Lo and behold, this is a perfectly working zip archive! Since the advent of the .docx (instead of .doc) and .xlsx (instead of .xls) filetypes, Office has really been storing all of its data using ZIP archiving methods. Any file is really just a long sequence of binary digits. And how we interpret those digits is at our discretion. ASCII text encoding is one of these interpretations. It uses a character library of 256 characters, which means that it can get away with using a number (0-255) to denote a single character, and this number handily fits into the space of a single byte. So when a text file is opened, the text editor takes the file data, takes the data one byte at a time, interprets that byte as a number (which inherently ranges from 0-255, and then shows the character for that number. As an aside, I could write my own file data interpretation. For example, I could use a file's data to generate an image which is 1px tall and is as many pixels wide as the file has bits, and generate a black pixel for every 1 and a red pixel for every 0 in the file data. It's a bit of an odd system, but perfectly viable if I choose to make an application that generates these kinds of images. 
Or, alternatively, I could store my students' pass/fail on an exam using a simple stream of bits where 1 is pass and 0 is fail, and I know the order of my students based on some other data I've stored somewhere else (not in this file). This is a highly efficient storage mechanism in terms of data size, but it is also very inflexible in terms of storing additional information (e.g. absentees). Back to ASCII. You can do this exercise by hand using an ASCII table to look up character values. If your file's first byte is 0100 0001 , which is 65 in decimal, the text editor will give you an A character. Following this, you can figure out what 5 letter word I wrote here in ASCII: 0100 0001 0101 0000 0101 0000 0100 1100 0100 0101 . If you take each byte, convert it to a number, and look up what character that is, you'll see that the 5 characters are APPLE . One very important thing to notice in that linked ASCII table is that it doesn't just contain letters (a-z and A-Z), it also contains all digits (0-9). You'll notice that this answer focuses on text, not numbers. There's a clue in your question: However after typing in the number 65535 into a text document and looking up its file size (the "information" about the text doc) I got to see that the used storage for this document was actually "5 bytes" The data you entered was parsed as text, not as a number. You did not store the numerical value of 65535 , you stored the individual characters (i.e. text) of 6 , 5 , 5 , 3 , and 5 . Using the ASCII table, this means that your file contains 0011 0110 0011 0101 0011 0101 0011 0011 0011 0101 , which is the same as the APPLE example, but this time using the characters 65535 . Here's an online converter so you can play around with this (you can press "Swap" to switch converting to or from binary). In programming terms, your variable type matters, because it indicates to the compiler how your data should be stored, and how it should be handled. int myNumber = 65535;
string myString = "65535"; In the first case, it will store your value numerically. 65535 (decimal) is 1111 1111 1111 1111 in binary. However, int is always 4 bytes, so it gets stored as 0000 0000 0000 0000 1111 1111 1111 1111 . Total size: 4 bytes. In the second case, it will store each individual text character, which we already know from before is 0011 0110 0011 0101 0011 0101 0011 0011 0011 0101 . Total size: 5 bytes. Now here's an interesting curve ball: int myNumberPlusOne = myNumber + 1;
string myStringPlusOne = myString + 1; In the first case, because you're dealing with a number, the compiler does numerical addition , and stores the sum ( 65535 + 1 = 65536 ) as a new number. myNumberPlusOne is therefore stored as 0000 0000 0000 0001 0000 0000 0000 0000 , which is a value one bigger than myNumber . Total size: still 4 bytes. In the second case, because you're dealing with text, the compiler appends the 1 to the text . Now, you're storing a 6-character string, 655351 , which in binary is 0011 0110 0011 0101 0011 0101 0011 0011 0011 0101 0011 0001 . Total size: 6 bytes. Here, you can see why data types matter. We store things differently, and we also treat that data differently. Why? Because it makes sense to us that + on numbers means numerical addition, whereas + on strings means string concatenation (= appending). To summarize, the key takeaway here is that there is a world of difference between the number 65535 and a piece of text which represents the digits '6', '5', '5', '3' and '5'. It may look the same to you when rendered on a computer screen, but to a computer internally, they are completely unrelated to one another. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431499",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/401171/"
]
} |
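The distinction the answer draws can be checked directly in Python; the exact byte counts below assume ASCII/UTF-8 text and a fixed-width 4-byte unsigned integer, mirroring the answer's example.

import struct

as_text = "65535"      # five characters
as_number = 65535      # one integer value

print(len(as_text.encode("ascii")))        # 5 bytes: one per character '6','5','5','3','5'
print(len(struct.pack("<I", as_number)))   # 4 bytes: a fixed-width 32-bit unsigned int
print(as_number.to_bytes(2, "big"))        # b'\xff\xff': the value itself fits in 2 bytes

# "+ 1" means different things for the two types:
print(as_number + 1)    # 65536 (numeric addition)
print(as_text + "1")    # '655351' (string concatenation, now 6 characters)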
431,598 | Today, I updated ZBateson\MailMimeParser the PHP e-mail parser library from 1.x to 2.x. Soon enough, my PHP error log started filling up with errors. Noting where it happened, I found out that it had to do with their ::parse(...) function: https://mail-mime-parser.org/upgrade-2.0 An additional parameter needs to be passed to Message::from() and MailMimeParser::parse() specifying whether the passed resource should be ‘attached’ and closed when the returned IMessage object is destroyed, or kept open and closed manually after the message is parsed and the returned IMessage destroyed. That is, instead of picking one of those new "modes" by default, the author(s) simply chose to break all existing code. Frankly, even after re-reading that page multiple times, I have no clue what the new parameter actually does. I have set it to true just to make the errors stop happening, but I'm worried that this is somehow not the right choice. My point, and question, is: Why do library developers knowingly break existing code like this? Why not at least have it default to either true or false , whichever is the most reasonable? Before you tell me that I should have read the upgrade instructions before updating, I sometimes do, but when your life consists of nothing but dealing with constant updates of all kinds of software, you eventually get numb to all the changes and stop spending the time and effort to do so. Is it really reasonable that updating a library (in particular) should break existing code? And this is not some sort of edge-case of the library, either. It's literally the #1 reason for it to exist in the first place, sure to be used by every single user: parsing an e-mail blob! | A major version upgrade literally means they intend to break things. You shouldn't upgrade to a new major version unless you're prepared to deal with it. Most build systems have a way to specify you're okay with automatic upgrades to minor versions, but not to major versions. APIs break for a number of reasons. In this case, I'm guessing it's because what they would want to set the default to would be surprising to some users, either because it's not a typical convention for the language, or because of history with this library. This way, instead of half the users suddenly getting a difficult to explain "file is closed" error whose reason is difficult to find in the release notes, everyone gets a "missing parameter" error that they can easily look up the purpose of. Remember, not everyone uses the library the same way as you. When you have a diverse user base, you have to make compromises in the API to accommodate everyone. A change that seems unnecessary to you might be just what another user has been waiting for. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431598",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/401378/"
]
} |
431,601 | I have various types of financial securities. Each one of these securities shares a common set of methods. For instance, they all pay some amount of cash interest between two dates. Each security has a different way to calculate that amount, however, that can require totally different inputs. Think of this as trying to calculate the area of different shapes, but more security types and hence ways to calculate cash_interest will be added over time. # fixed rate security
def cash_interest(self, start_date, end_date, fixed_rate) -> float
# floating rate security
def cash_interest(self, start_date, end_date, index_px_history, index_margin) -> float
# was it sunny outside security
def cash_interest(self, start_date, end_date, was_it_sunny_outside) -> float How should I organize these securities with similar outputs? I'm using python but am looking for language-agnostic answers as well. I'm drawn to an OOP approach of subclasses given the common outputs. I want to be able to iterate over all the different securities and call the function cash_interest but am confused how I would know when to provide which input if I go the overloading approach. If I go the approach of having one big cash_interest function with tons of optional parameters, I make it harder for other users to add their own custom securities since the main cash_interest function would need to be edited every time someone wanted to add a new security. | A major version upgrade literally means they intend to break things. You shouldn't upgrade to a new major version unless you're prepared to deal with it. Most build systems have a way to specify you're okay with automatic upgrades to minor versions, but not to major versions. APIs break for a number of reasons. In this case, I'm guessing it's because what they would want to set the default to would be surprising to some users, either because it's not a typical convention for the language, or because of history with this library. This way, instead of half the users suddenly getting a difficult to explain "file is closed" error whose reason is difficult to find in the release notes, everyone gets a "missing parameter" error that they can easily look up the purpose of. Remember, not everyone uses the library the same way as you. When you have a diverse user base, you have to make compromises in the API to accommodate everyone. A change that seems unnecessary to you might be just what another user has been waiting for. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431601",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/330464/"
]
} |
431,846 | In the announcement of Linux 5.15-rc1 Linus said : At only just over 10k non-merge commits, this is in fact the smallest rc1 we have had in the 5.x series. What is a non-merge commit? What's the opposite of a non-merge commit? | If you're working in a feature branch, that branch might have a significant lifespan. By that I mean that the originating branch ( master or develop usually) receives many commits while the feature branch is off on its own. This could be because the feature is unexpectedly delayed, was expected to take a long time, or if there's lot of concurrent development happening (e.g. large dev teams). For whatever reason, the situation I'm focusing on is one where a lot of things happen to master between the start and end of your specific feature branch. Eventually, the feature branch needs to merge back into master. However, if master has significantly changed since then, there may be conflicts because the feature was written against an outdated codebase which is no longer the same as today's master code. It is generally considered to be better to have multiple small issues instead of one big one at the end. To avoid having such a big break event, it is advised to regularly merge master back into your feature branch. This means that you introduce the new changes that have been made to master in your feature branch, so that you can quickly spot and fix any conflicts that may arise. Those merges from master into the feature branch, when they are not rebase actions, generate a commit message to the feature branch history. Something along the lines of: 2021-01-01 Started feature branch
2021-01-01 Added FOO
2021-01-02 Merged from master
2021-01-02 Refactored FOO
2021-01-02 Added BAR
2021-01-03 Merged from master
2021-01-04 Merged from master
2021-01-05 Added BAZ But really, those merge commits don't indicate how much work has happened on this branch. They only indicate how much changes have occurred on other branches during the lifespan of this branch. In reality, there have been 4 commits on this feature branch that are indicative of work that has happened on this feature. Therefore, if you're trying to express the amount of work that went into a specific branch, you should measure the commits except merge commits, since they are not a measure of development on the actual branch. Hence why Linus is expressing the amount of non-merge commits to indicate the relative size of rc1. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431846",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/380649/"
]
} |
431,967 | Considering this question and the most upvoted answer , and his specific example of public static final int THREE = 3; might it make sense to allow this sort of usage if we added units to the declaration? I mean like this: public static final int THREE_MINUTES = 3; or maybe this: public static final int THREE_GALLONS = 3; I'm thinking in terms of stuff I'd flag in a code review. I would definitely flag final int THREE = 3 but does it seem like a generally reasonable exception to allow numbers that add unit of measure? | The issue is not only with the lack of units, but the fact that it is not clear what three of those units represent. Do you only have three minutes to complete a task? Then the constant might be better named as MAXIMUM_TASK_DURATION . Is three gallons the capacity of some container? Then we could use the name CONTAINER_CAPACITY . Your original names only add precision to what the value is, but not how it is intended to be used, which is the crux of the issue. The lack of units in those suggested constants might also be an issue, albeit a separate one. One possibility would be indeed to add the units in the constant name. Another would be to avoid primitive obsession and use a more appropriate type, such as Duration (which is already provided by the JDK), or Volume (which could be a value object created specifically for the domain of your application). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/431967",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7912/"
]
} |
432,458 | I recently asked a question on CodeReview SE where using a binary search was recommended, and the author of the answer proved that it was faster than my implementation. During my research on the topic, I came across a table that shows the complexities of a binary search:
These are the complexities of a binary search −
Worst-case: O(log n)
Best-case: O(1)
Average: O(log n)
Worst-case space complexity: O(1)
I think I'm misunderstanding the information this table is meant to relay. I'm perceiving it as: The average performance of a binary search is the worst-case performance. However, if this were true, wouldn't a binary search be avoided in many cases? I'm new to this level of optimization so my impediment could just be some underlying prerequisite that I don't have knowledge of yet. Edit : For clarity in hindsight, the question I sought answers to is below this edit. The one above it is strictly a questioning of my reasoning during my research phase that I included to demonstrate my lack of understanding. My apologies for any confusion on the matter. What is meant by " the complexities of a binary search ", what are those complexities, and what is this table actually telling me relative to those complexities? | The full term for this is "Time Complexity" and you'll want to use that if you are searching. This is a core concept in computer science. I am not going to attempt to explain it in detail but here's a high-level explanation: The idea of time complexity is to understand how the performance of an algorithm relates to its input size(s). A time complexity of O(1) means 'constant time'. In other words, the performance of the algorithm doesn't change with the size of the input. I think in this case, the best case is when the item you are looking for happens to be the middle item (the first one examined). Best case performance tends to be the least interesting or useful of these measures in my experience - with the exception of when the best case is really bad. O(log n) means that the time it takes is related to a logarithm of the input. In this case the base of that is 2. So if the list has 16 elements, it will take on average about 4 checks to find an item. If the list has 30,000 elements, on average you should expect it to take something like 15 checks. The fact that the worst case is the same as the average means that you'll never need more than that many log₂(n) checks to find something. It could be better but it will never be worse. That's actually a good thing. Some algorithms have good average times but really bad worst case scenarios. That makes it hard to predict performance in practice. It's like driving on a freeway; it might be the fastest route most of the time but if there's a bad wreck, you could be stuck for hours. If you want to make sure you get somewhere on time, you might be better off with a slower (on average) route that is more predictable. Addendum: I failed to address the 'space complexity' entry in the table. This is a very similar concept to 'time complexity' (or more properly 'computational complexity') except that it refers to the amount of memory or storage that is required to execute the algorithm. In this case, it's constant which essentially means it doesn't take any more memory (or other storage) to search a list of a billion items than it does to search one of 16 items. The lists themselves obviously do, but the memory allocated to the execution of the search algorithm remains the same.
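To make both of those points concrete - the roughly log₂(n) checks and the constant extra memory - here is a small iterative sketch (written in Python purely as an illustration; the function and variable names are arbitrary). It counts how many elements it looks at, and the only extra storage it ever needs is a few index variables.
def binary_search(items, target):
    # items must already be sorted
    low, high = 0, len(items) - 1
    checks = 0
    while low <= high:
        middle = (low + high) // 2
        checks += 1
        if items[middle] == target:
            return middle, checks
        elif items[middle] < target:
            low = middle + 1
        else:
            high = middle - 1
    return None, checks

data = list(range(30000))
print(binary_search(data, 123))   # found in roughly 15 checks or fewer
print(binary_search(data, -1))    # a miss: still only about 15 checks before giving up
Whether the list has 16 items or a billion, the extra bookkeeping is the same handful of variables, which is all the O(1) space entry is claiming.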
In other algorithms, however, it's not uncommon to trade space for time e.g. an index on a database table requires extra storage but can greatly improve the performance of lookups. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/432458",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/319749/"
]
} |
432,838 | I am reading about the SOLID principles, but it seems like the Liskov-Substitution Principle primarily refers to programs that use inheritance. From my understanding people are shifting more towards composition over inheritance. If that is the case, does the "L" in SOLID still apply? If so what would be an example of its use if one almost never relies on inheritance? | It's not about inheritance, it's about substitutability of types . In languages that support duck typing (JavaScript, Python, compile-time polymorphism of C++ templates, etc...), or structural typing (TypeScript, Go, etc...), the two types don't have to form an inheritance relationship at all. E.g., this JavaScript code will work just fine, even if there's no inheritance in sight: var cat = {
getSpecies: () => 'Cat',
vocalize: () => 'Meow!'
}
var dog = {
getSpecies: () => 'Dog',
vocalize: () => 'Woof!'
}
var growls = ['Growl!', 'Grrr!', 'Rumble-rumble...', '(Blank stare)'];
var growlIndex = 0;
var growler = {
getSpecies: () => 'Growler',
vocalize: () => {
var index = growlIndex;
growlIndex++; if (growlIndex === growls.length) growlIndex = 0;
return growls[index];
}
}
// There's, effectively, an implicit abstract type Animal
function GreetAnimal(animal) {
console.log('Human: Hi, there!');
console.log(`${animal.getSpecies()}: ${animal.vocalize()}`);
}
GreetAnimal(cat);
GreetAnimal(dog);
GreetAnimal(growler); Often, when using composition, you'll allow for the ability to plug in different implementations of something into the composite; in class-based statically typed languages, the composite would have a reference to a subobject of an abstract type/interface, so you'll have inheritance in there. And in duck-typed languages, you have an implicit type, even if there's no inheritance. Also, it doesn't even have to be about objects; it can apply to functions too. For example, suppose you have a tree structure with a method that visits every node in the tree, and allows you to pass in a function (or a lambda) of the form void MyFunc(Node n) (or (node) => { ... } ) that allows you to access each node; the documentation says that this function must not modify the tree structure (but may modify the contents of the node itself), as the code in the Tree class relies on the tree itself not being modified. The signature of the function is a kind of an abstract type, and the requirement in the documentation is a specification of the abstract behavior required of that type, and all its implementers. A concrete function that you pass in is a concrete implementation of this type. If you pass in a function that modifies the tree structure, you've just violated LSP. Now, in this case, it would have been better if the design was such that you cannot easily break LSP in this way - e.g. instead of passing the node itself (thus allowing the caller to modify the child pointers), only pass the contents. But this is not always possible, and sometimes, the requirements on the behavior of the type are not easily designed away. Suppose you need to write some kind of algorithm that processes a bunch of objects, and that requires the user to provide a way to decide the ordering of these objects. You can use the standard int compare(a, b) approach, where the negative value indicates that a comes before b , a zero indicates they are the same in terms of ordering, and a positive value indicates that b should come before a . You also require that the ordering functions makes sense as an ordering function: if a < b and b < c , then the function should also say that a < c (the ordering "transfers" as you'd expect; it doesn't suddenly say that a == c , there's no rock-paper-scissors kind of thing where c < a , etc.). So if you supply a comparison function that, say, behaves like this: compare(a, b); // returns -1 (a is before b)
compare(b, a); // returns -1 (b is before a) you break Liskov with respect to your algorithm. You can't easily design that away - it's the responsibility of the users of your library to provide a sensible implementation of the int compare(a, b) type, or take on the risk of not doing so (the risk being, your function could crash, produce nonsensical result, or it might work, but then when you publish a new version where you change the internals, their code will break even though it's a non-breaking change, and it's on them, because they didn't adhere to the contract). In some other context (e.g. when implementing rock-paper-scissors), the behavior expected from int compare(a, b) might be specified differently (the type, in the LSP sense, is not entirely defined by just the signature (or by an interface)). So, the same implementation may break LSP in one context, and be valid in another. I guess another way to look at it is that what's a compiler considers to be a type, is not quite the same as what you, the developer (either as an author or as a user of some piece of code) consider to be a type; typically, types, in the sense relevant to us developers, cannot be entirely expressed in the language itself - and we know this intuitively; everyone knows that if a code compiles, it doesn't necessarily mean that it works as intended. In a sense, LSP (that is, Liskov & Wing 1994 paper) captures that in a more precise way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/432838",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/305151/"
]
} |
433,042 | Sometimes in apps I look into the Resources and find files for, for example, a 256x256 version, a 128x128 version, a 64x64 version, AND a 32x32 version, of the same icon. When I see simple geometric icons like circles I already wonder why they do not just use SVG, but on top of that, what is the purpose of storing an original icon in addition to progressively lower quality versions of the same icon? Storing the smaller icons does not even save space because the full size icons are being stored anyway. For example in Roblox Studio the @2x and @3x images are just higher resolution versions of basically the same icon: Why do that? | Image scaling isn't necessarily free Modern processors are fast, but they're not magic, and everything that needs calculating on the fly uses up resources that could be better spent elsewhere. Similarly, if you load a large image into RAM (not to be confused with storage ), or transmit it over a network, and then rescale it to one which only needs 1MB, you've wasted that 4MB of memory or bandwidth. In the case of icons, this is generally negligible, but it could add up for a large number of images. More importantly, image scaling is not always easy Scaling a bitmap down means throwing away part of the image data, but trying to keep the "look" of the image. For some images, that's trivial - throw away every other pixel in a large block of colour, and you have an identical block of colour, but smaller. For some, it requires a smarter algorithm - a curved line will look jagged if you just drop pixels, but blurred if you over-use anti-aliasing . For some images, it is better to simply redraw from scratch. To produce an icon at a very low resolution, an artist might take the original design and drop irrelevant details, straighten up curves and slanted lines, and so on. Notably, even storing a vector image such as an SVG would not give as good a result in these cases. For a nice example of this, here are the 8 sizes of icons embedded in the LibreOffice Writer executable on Windows, which have clearly been hand-drawn at each size (form left to right, 16x16, 22x22, 24x24, 32x32, 48x48, 64x64, 128x128, and 256x256): Here is what each would look like if simply scaled down to the smallest size (16x16): And here they are all scaled to 64x64 (the actual size of the sixth one along): Special cases require effort As you pointed out, some applications have very simple icons which could be made to look fine with automatic scaling. However, if some icons are stored at multiple resolutions, it's much easier to code the application or framework assuming that all icons will be stored at those resolutions. It's also easier to tell designers to always give you a series of icon files at the appropriate sizes, and name them in a standard way, leaving it up to them how much time to spend optimising each one. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433042",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/404182/"
]
} |
433,069 | In python I often see functions with a lot of arguments. For example: def translate(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p):
# some code
return(x, y, z) I like this pattern in some cases. I think it makes a ton of sense in library type situations where the variables are optional keyword arguments (e.g. pd.DataFrame declarations). However, I also see instances (locally developed custom functions) where where all of the inputs are essentially mandatory. In these cases the function is typically called elsewhere in the program and will be formatted something along the lines of: x, y, z = translate(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p) I dislike several things about this: Readability - the function call ends up being very long, which makes it hard to read/digest and can sometimes obstruct the readability of the script it is sitting in Re-use of variable names - the local variable a in translate() is not the same entity as the variable a in the script Jumbled variables - it is very easy to accidentally write translate(b, a, c, d, e, f, g, h, i, j, k, l, m, n, o, p) because most variable
names don't have an inherent/obvious order. This can be avoided by
specifying the keywords but this makes the function call even longer.
Imagine translate(a=a, b=b, c=c, ...) with real variable names. To resolve/avoid the above problem I started to use dictionaries to pass large numbers of variables between functions. Then I noticed that I could also use the dictionaries to return variables.... Using the above example: def translate(dict_of_values):
# some code
dict_of_values['x'] = something
dict_of_values['y'] = something_else
dict_of_values['z'] = something_other
# and if I want to call the function I state:
some_dict['a'] = 1  # also populate values for b, c, d, ..., p
translate(some_dict) My question is as follows: Does this coding pattern have a name? Will other programmers easily understand the format? What problems am I introducing that will bite me in the future? Is there a better alternative, assuming that I can't avoid functions that have a large number of mandatory variables? I understand that I could be using **kwargs by writing defining the function as translate(**dict_of_values): and then calling translate(**some_dict) but I can't see any particular advantage to doing so. If anything, it would make the code slightly more verbose as I'd have to add return and assignment statements to achieve the same end point. | Does this coding pattern have a name? This is a refactoring called "Introduce Parameter Object" . A dictionary is used here as a "poor man's DTO ". Note there are other, less error prone means to introduce DTOs in Python, like dataclasses, named tuples or typed dicts Will other programmers easily understand the format? Surely not if you call those DTO just dict_of_values or the keys just x,y, and z. But same holds in your orginal function's signature when the parameters are just called a,b,c,d. My point is, not the fact of using a dictionary as a DTO makes the differences between "easy" or "hard to understand", but the naming, commenting and the separation into easy-to-grasp units. What problems am I introducing that will bite me in the future? When introducing DTOs, make sure their names give a clear, readable indication of what those DTOs represent. Otherwise your growing code will end up in an unreadable mess. Using an untyped dictionary as a DTO has the problem that you won't get immediately an error if you have a typo, bugs will manifest themselves only later during runtime. Is there a better alternative, assuming that I can't avoid functions that have a large number of mandatory variables? See #1. If those alternatives are really "better" may depend on the specific context, there are always some trade-offs involved. Some of those alternatives require more code, some of them don't work with older Python versions, some of them require additional dependencies. You have to decide by yourself which variant gives you the best cost/effort relationship. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433069",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/404233/"
]
} |
433,115 | When coding resharper recommends that if you're to discard or ignore the return of a method, that you use this syntax: _ = TheMethodICouldCareLessAboutTheReturnValue(); I know you could just call it without assignment to _ just the same, so why does the _ as an assignment matter? | It matters for two reasons. One is conventional, the other technical. The conventional reason is that _ conveys active disinterest in the returned value. Sure, you could write var dontcare instead, but that's just a different arbitrary value. But as you pointed out, you could also omit the assignment, so it's not just about choosing the shortest name possible. This brings us to the technical reason. There are cases where you have to declare a parameter and you cannot simply omit it. This applies to out parameters in method calls, and named tuples when you don't care about all of the tuple's members. // Out params
if (DateTime.TryParse(dateString, out _))
Console.WriteLine("dateString can be parsed as a DateTime");
// Tuples
var (minimum, _) = FindMinMax(myData);
Console.WriteLine($"The minimum value is {minimum}"); There may be other use cases, these are the two I can think of because I encountered them before. Is this necessary ? Well, it's an easy way to suppress warnings about unused variables. Not all developers care about warnings, but those who do would be pestered by these useless warnings for cases where they are knowingly not using a variable that the compiler forced them to declare anyway. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433115",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/404306/"
]
} |
433,259 | From what books I read on linux system programming, it seems like signals were the primary way to communicate events between processes. They were the gateway into many interesting functionalities, like timers, interrupting sleeping threads, IO events and so forth. When reading books on multithreading and latency control, I do not remember seeing signals. I believe signals have higher privileges due to being able to interrupt sleeping thread, which I believe is a good thing when it sleeps for too long (I know there are also semaphores and condition variables, but signals seem to be the most universal way to do that) aside from other functionality provided by the kernel. So my question is: why did usage of signals disappear? Is it because higher level, inside-VM languages took over? Or were there any innovations that made them obsolete? I've never seen stuff like system timers in C++ libraries before, so I'm doubtful that anything better was invented. | it seems like signals were the primary way to communicate between processes I'd disagree with this. Signals are/were the primary way for a "supervisor" process to control a "supervised" project - e.g. init wanting to stop a process at system shutdown, a shell wanting to notify a subprocess of something. They were never really the primary way for cooperating processes to communicate - a shell pipeline communicates via pipes, not via signals. In particular here, signals convey essentially no information beyond their type - if you actually wanted to communicate a non-trivial amount of information, you needed a separate method to actually pass that information, whether that be a pipe, shared memory, a temporary file or whatever else. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433259",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/251289/"
]
} |
433,387 | I've recently been greeted by CS8603 - Possible null reference return, which indicates that my code could possibly return null. It's a simple function that looks up an entity in a database by id - if it exists, it returns the entity. If not, it returns null. public TEntity Get(Guid id)
{
// Returns a TEntity on find, null on a miss
return _entities.Find(id);
} My gut feeling says that this makes sense. If the client says "Give me the user with ID 82739879", then giving them "nothing" makes intuitive sense, if no user with said ID exists". However, the compiler warning caused me to re-think this approach. Should I do something else, other than returning null, if the user ID could not be found? Returning an exception is a possibility, but I don't consider a user not existing to be an "exceptional" state. Is it "wrong" to return null in this case? Am I overthinking this? | Why are you getting the warning? You have enabled the nullable reference types (NRT) feature of C#. This requires you to explicitly specify when a null may be returned. So change the signature to: public TEntity? Get(Guid id)
{
// Returns a TEntity on find, null on a miss
return _entities.Find(id);
} And the warning will go away. What is the use of NRTs? Other recent changes - specifically around pattern matching - then tie in really nicely with NRT's. In the past, the way to implement the "try get pattern" in C# was to use: public bool TryGet(Guid id, out TEntity entity) Functional languages offer a better approach to this: the maybe (or option) type, which is a discriminated union (DU) of some value and none . Whilst C# doesn't yet support DU's, NRT's effectively provide that maybe type (or a poor man's equivalent) as TEntity? is functionally equivalent to Maybe<TEntity> : if (Get(someId) is TEntity entity)
{
// do something with entity as it's guaranteed not null here
}
else
{
// handle the fact that no value was returned
} Whilst you can use this type of pattern matching without using NRTs, the latter assists other developers as it makes clear that the method will return null to indicate no value. Change the name to TryGet and C# now provides that functional style try get pattern: public TEntity? TryGet(Guid id) => _entities.Find(id); And with the new match expression, we can avoid out parameters, mutating values etc and have a truly functional way of trying to get an entity and creating one if it doesn't exist: var entity = TryGet(someId) switch {
TEntity e => e,
_ => Create(someId)
}; But is it wrong to return null? There has been vast amounts written on why null was the billion dollar mistake . As a very crude rule of thumb, the existence of null likely indicates a bug. But it's only a crude rule of thumb as there are legitimate use-cases for null in the absence of Maybe<T> . NRT's bridge that gap: they provide a relatively safe way of using null to indicate no value. So I'd suggest - for those using newer versions of C# - there is nothing wrong with returning null as long as you enable the NRT feature and you stay on top of those CS8603 warnings. Enable "treat warnings as errors" and you definitely will stay on top of them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433387",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/227874/"
]
} |
433,503 | I have always wondered whether public , protected , and private have security implications post compilation. Hypothetically: class Foo
{
public:
int m_Foo; // Completely vulnerable and dangerous
protected:
int m_Bar; // Possible attack vector from subclasses
private:
int m_FooBar; // Totally secure
}; public members [by terminology alone] suggest that they are more vulnerable than private members, but I cannot imagine how this could be taken advantage of post compilation in something like a proprietary program. Questions: Pre-Compilation, are members unnecessarily left in public, a security concern? Why or why not? Post-Compilation, are members unnecessarily left in public, a security concern? Why or why not? Are there any historical or hypothetical examples of attacks which used public designated members as an attack vector? | Access modifiers like public/private/protected are not intended as a security boundary. And since C++ is not a memory-safe language, this cannot be a security boundary. The laziest “attack” to access private members would be to reinterpret-cast the value to a struct with equivalent layout: struct PublicFoo {
int m_Foo;
int m_Bar;
int m_FooBar;
};
PublicFoo* attack(Foo* supposedly_secure) {
return reinterpret_cast<PublicFoo*>(supposedly_secure);
} In some cases, I have manually calculated offsets in order to access fields for objects that were created by a different library, e.g. to pick out the m_Bar field: int attack_bar(Foo const* supposedly_secure) {
const auto start_of_the_object = reinterpret_cast<char*>(supposedly_secure);
const auto offset = sizeof(int); // skip over m_Foo field
const auto location = start_of_the_object + offset;
return *reinterpret_cast<int*>(location);
} So what are access modifiers for? They just help you to manage the data flows in your code. While you can circumvent access modifiers, you typically don't try to sabotage yourself. So if the field m_Foo must guarantee certain invariants, you want all modifications to that value to go through a method of your class. If you declare it private, then attempts to directly access this field will generate a helpful compiler error. This encapsulation helps you build more robust systems, which is especially helpful for larger projects or libraries. In other languages like Java, access modifiers can sometimes serve as a security boundary. But that only works because Java is a memory-safe language with an explicit security model, so tricks like reinterpret-casting do not work (and reflection can be prevented). But in general, you should not trust language constructs to guarantee security. You would want real sandboxing technology for security boundaries, e.g. Linux Containers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433503",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/136084/"
]
} |
433,640 | I'm having a hard time wrapping my head around the use of async/await and regular sync function calls in JavaScript. Let's say I have two functions: Function 1 : async function doSomething() {
const result = await doExpensiveOperation()
const result2 = await doAnotherExpensiveOperation()
return { result, result2 }
} Function 2 : function doSomething() {
const result = doExpensiveOperation()
const result2 = doAnotherExpensiveOperation()
return { result, result2 }
} Based on my understanding, these two functions seem equivalent. In Function 1, the first operation is executed and the program needs to WAIT for the results of that operation before executing the next line and then needs to WAIT for the results of that before executing the return statement. How is that any different than Function 2 which executes its statements synchronously? I think the intention is that Function 1 supposedly unblocks JavaScript's thread of execution and allows it to execute statements past the await but that doesn't seem to fulfill the definition of "await" which sounds to me like it needs to "wait" for the results of something. Please help get me unstuck on this basic concept. | You are correct that await “blocks” the current task. But there might be more than one task awaiting execution at the same time. While one task is awaiting some result, another task can run. For example, let's assume I need to doSomething(1) and doSomething(2) . Without async/await they'd execute one after another, leading to long latency.
But with async, or equivalently with callbacks, the execution can be interleaved. Here's an illustration with ASCII-art: Sequential execution – time spent waiting is wasted. ┌───┬──────┬───┐
doSomething(1) │ │ WAIT │ │
└───┴──────┴───┘
┌───┬──────┬───┐
doSomething(2) │ │ WAIT │ │
└───┴──────┴───┘
├───────────────────────────────▶ time Concurrent execution – time spent waiting can be used to execute another task. ┌───┬──────┬───┐
doSomething(1) │ │ WAIT │ │
└───┴──────┴───┘
┌───┬──────┬───┐
doSomething(2) │ │ WAIT │ │
└───┴──────┴───┘
├───────────────────────────────▶ time A task might be waiting for a variety of reasons, for example: waiting for user interaction, e.g. that the user clicks a button or grants a permission waiting for a HTTP response or other network interaction waiting for a timer to run out With Promises, every async/await code can be equivalently written with callbacks. It's just that callbacks are typically less convenient. Here's your original code: async function doSomething() {
const result = await doExpensiveOperation()
const result2 = await doAnotherExpensiveOperation()
return { result, result2 }
} And here's the code translated to use promise callbacks: function doSomething() {
return doExpensiveOperation().then(result => {
return doAnotherExpensiveOperation().then(result2 => {
return { result, result2 }
})
})
} Note that each await introduces a clear ordering. Stuff before the await must complete before the await continues. If you'd want the two expensive tasks to have the chance to execute concurrently, it would rather be written like this: async function doSomething() {
// spawn the tasks
const task1 = doExpensiveOperation()
const task2 = doAnotherExpensiveOperation()
// wait for both tasks to complete
const [result, result2] = await Promise.all([task1, task2])
return { result, result2 }
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433640",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/98425/"
]
} |
433,819 | I'm a recovering perfectionist. According to my colleagues, I am also a good software engineer, but one of the feedback I have often received is that I tend to dive too deep too soon. Suppose I start working on a new feature that requires going into the code base of another team - I often end up trying to understand how everything works together in great detail down to the flow of data and architecture of the system, sometimes even doing a tutorial on the language their repository is written in. Perhaps a driver of this behaviour is a fear of coming across as "stupid/unprepared" (perfectionism). Perhaps some of those times I would have benefited more if I had just reached out to an expert and received a high-level summary and waited to go into details at a later point during the implementation. But at other times I noticed that by going deep, I uncovered risks that we had not considered before. How do you decide how much depth of knowledge is enough? | There's no one-size-fits-all answer to this. It's highly context sensitive. One of the biggest factors is risk. You want to do just enough up-front design and planning to bring the risk to a tolerable level. However, what amount of risk is tolerable depends a lot on the stakeholders - the customers, the end users, and the development organization. The amount of acceptable risk for an internal R&D effort is different than the amount of acceptable risk before you announce new functionality as under development to the world. Acceptable risk for a supporting tool is different than acceptable risk for a device where failure can lead to injury or death. Consider risk to the business, to customers, and to users. When it comes to figuring out risk, though, the unknown unknowns are the trickiest to deal with. You probably know what you know, and you probably have a list of things that you think you need to know but don't currently know - so the known knowns and the known unknowns. However, there's also a class of things that you don't know that you need to know - the unknown unknowns. Since you don't know them, you don't know how big this space is until you start doing the work. The way I approach this is that there is always going to be some risk. Reduce the known unknowns to a point where the stakeholders are comfortable. From there, breaking the work down into small pieces, making the progress highly visible, and getting feedback as frequently as possible can turn the unknown unknowns into known unknowns and then you can plan the steps you need to take to turn them into knowns. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/134949/"
]
} |
433,919 | I am working with a coworker on a project that uses Inductive Automation software. If you don't know what it is, all you need to know is it provides a drag-and-drop GUI designer (based in java swing) and lets you write jython 2.5 or jython 2.7 (depending on the version) at different extension points of components. Press a submit button, run this jython script, like that. It's great for quickly getting something up and running (and for it's main purpose of interfacing wiht PLC's but that's not relevant here). But as a result, it allows you to shoot yourself in the foot if you aren't paying attention. As a side result to this often leads to very procedural code, no OOP almost ever. I only bring that up in the case that OOP might be an answer to the following issue I am facing. We recently had a problem of duplicate records in a database. My coworker said this was caused when people would double or triple click the button, running the jython script multiple times. My suggestion was to make a UNIQUE index on whatever it is that defines the uniqueness of the table, so that if someone presses the button 3 times, we get that first record, but the next two are discarded as they violate the constraint. This would also allow us to do a try/except , catch the error thrown back by the violated constraint and do something with that information if we wanted, like tell the user to slow down. My coworker said my solution was just masking the problem, that we should fix the script so it doesn't do the duplicate inserts. This would require making it so the button could only be pushed once and then is disabled from future button presses until the script completes, or sometimes having a statement that checks the database for existence of the record first before inserting. I've explained the issue with the second way, that if someone double pressed the button super quickly, you could have two scripts running at the same time, checking the database table, seeing no duplicate record, and then running two inserts. But he insists then that we should script out that error. I'm still relatively new to the software field, just getting into my third year while my coworker is the most senior person at the company, so we are going to be doing things his way. However, I can't shake the feeling that we are going about this wrong. Whenever I make some personal application I always use a UNIQUE constraint when appropriate to avoid duplicates, but now I am wondering if that is a mistake. Can someone more experienced share their view? Is there a right way or are there good use cases for both ways? Edit: Wow this blew up. So the main issue was that the coding/scripting part had a lot of race conditions that would have taken a while to refactor with a deadline pressing, and the issue where a person could double or triple click a button before the window was changed. It's supposed to be someone clicks the button, some logic is run and the window changes. But while the logic is running the button is still clickable, can be pressed a few times, hence duplicates. To eliminate it at a scripting level would require eliminating race conditions and how all our submit buttons work/windows are opened and putting some logic into background threads etc - which SHOULD be done no doubt, but also I feel like we SHOULD have a UNIQUE INDEX as well to prevent these double-clicks/race conditions from creating unintended duplicates. A lot of good info here. I appreciate all the input. 
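For concreteness, the constraint-plus-try/except idea I am describing would look roughly like this (plain sqlite3 with made-up table and column names, purely for illustration - our real tables and driver are different):
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the real database
conn.execute("CREATE TABLE readings (tag TEXT, ts TEXT, value REAL, UNIQUE (tag, ts))")

def insert_reading(tag, ts, value):
    try:
        with conn:   # commits on success, rolls back on error
            conn.execute("INSERT INTO readings (tag, ts, value) VALUES (?, ?, ?)", (tag, ts, value))
        return True
    except sqlite3.IntegrityError:
        return False   # duplicate button press: the constraint rejected the second insert

print(insert_reading("pump1", "2022-01-01 12:00", 4.2))   # True, first press
print(insert_reading("pump1", "2022-01-01 12:00", 4.2))   # False, double-click caught by the constraint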
| My suggestion was to make a UNIQUE index on whatever it is that defines the uniqueness of the table You really ought to have one already. Uniqueness of rows is a pretty fundamental property of a table. My coworker said my solution was just masking the problem, that we should fix the script so it doesn't do the duplicate inserts. You're both right. Changing the code [in one place] will help. A constraint in the database will help more and it will protect you against any other places in the code that do the same thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433919",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/391420/"
]
} |
433,977 | Assuming I have the following struct (just an example) struct string{
int len;
char*str;
} And I have the function int init_str(struct string*s, int len); which will perform s->len=len;
s->str=malloc(len*sizeof(char)) And then I have the function int do_something(struct string*s, ...) which will assume s->str is a validly malloc()'ed pointer Should I assume that s->str is a malloc()'ed pointer set by init_str()? | If you are at a security-boundary, you must assume malicious use. Check everything. If you are at a module-boundary, you might anticipate programmer error. Try to catch what you can without compromising your performance and space-use goals, allowing a higher toll if compiled for debugging. If you are writing module internals, too much checking will just get in the way. On the other hand, too little will make finding and diagnosing bugs unnecessarily hard. Above all else, remember your goals, both functional and non-functional, and exercise due diligence and common sense, as uncommon as it may be. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/433977",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/405986/"
]
} |
434,124 | My Software Engineering teacher just said: "Avoid using continue and break , always make it work without using these". Is there a problem with these instructions? I would say he didn't say something so coherent, but maybe I don't know something about it? | Some teachers oversimplify on this topic (especially when they only teach, but don't do daily real-world programming any more). Of course, I don't know if that applies to your teachers, but I would not listen to the advice against continue and break too literally. continue and break can make loops more readable or less readable, depending on how they are used. The real problem are loops with too large inner bodies and many conditions for stopping them or executing only parts. Having multiple continue and break in such a loop is only a symptom for this "disease", but working around those keywords just formally isn't the cure. If running into such a situation, one could try to avoid the mentioned keywords by using boolean flags and complex if/else blocks instead, but that will not make the code simpler. Quite the opposite - often it will become even uglier. So what is the cure? Refactor inner parts of large loops into smaller functions. These functions might return some status information which can be used to control the outer calling loop, and it can be perfectly fine to use break or continue controlled by the returned status. If the functions are still complex, decompose them to smaller functions themselves. Avoid processing too many things in one loop. Instead, organize your code to process sets of data, which can lead to a sequence of two or three simpler loops instead of one complex one. So there might be one loop which produces some intermediate array of data, then a second one which takes the output of the first loop, iterates over this data and hands the result set to a third. Often, each of these loops can be put into a function on their own, returning its result set, which is passed as a parameter into the next function. You will be astonished how much much simpler your code will get when you just apply these two guidelines rigorously. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/434124",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/406256/"
]
} |
434,294 | I'm currently reading "Code Complete" by Steve McConnell. In section 13.3 "Global data" in last paragraph "Don’t pretend you’re not using global data by putting all your data into a monster object and passing it everywhere", he said: Putting everything into one huge
object might satisfy the letter of the law by avoiding global
variables, but it’s pure overhead, producing none of the benefits of
true encapsulation. If you use global data, do it openly. Don’t try to
disguise it with obese objects. Is that point applicable to tools like Redux in front-end? If it is overhead, should we then just use a globalThis or Window object to store global variables (or state) openly? If we already have everything we need to do stuff right (openly), why did such tools as Redux, MobX, etc. appear? | The problem with global variables is that they are difficult to reason about. Where is that global variable being modified? How would you even know? When you pass around a context object (i.e. a bundle of state), you need some mechanism to control access. Simply passing around a god object without having intelligent access capabilities is not particularly useful. That's what McConnell means when he refers to "monster" objects. Think of Redux as "smart global variables." In a way, this is not a new idea; databases are essentially a persistent store for giant global variables, except that you get all sorts of new capabilities: ACID transactions, indexing, querying and so forth. The Redux web site describes some of the capabilities that this approach enables: Centralizing your application's state and logic enables powerful capabilities like undo/redo and state persistence. It serves as a centralized store for state that needs to be used across your entire application, with rules ensuring that the state can only be updated in a predictable fashion. The patterns and tools provided by Redux make it easier to understand when, where, why, and how the state in your application is being updated, and how your application logic will behave when those changes occur. Vue has a similar mechanism. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/434294",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/405202/"
]
} |
434,318 | I would like to display something on my desktop by tapping a button on my mobile app. For example, there is a "show cat" button on my mobile app. When I tap that button, a new window should be opened and display a cat picture on my desktop. The scenario is a bit similar to Zoom. The desktop application is idle (from the user's perspective) most of the time. When someone calls me, the Zoom application suddenly displays a UI and notifies me. To solve such a problem, I think the desktop application needs to be notified that some events have occurred like a user tapping the button on the mobile app or someone calling me. One approach I can think of is that making the desktop application listen to a port like a web server. When the user taps the button on the mobile app, it can send an HTTP request to the desktop application. I want to know if there are other approaches.
Is my approach sounds normal enough?
Is there a standard approach to such a problem? I understand that "notifying the desktop application" may not be specific enough.
Perhaps I may need to take different approaches for each OS such as Windows, macOS, or Linux.
I need to support multiple operating systems, so some unified approach is preferable but not mandatory. | Be aware that to be able to send information, the sender needs to know the target.
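To make the options below a bit more concrete, here is a minimal long-polling client sketch in Python (the endpoint, field names and handler are invented for illustration; your desktop app would do the same thing from Node.js):
import json
import socket
import urllib.error
import urllib.request

POLL_URL = "https://example.invalid/events?since="   # hypothetical endpoint

def handle(event):
    print("got event:", event)        # the real app would open its window here

def long_poll(last_event_id=0):
    # The desktop client opens the request; the server holds it open until
    # something happens (or the timeout expires), then the client reconnects.
    while True:
        try:
            with urllib.request.urlopen(POLL_URL + str(last_event_id), timeout=60) as response:
                event = json.load(response)
                last_event_id = event["id"]
                handle(event)
        except (socket.timeout, urllib.error.URLError):
            continue                  # nothing arrived in time; ask again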
Normally the client knows the server, but the server has no clue about the clients until they try to connect. Therefore you first have to establish a connection to the server; THEN the server can send data to the client. In general, there are a lot of ways to do that: (1) The client polls on a regular basis: "Is there anything new?", and a few seconds later, "Is there anything new?" again. (2) The client makes a "long polling" request (a request with an extremely high timeout). The server does not answer it immediately but "stores" the request and answers it as soon as something new happens. When the client's timeout kicks in, it just sends the long-polling request again and the server switches to the new one. (3) The client sends its address to the server, and then the server connects to it. This rarely works, because the client quite often only knows its local address (on the local network), not its internet address. For polling, you can use any bidirectional connection to the server (such as TCP, or WebSockets, which are based on TCP); the server can then push data over this connection. In your case, the app and the desktop client will very likely get dynamic IP addresses and may even be behind routers, which disguises their addresses even more. Therefore you will very likely need a third, central instance (a server) to which both can connect and which will then arrange the communication between the mobile app and the desktop app. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/434318",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/384487/"
]
} |
436,215 | We have a lot of business logic in Excel files and we would like them to integrate in a web application (a Node.js web application). We don't want to rebuild the logic in a programming language. Instead, we would like to insert data into the Excel files we have, and read the calculation results from the same Excel file back. Note: "We do not use Excel as data source. We want use Excel for the actual calculations." What would be an appropriate setup for that? Is this possible? (Brainstorming ideas: virtual machine with Windows and Office installed, OneDrive , SharePoint , etc.) | Not the answer you were hoping for While this may be possible somehow, it is likely a dead-end solution. You should seriously reconsider the decision to not want to rebuild the logic in a language that is better suited for server operation. Running Excel as a backend processor would create a number of difficulties: You need to design some way of running multiple instances of Excel without interference between them, which means that you would need to copy the spreadsheet file for each instance and use that instance for only one session. A related problem is to tear down the Excel process once the related session isn't active anymore, which isn't easy to detect. You create a dependency on a runtime backend that is able to run Excel in the way your application expects. Since Excel is intended as an interactive desktop application, your use case probably isn't covered in Microsoft's future plans, and it is possible that with a newer Excel version you will be forced to either rebuild the integration, or keep your old version that does not get security updates anymore. Speaking of security, you're probably (not) aware of the security issues of using an application that isn't meant to be accessed by internet users. Web applications using SQL database backends have been riddled with SQL injection vulnerabilities, and unless the interface between your web server and the Excel-based calculation backend is either really restricted or very well-designed to be secure, you might be in for some unpleasant surprises. If you do a serious cost/risk analysis, your Excel based solution idea will probably come out way behind a rewrite (which isn't easy or cheap, but given well formulated requirements, can be done using a straightforward and reliable software development process). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436215",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/281623/"
]
} |
436,257 | I have a function with a sensitive operation: function doSomeThingSensitive() {
customers = getAllCustomers()
for each customer in customers
if customer is from Europe
Give them a 1000 euro giveaway coupon
} I want to dry run this function in order to calculate how many sensitive operations (aka the 1000-euro giveaways) will be performed. The dry run function will look something like: function doSomeThingSensitiveDryRun() {
counter = 0
customers = getAllCustomers()
for each customer in customers
if customer is from Europe
counter++
print 'we will pay ' + counter * 1000
} The problem with this approach is that the dry run function is not maintainable at all. Any slight change in the initial function will break the logic on the dry run function, which makes its maintenance very difficult. Is there a different way I can approach this? An idea is to run the initial function as a unit test and stub the sensitive operations. A different idea can be to run the initial function with a Boolean parameter, dryRun , and if it is set to true, the sensitive operation will not happen. Is there a better and maybe simpler way I can tackle this? | You're doing multiple different things in that function. Start off by splitting it into separate functions. Here's an example using a closure for the "dry run" function. function doWithCustomers(action)
customers = getAllCustomers()
for each customer in customers
if customer is from Europe
action(customer)
function sendCoupon(customer)
give coupon to customer Then you can provide the appropriate action depending on whether you're testing or running in production: function main()
doWithCustomers(sendCoupon)
function dryRun()
count=0
doWithCustomers(() -> count++)
print 'we will pay' + counter * 1000 It should be fairly trivial to convert the closure to an object if your language has the latter but lacks the former. For more functional-oriented languages, you may want to look up the fold function (also commonly known as reduce) as a generic language built-in replacement for the doWithCustomers function. Filter and map may also help. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436257",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/290636/"
]
} |
436,422 | I've been reading a book about C#. What does the word 'set' mean in the following excerpt? Pattern matching with the switch statement : Like the if statement, the switch statement supports pattern matching in C# 7.0 and later.
The case values no longer need to be literal values; they can be
patterns. Let's see an example of pattern matching with the switch
statement using a folder path. If you are using macOS, then swap the
commented statement that sets the path variable and replace my
username with your user folder name: Add the following statement to the top of the file to import types for working with input/output: using System.IO; Add statements to the end of the Main method to declare a string path to a file, open it as either a readonly or writeable stream, and
then show a message based on what type and capabilities the stream
has, as shown in the following code: // string path ="/Users/markjprice/Code/Chapter03";
string path => @"C:\Code\Chapter03";
Write("Press R for readonly or W for write: ");
ConsoleKeyInfo key = ReadKey();
WriteLine(); ... My question is about the following part: If you are using macOS, then swap the commented statement that sets the path variable and replace my username with your user folder name Would anybody explain it to me? | This question is one of English semantics, not programming, which initially urged me to vote to close this question as being off topic. However, because "set" is notoriously the word with the most numerous and widely ranging definitions in the English dictionary ( link - 430 definitions in the OED, a whopping 60,000 word long entry), it seems on topic enough to focus on what it specifically means to a programmer when referring to code. Homing in on the intention: the [..] statement that sets the path variable Very specifically, this means the following code (I assume the => was a typo on your part): string path = @"C:\Code\Chapter03"; This statement sets a value ( @"C:\Code\Chapter03" ) in the variable ( path ). That's really all there is to say here. "To set" means "to define a value". You mean that "set" is kind of synonym to 'initialize'? No. Firstly, you declare a variable: string path; Note that this does not set a value (For those who disagree: I'm ignoring default values here - the compiler warns you about uninitialized variables even if the variable's type has a default value). Then, when you first assign a value, that's what we call initialization: string path; // declaration
path = "..."; // initialization
// OR
string path = "..."; // declaration + initialization However, "setting" a value happens any time you change the value of the variable. After the first time, it's no longer called initialization; but it is still "setting" the value. All initializations are also inherently a case of setting a value. But not all cases of a value being set are also an initialization. string path;
path = "A";
path = "B";
path = "C"; The path variable was initialized once but its value was set three times. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436422",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/410037/"
]
} |
436,429 | I am looking for an advice (or an example) on how to organize logging streams in my cloud app (C#). In my app there are logging events related to the application infrastructure level and I want to write them to a file (or STDOUT). I use Serilog and I will enrich those events with custom metadata like environment parameters, kubernetes labels etc. These logs are for backend engineers (me). Besides that I want to have a detailed log of user actions which is very verbose and I want to write it to Loki (or any other log management system). These logs are for user support and analysts. We do not want to pay for a lot of meta information that we do not use. User events a correlated by session ID and there is no need to have a lot of system attributes like kubernetes labels. So the challenge is to provide specific set of custom properties and specific logging sinks (aka logger providers) for different sources of logging events. Besides that, common logging libraries like Serilog or Microsoft.Extensions.Logging have a notion of some global logging context (or a scope) which is shared by all loggers within the process. So adding custom properties to a global logging context is not an option. What do you think? | This question is one of English semantics, not programming, which initially urged me to vote to close this question as being off topic. However, because "set" is notoriously the word with the most numerous and widely ranging definitions in the English dictionary ( link - 430 definitions in the OED, a whopping 60,000 word long entry), it seems on topic enough to focus on what it specifically means to a programmer when referring to code. Homing in on the intention: the [..] statement that sets the path variable Very specifically, this means the following code (I assume the => was a typo on your part): string path = @"C:\Code\Chapter03"; This statement sets a value ( @"C:\Code\Chapter03" ) in the variable ( path ). That's really all there is to say here. "To set" means "to define a value". You mean that "set" is kind of synonym to 'initialize'? No. Firstly, you declare a variable: string path; Note that this does not set a value (For those who disagree: I'm ignoring default values here - the compiler warns you about uninitialized variables even if the variable's type has a default value). Then, when you first assign a value, that's what we call initialization: string path; // declaration
path = "..."; // initialization
// OR
string path = "..."; // declaration + initialization However, "setting" a value happens any time you change the value of the variable. After the first time, it's no longer called initialization; but it is still "setting" the value. All initializations are also inherently a case of setting a value. But not all cases of a value being set are also an initialization. string path;
path = "A";
path = "B";
path = "C"; The path variable was initialized once but its value was set three times. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436429",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/265914/"
]
} |
436,453 | Let's say I have a User class described as follows: record User (
Guid Id,
string Username,
string Password
)
{
/* some methods */
}; Let's say Username s and Passwords need at the very least 8 characters before being considered valid. Should the domain be concerned with validating that? Or the application layer? | This question is one of English semantics, not programming, which initially urged me to vote to close this question as being off topic. However, because "set" is notoriously the word with the most numerous and widely ranging definitions in the English dictionary ( link - 430 definitions in the OED, a whopping 60,000 word long entry), it seems on topic enough to focus on what it specifically means to a programmer when referring to code. Homing in on the intention: the [..] statement that sets the path variable Very specifically, this means the following code (I assume the => was a typo on your part): string path = @"C:\Code\Chapter03"; This statement sets a value ( @"C:\Code\Chapter03" ) in the variable ( path ). That's really all there is to say here. "To set" means "to define a value". You mean that "set" is kind of synonym to 'initialize'? No. Firstly, you declare a variable: string path; Note that this does not set a value (For those who disagree: I'm ignoring default values here - the compiler warns you about uninitialized variables even if the variable's type has a default value). Then, when you first assign a value, that's what we call initialization: string path; // declaration
path = "..."; // initialization
// OR
string path = "..."; // declaration + initialization However, "setting" a value happens any time you change the value of the variable. After the first time, it's no longer called initialization; but it is still "setting" the value. All initializations are also inherently a case of setting a value. But not all cases of a value being set are also an initialization. string path;
path = "A";
path = "B";
path = "C"; The path variable was initialized once but its value was set three times. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436453",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/410102/"
]
} |
436,540 | In our company we had a discussion whether formatting data in a certain locale is the responsibility of the frontend application or of the API that provides data to it. Which of the following scenarios would be best practice? Scenario 1 The API returns data (DateTimes, decimals, etc.) in their original data type (i.e. as DateTime, decimal, etc.) The frontend application is responsible for formatting the data that's been provided by the API in the required locale. Scenario 2 The API is responsible for formatting DateTimes, decimals, etc. in a specific locale and returning them as formatted strings to the frontend. The frontend displays the data (formatted strings) as they are, and doesn't need to take care of the formatting. | Approach 1 – handling formatting in the frontend – is usually the best answer, as once something has been formatted it is less suitable for further processing. If there are multiple consumers in different cultures it is more natural to handle that closer to the point of use. It also saves you having to pass the culture back up to the API, and consequently simplifies the test coverage. However, in some cases you are obliged to use the other approach (localization in the backend). For example, sorting and pagination - what constitutes "alphabetical order" is culture-dependent, which would appear in, say, the selection of sort order in an SQL query so that the cursor is in the user's culture-appropriate sort order (so that results can be incrementally returned by the cursor, instead of all results being returned to the front end and then having the front end reorder them). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436540",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/277611/"
]
} |
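As one hedged C# illustration of the front-end-formatting approach recommended in the answer above (a minimal sketch; the values and variable names are assumptions, not taken from the original post), the API hands over raw DateTime and decimal values and the consumer renders them with whatever CultureInfo the user selected:
using System;
using System.Globalization;
class LocaleFormattingSketch
{
    static void Main()
    {
        // Raw values, exactly as an API would return them (no formatting applied yet).
        DateTime shippedOn = new DateTime(2022, 5, 31);
        decimal price = 1234.56m;
        // The consumer decides how to render them, per user culture.
        foreach (var name in new[] { "en-US", "de-DE", "fr-FR" })
        {
            var culture = CultureInfo.GetCultureInfo(name);
            Console.WriteLine($"{name}: {shippedOn.ToString("d", culture)} / {price.ToString("N2", culture)}");
        }
    }
}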
436,567 | I have to build a micro service for my company, the backend is an Oracle database, but the micro service must consume five (5) stored procedures that already exist in the database (as shown in the screenshot). The database architecture can't change. My question is: must I build only one (1) micro service that consumes these five(5) stored procedures? Or must I build five (5) micro services that consume each one of these stored procedures? I'm very confused about making that decision, because in most architecture notes about micro services, one micro service should exist only for one database but this way it wouldn't be a decoupled solution. But the other hand, if I build five micro services, each one using a stored procedure, it would be five services that share the same database or repository of data. Thanks a lot for your attention. | I get the feeling that some decision maker in your company heard about the buzzword "micro-service" and decided that you need to have them as well, regardless of if they actually solve a problem you are having. The primary reason why all notes talk about a separate database for each micro-service is because micro-services are intended to be independently deployable and scalable. If you find that one feature of your application is much more heavily used than the other features, then if each of the features is implemented by a (proper) micro-service, then you can just run more instances of that one heavily used service. And that can also mean having multiple instances of the database to keep the load on that part of the service within reason. With your constraint of a fixed database architecture, I would design a single service for that and, independently of whether that service meets all the checkboxes for a micro-service, call it a micro-service (for political reasons). In short, create the best design given the technical constraints and then slap on the buzz-word labels that people want to see. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436567",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/410361/"
]
} |
436,627 | The way I see used most frequently int main()
{
int i;
for (i=0; i<10; i++)
{
printf("hello\n");
}
return 0;
} The way I’m used to int main()
{
for (int i=0; i<10; i++)
{
printf("hello\n");
}
return 0;
} Questions Is the first way not cumbersome? What is the rationale behind it? Background I am helping my nephew with his coding; he is learning C. I’m able to understand some of it because of my experience with Bash. He is in a third world country, and the teachers just throw books at the children and leave them to it. I suspect they don’t know the material themselves. | Versions of C up to and including C89 (i.e. the language version standardised in 1989; note this was the last major revision to the C standard before 1999) allowed variables to be declared only at the beginning of a scope, which forces you into the style shown in your first snippet. If your nephew is using an older textbook (which I suspect is more likely in a third-world country), they may not have been updated to the new style. There is no real reason to declare the variable outside the loop these days . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436627",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/395602/"
]
} |
436,738 | I've just started learning about Inheritance vs Composition and it's kind of tricky for me to get my head around it for some reason. I have these classes: Person class Person
{
public string Name { get; set; }
public int Age { get; set; }
public Person(string name)
{
Name = name;
}
public override string ToString()
{
return Name;
}
public virtual void Greet()
{
Console.WriteLine("Hello!");
}
} Teacher class Teacher : Person
{
public Teacher() : base("empty")
{
}
public override void Greet()
{
base.Greet();
Console.WriteLine("I'm a teacher");
}
} Student class Student : Person
{
public Student() : base("empty")
{
}
public override void Greet()
{
base.Greet();
Console.WriteLine("I'm a student!");
}
} And I've been told this: Protip: don't use class inheritance to model domain (real-world)
relationships like people/humans, because eventually you'll run into
very painful problems. And I don't really get why. First of all, what does model domain mean? Also, if I shouldn't use inheritance, should I use composition? If so, how would my code look? And also, what are those "very painful problems" that can appear? | The problem I have with this model is that teacher & student are roles while person is a real entity. While this model will work in the short term, it will have problems if: a student becomes a teacher, or, if a teacher takes a course becoming a student (or also if a student graduates, and is no longer a student). Student & Teacher are ephemeral roles (played by people) whereas Person is persistent entity. Thus, an is-a relationship between Student and Person or between Teacher and Person is inappropriate. Also, if I shouldn't use inheritance, should I use composition? Yes, using composition will allow a Person's roles to come and go without having to create/destroy a new Person object. You just need to model that a Person object can have a relationships with role objects. If the role captures extra information (e.g. Teacher of what subject/classes), having role objects refer to person objects might make sense, and if you need to quickly identify all the roles a person has, then as set of roles within the Person object also makes sense. That model also captures Age which is a concept that is relative to now, which is constantly changing. This will also have problems over time — instead, capture a non-relative value like year born. First of all, what does model domain mean? A domain model has the purpose of being able to capture information in order to be able to later answer questions that you want to ask. We model for the purpose of providing automation of some (usually highly repetitive) task in the domain. We are not trying to recreate the domain within the computer, but instead to automate some portion of the domain. Perhaps just record keeping, or perhaps automating some part of assigning classrooms to classes, teachers to classes, students to classes, timeslots to lectures. If just record keeping, still need to know what questions & answers you want those records to be able to give. So, you want to identify what automation is intended, then identify what answers you want that to give, what decisions to make, then identify what questions to ask of the domain modeling, and what information has to be captured/modeled for these. Then, we attempt to model just enough for that: don't over model for things that the automation won't help with (for example, we don't need a plethora of classes when objects and fields will do) and yet model sufficiently that the automation works properly. We model so as to facilitate capturing information, so we can later ask (specific, known) questions of that information, get answers, and make decisions — all in support of some amount of the automatable portion of a domain. The overall automation design should determine what information to capture/model (and when and how), what questions to ask & when, what decisions to make & when. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436738",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/410724/"
]
} |
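A minimal C# sketch of the role-based composition described in the answer above (the Role types and member names are illustrative assumptions, not code from the original post): a persistent Person holds a changeable set of ephemeral roles, and stores the year of birth rather than an age.
using System;
using System.Collections.Generic;
abstract class Role { }                                   // ephemeral roles a person can take on
class StudentRole : Role { }
class TeacherRole : Role
{
    public string Subject { get; }
    public TeacherRole(string subject) { Subject = subject; }
}
class Person                                              // persistent entity
{
    public string Name { get; }
    public int YearBorn { get; }                          // non-relative value instead of Age
    private readonly List<Role> roles = new List<Role>();
    public Person(string name, int yearBorn) { Name = name; YearBorn = yearBorn; }
    public void AddRole(Role role) => roles.Add(role);
    public void RemoveRole(Role role) => roles.Remove(role);
    public IReadOnlyList<Role> Roles => roles;
}
class CompositionDemo
{
    static void Main()
    {
        var ada = new Person("Ada", 1990);
        ada.AddRole(new StudentRole());                   // roles come and go...
        ada.AddRole(new TeacherRole("Math"));             // ...without recreating the Person
        Console.WriteLine($"{ada.Name} currently has {ada.Roles.Count} role(s).");
    }
}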
436,937 | Currently, I'm reading "Code Complete" by Steve McConnell, 24.3 chapter "Specific Refactorings". I've become kind of confused with 4th paragraph, that looks like: Move an expression inline Replace an intermediate variable that was
assigned the result of an expression with the expression itself. Does that inline expression decrease readability of code compared to a well-named (even if it intermediate) variable? When will it be recommended to use inline expressions and when intermediate variables? | Personally I prefer having temporary variables with explicit names (but don't abuse them either). For me: void foo()
{
int const number_of_elements = (function_1() + function_2()) / function_3(); // Or more complex expression
write(number_of_elements);
} is clearer than: void foo()
{
write((function_1() + function_2()) / function_3());
} But if you use temporary variables for something like: void foo()
{
int const number_of_elements = get_number_of_elements();
write(number_of_elements);
} This is less readable than: void foo()
{
write(get_number_of_elements());
} I am not the author so I can't know what he was thinking when writing this line but it is a possibility. The book being quite old (18 years for the second edition) there might also be historical reasons for that... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436937",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/405202/"
]
} |
436,939 | I have a class called SomethingProvider that contains: private static string convertMapA(string convertA)
{
switch (convertA?.ToUpper())
{
case "NONE":
case "TEST":
case "MEHTEST":
return "None";
case "MEH":
return "meh";
case "DAY":
case "SOMEDAY":
return "da";
case "CON":
case "C":
return "cd";
default:
return convertA;
}
}
private static string convertMapB(string convertB)
{
switch (convertB?.ToUpper())
{
case "ANNY":
case "ANYU":
case "BLAH":
return "something";
default:
return convertB;
}
} There can be multiple versions of this class, i.e. ClassAlphaProvider, ClassBetaProvider.
Each method Convert will have slightly different mappings depending on its class. And to make it more interesting, there are multiple Projects, with multiple classes, with these convertMapping methods that vary per class. Now, I was considering a dictionary mapping instead, or implementing the strategy pattern, but that feels like overkill for string mapping. Maybe even a Db implementation of sorts? What would be a good approach to refactoring this? NOTE: the case statements change frequently. I'm trying to keep open/closed in mind, given that the case statements change a lot. | Personally I prefer having temporary variables with explicit names (but don't abuse them either). For me: void foo()
{
int const number_of_elements = (function_1() + function_2()) / function_3(); // Or more complex expression
write(number_of_elements);
} is clearer than: void foo()
{
write((function_1() + function_2()) / function_3());
} But if you use temporary variables for something like: void foo()
{
int const number_of_elements = get_number_of_elements();
write(number_of_elements);
} This is less readable than: void foo()
{
write(get_number_of_elements());
} I am not the author so I can't know what he was thinking when writing this line but it is a possibility. The book being quite old (18 years for the second edition) there might also be historical reasons for that... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/436939",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/99398/"
]
} |
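For the dictionary-mapping idea floated in question 436,939 above, a minimal C# sketch could look like the following (the table contents mirror convertMapA from the question; everything else is an illustrative assumption): adding a new case becomes adding a dictionary entry rather than editing a switch.
using System;
using System.Collections.Generic;
static class ConvertMaps
{
    // One lookup table per mapping, keyed case-insensitively like the original ToUpper() switch.
    private static readonly Dictionary<string, string> MapA =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            ["NONE"] = "None", ["TEST"] = "None", ["MEHTEST"] = "None",
            ["MEH"] = "meh",
            ["DAY"] = "da", ["SOMEDAY"] = "da",
            ["CON"] = "cd", ["C"] = "cd",
        };
    // Falls back to the input value, matching the default branch of the original switch.
    public static string ConvertA(string value) =>
        value != null && MapA.TryGetValue(value, out var mapped) ? mapped : value;
}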
437,454 | I will explain with a hypothetical example. Suppose that my domain is Cars. Everyone around the software talks about cars. Car is the aggregate root of aggregate roots. For example, the CAR table has 150+ columns in the database (irrelevant for this question, but for you to imagine). So, there is a class in the system called Car . This class has a lot of fields and behavior in it. Suppose that, given a Car , we have to calculate its HorsePower . This calculation has its own rules and logic. But for the calculation only a part (some fields) of the Car is required. And I want to extract this logic from the Car class in order to make it more visible where this happens, how it happens and what information is needed for it to happen. So I end up with something like this, which is purely functional: public class HorsePowerCalculator{
HorsePower calculate(Car c){..}
} My doubt is the following. If I pass a Car object to calculate , I have 2 issues (in my opinion). a) What HorsePowerCalculator needs in order to do its job is not straightforward, because I pass the whole Car object and HorsePowerCalculator can access all (read) properties of a car. b) For testing HorsePowerCalculator at the unit test level, I need to create a Car object, which is not a trivial job. Other things must be taken into consideration that are irrelevant for the HorsePower calculation. So, I was thinking that I could solve the above mentioned issues by doing this: public interface HasHorsePower{
int getNumberOfPistons();
EngineType getEngineType();
FuelType getFuelType();
//HorsePower needs also non-cohesive properties.
//This is why I can't group the properties to i.e "Engine" class
int getNumberOfWheels();
} Then: public class Car implements HasHorsePower{
} And finally: public class HorsePowerCalculator{
HorsePower calculate(HasHorsePower something){..}
} After this, I gain: a) it is pretty straightforward what information is needed to calculate horse power: all properties/methods of the interface. b) Testing the calculations and logic of HorsePowerCalculator means a TestDouble in the test suite with just setters/getters. Then just assert the calculation results. My doubt is that the only HasHorsePower implementation will be... well... only the Car . Is this solution a code smell, or let's say not optimal? What should I do in this case? | In the described context, there is some unstructured legacy code. Now, to improve this situation, you add more structure to it by using classes and interfaces to create sensible abstractions - just the same way as you do by extracting functions or methods from other functions which have become too large over time. Where I work, we would call this simply "cleaning up the code", or "basics of software design" - that's pretty much the opposite of a code smell. A "code smell" could be a class which has grown too large over time and motivates you to refactor parts out of it. This can lead to functions, classes and interfaces which are used in only one place; there is nothing special about that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/437454",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/385136/"
]
} |
437,520 | class Foo {
abstract doStuff() {}
}
class Bar extends Foo {
doStuff() { ... }
}
class Baz extends Foo {
doStuff() { ... }
} From a functional perspective doStuff does exactly the same thing, however if it should be done to bar or baz then the implementation is completely different. I try to separate data from business logic, so as doStuff is business logic and not class behavior, it should be outside. class Foo {...} // accessors inside
class Bar extends Foo {...} // accessors inside
class Baz extends Foo {...} // accessors inside
class Manager {
doStuff(foo) {
if(foo instanceof Bar) {
return this.doBarStuff(foo);
}
if(foo instanceof Baz) {
return this.doBazStuff(foo);
}
throw;
}
doBarStuff(bar) { ... }
doBazStuff(baz) { ... }
} Usage of instanceof has always been considered a bad pattern by me, and I always try to use polymorphism instead. However, by separating business logic from data, I found myself stuck. I could use a kind of accessor getType() and use a switch to avoid the usage of instanceof , but for me it is nearly the same issue, as it seems to not use all the polymorphism benefits. At the moment I have only 2 child classes and it should not be extended in the near future. So for now, I prefer to use instanceof instead of an accessor. People that read the child class will ask "What is it?" about getType() because it is not related to data but is a "flag" used by the business logic switch. By using a trait or composition to write the business logic, the business logic will be written in different files, which just makes it more difficult to read IMO. What would you advise? Is there some other way I have not considered to fix it? | Your problem is that you're trying to use both rich domain models ("I always try to use polymorphism") and anaemic domain models ("I try to separate data from business logic") at the same time. Both are valid programming patterns, but they take fundamentally different approaches as to whether the business logic code sits in the domain objects or not - you can't mix and match the two. Pick one or the other and stick to it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/437520",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/412431/"
]
} |
437,529 | I was researching about best practices for standardised JSON response formats for APIs, according to various sources available online general consensus looks something like this: //Successful request:
{
"success": true,
"data": {
/* requested data */
},
"message": null
}
//For failed request:
{
"success": false,
"data": {
/* error data */
}
"message": "Error: bad stuff"
} My question is: what is the reasoning behind the "success" parameter inside the response body? Shouldn't the info about whether the request was successful or not be determined from HTTP status codes instead of additional parameters like "success"? Also, many HTTP clients, like axios , will throw exceptions based on response status code, which can simplify the handling of requests.
Example of using axios and status code exceptions instead of "success" parameter: axios.get('/api/login')
.then((response) => {
// The request was successful do something
}).catch(function (error) {
if (error.response) {
// Request made and server responded with HTTP status code out of 2xx range
console.log(error.response.data);
// Handle error json data in body
console.log(error.response.status);
} else if (error.request) {
// The request was made but no response was received
console.log(error.request);
} else {
// Something happened in setting up the request that triggered an Error
console.log('Error', error.message);
}
}); I would appreciate it if someone could give me a few reasons why the standard with "success" param inside the json response is so common. There is probably something important I am missing related to motivation for such an approach. | A few potential reasons why you may wish to do this are: the fact that some HTTP clients treat anything other than 2xx as an "exception" to be thrown, which can hide differences between transport errors (like no connection, invalid firewall, invalid endpoint/URL, etc.) and actual API errors (bad API key, request validation rules, etc.), which could both end up being thrown as errors, which leads to extra work in determining which it was and extracting the actual API error text responses that aren't accurately / completely represented by normal status code, or even multi action requests, where you have >1 API action in a single HTTP request and each one may succeed or fail individually, in which case either a 2xx or 4xx would not be accurate I personally prefer to inspect the JSON for errors instead of hoping that the language I'm using will easily differentiate between different kinds of errors. Java and C# for example would allow me to do multiple different catches of specific exceptions on a single try, but in JavaScript where anything can be an error and each try only allows a single catch, it's messier to separate transport errors from API errors | {
"source": [
"https://softwareengineering.stackexchange.com/questions/437529",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/412445/"
]
} |
437,777 | When writing unit tests, I feel that there is a trade-off between code repetition and test logic. Example of my current (likely flawed) approach: To test this function (overly simple function for illustration): from warnings import warn
class AmbiguousSignWarning(Warning):
pass
def product_sign(arg1, arg2):
"""
Returns 1 if product is non-negative and -1 if negative. Warns when product is 0.
"""
product = arg1 * arg2
if product >= 0:
if product == 0:
warn(AmbiguousSignWarning(
"0 can be considered both positive or negative. Treated as positive."
))
return 1
else:
return -1 I'd write something like: from warnings import filterwarnings, resetwarnings
from pytest import mark, warns
PRODUCT_SIGN_TEST_CASES = [
{
"description": "arg1 > 0, arg2 > 0",
"arg1": 1,
"arg2": 14,
"expected_warning": None,
"expected_result": 1,
},
{
"description": "arg1 == 0, arg2 > 0",
"arg1": 0,
"arg2": 14,
"expected_warning": AmbiguousSignWarning,
"expected_result": 1,
},
{
"description": "arg1 < 0, arg2 > 0",
"arg1": -12,
"arg2": 14,
"expected_warning": None,
"expected_result": -1,
},
# Goes on like this to exhaust combinations of signs of arg1 and arg2
]
class TestProductSign:
@mark.parametrize("test_case", PRODUCT_SIGN_TEST_CASES)
def test_product_sign(self, test_case):
if test_case["expected_warning"] is None:
filterwarnings("error", category=AmbiguousSignWarning)
result = product_sign(
arg1=test_case["arg1"],
arg2=test_case["arg2"],
)
resetwarnings()
else:
with warns(AmbiguousSignWarning):
result = product_sign(
arg1=test_case["arg1"],
arg2=test_case["arg2"],
)
assert result == test_case["expected_result"] What I don't like about this approach is that I basically copy paste test cases, modifying them slightly depending on what each tests. Copy-pasting is a no-no (modifying the function would likely end up in having to modify each test case), but how do I avoid code duplication without introducing more test logic? More logic means more potential for error (perhaps not in this simple example, but in more complex functions); at some point, I would have to test my test logic. My approach also bloats up the testing module as functions that have more complex signatures, or arguments end up using a lot of lines of code just to define the test cases. If this indeed is an unavoidable tradeoff, should I prefer duplicated (logic-free) code (higher maintenance) over more testing logic (sources for error), especially in a setting where correctness of code is vital? Some comments on why this design: I wanted to avoid logic as much as possible: single test with minimal logic tests all cases; developer doesn't need to understand the logic of multiple tests The code that is duplicated basically contains no logic, so downstream modifications may be laborious, but less error-prone All test cases are written as literals; minimal code to arrange test cases Addition and removal of test cases should require only looking at the variable PRODUCT_SIGN_TEST_CASES , seeing what dictionary key the other tests provide, and mimicking it. A test_case object is usually printed by pytest when it fails, so that inspection of "description" explains what is tested (since I don't have descriptive test name) Potential antipatterns I tried to avoid: Lots of testing code, requiring someone who reads the code to understand various test structures Need for testing unit test code due to complex logic I couldn't find my answer here: Is it OK to repeat code for unit tests? Is it bad practice to repeat logic being tested in unit tests? Is having some logic in source code in order to perform some tests a good practice? Also, I looked at some repositories on GitHub and found them either to have too much testing logic for my taste, or repeated testing logic achieving the same as I do with repetition in the PRODUCT_SIGN_TEST_CASES variable (example: https://github.com/scipy/scipy/blob/main/scipy/linalg/tests/test_decomp_cholesky.py ). | The problem is that the test has any logic at all. Conditionals and looping structures indicate the test is doing too much. Like jonsharpe said in a comment, this test needs to be split into at least two parameterized tests. To promote this separation, create two different variables holding test data. One set is the "happy path" and the other set is for the "unhappy path". The advantage here is that you can give each test data variable a relevant name. The more focused a test is, the easier it is to maintain as the application evolves. Rather than parameterize the warning tests, consider writing specific tests with a good name to verify each kind of warning. Data-driven tests are useful for boundary testing, where a range of values are valid and each value in the range has the same expected behavior. Tests should verify only one behavior. As soon as they verify more than one behavior, the test has too many reasons to fail. Conditionals are an immediate sign the test has too many reasons to fail. Refactor the test into multiple tests and give each test a good name. 
Tests that make more than one assertion should be inspected to ensure the assertions are related to the behavior under test. Assertions unrelated to the behavior being tested should be refactored into their own tests. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/437777",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/412985/"
]
} |
437,823 | A few days ago I stumbled upon a silly "problem" that made me reflect about encapsulation and OOP design. I have a class called User that has a method hasMinimumLegalAge() that checks if the user is legal (above 21 years) to perform some operations in the system: class User {
...
private static readonly MINIMUM_LEGAL_AGE: number = 21;
private birthDate: Date
...
hasMinimumLegalAge(currentDate: Date): boolean
{
//perform date difference between today - birthdate, check is user is above MINIMUM_LEGAL_AGE.
}
} In order for the method to be able to tell if the user is legal, it needs the current date to perform the date difference. I have two options for this: hard-code the Date object inside the method or receive the current date as an argument. Approach 1 (hard-code the Date object inside the method): The problem with this approach is testability and coupling, I'll not have control over the date, thus i cannot easily test some scenarios, and my test will only be true for some time, but i gain "encapsulation" because know my method is able to respond to the message without any arguments (that seems ideal for me, in this case) Approach 2 (receive the current date as an argument): it's kind of weird to me to send a message "are you legal?" and then the method responds: "first I need to know what date is today", I feel like the object should have the necessary data to respond to this question, it feels like encapsulation is leaking because the method is depending on external information, even knowing that hard-coding the Date object is not ideal, but is it worth only for testability? what are your thoughts? | Always pass the date as a parameter, otherwise you cant write tests. From a modeling perspective, "current date" is something outside the user, shared by the whole system. If current date was encapsulated in the User object, then each user could have their own individual current date, which does not make sense. Practically speaking, a function which tells if a User is of legal age on a given date is much more useful than than a function which can only tell if the user is legal today . Lets say the user put a reservation for a table at a bar on some date in the future. What matters is if the user is of legal age at the day of the bar visit, not if they are legal "today". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/437823",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/272737/"
]
} |
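A C# rendering of the answer above, for illustration only (the original question is TypeScript; the class shape here is an assumption): the reference date is passed in, so the check is deterministic and works just as well for a future reservation date as for today.
using System;
class User
{
    private const int MinimumLegalAge = 21;
    private readonly DateTime birthDate;
    public User(DateTime birthDate) { this.birthDate = birthDate; }
    // The caller supplies the date the check applies to (today, a reservation date, ...).
    public bool HasMinimumLegalAge(DateTime onDate)
    {
        int age = onDate.Year - birthDate.Year;
        if (onDate < birthDate.AddYears(age)) age--;      // birthday not reached yet in that year
        return age >= MinimumLegalAge;
    }
}
class LegalAgeDemo
{
    static void Main()
    {
        var user = new User(new DateTime(2004, 6, 15));
        // Deterministic in tests: no hidden dependency on DateTime.Now.
        Console.WriteLine(user.HasMinimumLegalAge(new DateTime(2025, 6, 14)));  // False (still 20)
        Console.WriteLine(user.HasMinimumLegalAge(new DateTime(2025, 6, 15)));  // True (21 that day)
    }
}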
437,870 | Let's say that I have an articles table that have two columns: title and content , and let's say that this articles table doesn't have any relationship with any other table. Why would this articles table have a primary key? I mean, what problems could I face if the articles table doesn't have a primary key? Having identical rows is not a problem because two or more articles can have the same title and content. | The main benefit of a primary key has nothing to do with foreign keys. A primary key allows you to identify a single record in that table. Presumably, the system will have multiple articles. If all your application ever does is show a list of articles, then a primary key won't be much use. As soon as you want to show just a single, specific article , the primary key becomes mandatory. When showing a single record to the end user, do not assume an index within the result set is enough. Consider a case when a user chooses to view article number 2. While viewing the list of articles, someone adds another article. Depending on how you sort the result set, showing "article 2" might end up showing article number 3. Primary keys are also necessary for discrete, accurate updates. The primary key would be a discriminator value used in the UPDATE statement in order to ensure you don't accidentally update the wrong record (or no record at all). update articles
set ...
where id = 5; Same thing for DELETEs. You need primary keys on a table if you want to reliably: View a single record Update a single record Delete a single record Changing data without referencing the primary key value is risky in most use cases, and I do not recommend doing it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/437870",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/247763/"
]
} |
437,921 | Preface, TL;DR This question is about the implied tradeoff between the speed of development and the quality of code. I am looking for methodologies which can be followed in order to optimize development speed , even at the expense of code-quality and the accumulation of technical debt. I am not looking for ways of communicating the importance of code quality to those in charge. Please assume everyone understands the importance, but have chosen to optimize for speed of development and that they are correct in that decision. I have learned a lot about how to optimize for quality, but nothing about optimizing for speed of development, and am looking for references. My personal experience taught me quick and clean go together, which is not what I see around me, making me believe I am missing something very fundamental. Code in real life isn't good I keep encountering code during my work that is less than optimal.
By that I don't mean simple improvements could be made, I mean: Lack of encapsulation ("what" mixed with "how" on several levels) Multitude of duplicated conditionals over different modules Under-abstraction Wrong naming Lack of tests Any other poison you can come up with This is way beyond things that a short refactoring session can remedy. I see this over and over across many software engineers, across several seniority levels, and across several (startup) companies. When talking to colleagues outside the office about it, the vibe is "The code is always bad. You have to learn to deal with it" (which I am trying to do now). When talking gently with colleagues from the office, they are either not aware of the problems (mostly juniors or non software majors), or they excuse it by "we had to get it done fast". I am beginning to believe that if so many seniors claim they optimized development speed at the expense of code quality, they have some secret method of getting it done, repeatedly, which I somehow miss. Otherwise, how did it get to be like this? I mean, this has to be done deliberately , right? No one wants tie both their shoes together, though sometimes they have to. Even then, there has to be a correct and a wrong way to do it. I believe this is code in real life. I read a lot about good code. For example, The Gang of Four , Clean Code [non affiliate link], lectures , and SOLID . I observe that these practices are considered by many as nice to have, but are not followed in practice, especially when under the pressure of business, or if people don't code as a passion, and don't extend their knowledge or notice patterns by other skilled programmers. This tendency of code not being perfect will probably be amplified on my own career path, which is leaning toward algorithm development, and away from pure software engineering. What do I do today? When I write code " from zero ", it is very easy for me to implement the good-code principles, and I tend to be faster as I write good code. I tend to follow my own code much better, thinking less and typing more, having fewer bugs, and able to then being able to explain to myself and others what's going on, when it is organized. I feel this makes me faster, as most of the time coding goes for thinking, and writing tests, which good code minimizes, and encapsulates. When handling existing, good code, there is a slight learning curve, but then I can treat it quite easily, and not worry that I might break things. I can be quite sure I am on the road to being done and explain what's going on and estimate how long it will take. When handling existing bad code, I tend to: Not know what I am going to break with my changes. Not know how code is structured and have to read the whole thing prior to writing, which is virtually impossible due to wrong naming and no encapsulation, no external documentation, and generally asking people results in "oh, yeah, there's also that case" if they are still even around. Start to build encapsulations myself, just to know that when I'm done, I am really done, and that I didn't break anything. Also this allows me to not break my own changes elsewhere. This is a really slow process, relative to the person who wrote the code, who knows by heart all the pitfalls in the code. This is much slower than the boss expects. I usually can't give a time estimate for my work prior to going into such code. Many times, I only find out about the code quality after having spent some time in it. 
When being asked to code "quick and dirty", I have no idea what to do in practice. I can code quick and clean from zero or slow and clean. I want a tool in my arsenal, to be able to code quick at the expense of dirty, to be able to choose when to use it. This topic seems clear to everyone, but no one actually talks about it. Bosses and colleagues, mostly at startups, all say once in a while "quick and dirty" or "do it faster; we will repay the technical debt later", or "just get it done; deal with it later". I never hear people talk technically or methodically about how, in practice, to favor quick and dirty over clean and slow. As I view it, quick and clean go together when "actually working code" is implied. How can I get all three done together, in new code, and in existing code? Deliver faster than when writing clean code (implied doing something dirty) Writing working, tested, explainable code (sleeping well, not worrying production servers just broke because of me) Working regular hours I understand how to write good code. I want to find out about methodologies of being faster at the expense of quality. I am looking for methodology, rules of thumb, and book or lecture references. | Whether one works clean or dirty is more a question of developer attitude and abilities, and the same holds for coding speed - this is rarely a deliberate decision people make. Of course, there are devs who appear to work overly slow because they tend to be overly clean. But I have never seen a dev in my life who was really quick because their code was dirty. I have met devs who believed they were quick, but their dirty code haunted back at them the first time a tester or user tried to work with their mess. These devs may got the code quickly out of the door, just to get an even quicker phone call from a person who stumbled over the first few errors and made them clear their work wasn't finished. Hence the idea of a tool in my arsenal, to be able to code quick at the expense of dirty, to be able to choose when to use it is IMHO flawed. The sweet spot for being quick is not "dirty", the sweet spot is found by being clean enough, but not being excessively clean invest enough time into proofreading and testing, because saving time by leaving these steps out never works knowing when to stop with unnecessary abstractions knowing which requirements have to be solved now , and stop worrying about "requirements which may or may not arise within the next ten years", which one actually cannot foresee. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/437921",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/329411/"
]
} |
438,417 | We run a deployment pipeline where we build a versioned binary, tag the commit it was built from with the same version as the binary, and then can deploy the binary into arbitrary environments (typically qa, then live). I'd like to keep a record of which commit is currently deployed in an environment as a git tag or branch on the canonical remote repository. I'm imagining a branch or a tag called live (or prod or whatever) and a deployment process that on successful deployment moves that tag / hard resets that branch to the commit that is deployed, so in principle you can just do a git pull && git checkout live and (race conditions aside) you're looking at the code that is currently deployed in live. However, neither tags nor branches quite measure up... Branches might feel more correct in that they are pointers that are meant to move between commits. However, in a world where people can roll back, a branch with its assumption of always moving forward doesn't quite fit. Resetting the remote live branch to an earlier commit will mean that a dev checking out live may be ahead of the remote branch and will need to git reset --hard origin/live to get back to what is actually in live. I can also imagine a situation where pulling might present you with a nasty merge conflict. We'd also need to protect the branch as it would not be intended for developers to commit and push to it. On the other hand tags aren't really designed to move; if you move a tag on the remote repo, a git pull --tags will fail as so: ! [rejected] live -> live (would clobber existing tag) unless you do a git pull -f --tags . Still feels like tags are slightly the better option, as they convey the idea that it's just a marker, not a work in progress. Does anyone have a view either way? Or is there a Third Way of some kind? Or is trying to do this just a bad idea? We bake the git hash into the binary and allow easy reading of it, so it's not a huge hardship to go to the environment and find out which commit is deployed there. It would just be convenient to be able to see it in the git log. | This is not a problem that version control was meant to solve. Recording which commit is deployed to which environment is an artifact of the build process. If using an automated tool for build and deployment, check the features it supports. Most tools support a naming convention for builds. Many of them integrate with Git and other version control systems to associate a build or release with a commit identifier. Typically a new tag is created for each deployment, something like 1.0.3 if using semantic versioning . Part of the build process could record the commit Id as part of some config setting or version number that becomes visible through the application. For a web application, this could be an HTML comment or some static text pulled from a config file. Web APIs could have an endpoint that returns the current version information, including a git commit Id or tag name. Version control is meant to record changes to source code. The commit Id deployed to a particular environment is transient meta data about the larger system that is out of scope for version control. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/438417",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/414356/"
]
} |
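As a hedged sketch of the "endpoint that returns the current version information" idea from the answer above (an ASP.NET Core minimal API; the environment variable names are assumptions that a deployment pipeline would have to set):
using System;
using Microsoft.AspNetCore.Builder;
var app = WebApplication.CreateBuilder(args).Build();
// The build/deploy pipeline stamps the commit id into the environment (or a config file).
app.MapGet("/version", () => new
{
    commit = Environment.GetEnvironmentVariable("GIT_COMMIT") ?? "unknown",
    deployedAt = Environment.GetEnvironmentVariable("DEPLOYED_AT") ?? "unknown"
});
app.Run();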
438,534 | An external consultant to our team advised us to rewrite our SaaS offering (essentially a CRUD API) in .NET because this is more "scalable" than using Node.js (or Ruby on Rails , Flask , etc.). By that they seem to mean that a backend API written in .NET will handle the growing performance requirements much better than a backend API written in a scripting language like JavaScript, Ruby or Python, once the startup scales to tens of millions of users (an ambitious dream). To me this seems wrong. The performance of a CRUD API should be completely dominated by the choice of architecture and hardware instead of the programming language. Am I right that switching programming languages will have little impact on the scalability of a CRUD API? | At a global level, you're wrong - language does matter, or at the very least you will spend more $$$ on compute if you write it in a less computationally efficient language. While I'm not at liberty to go into details, I work for Disney Streaming and it's well known we're a Scala shop. If our highest scale services were written in (say) Node.js rather than a JVM language, we would be spending significantly more each month on the containers/servers we need to run our services. Our services are often compute bound, so there does just come a point at which raw language efficiency does matter. That all said, you don't have as many users and as much concurrency as we do. If you had to run twice as many servers, would that have a significant impact on your business? If not, you're probably right to keep working in the languages your team is used to. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/438534",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208955/"
]
} |
438,553 | I have been using an external library for a while now. Unfortunately, it stopped receiving updates, and has since been adopted into my codebase. The issue is the library was poorly documented in the first place. It was picked out of necessity and because it was the only option. Now my question is, how should I document it and how should I go about adopting it into the codebase. The library works well, but the code standards are not up to par with the current codebase and the documentation leaves a lot to be desired. More specifically: Is it worth refactoring the entire library, or extracting only the portions used? Should it be considered SOUP going forward or fully adopted? Is it worth documenting everything, or only the portions that need to be used? I recognize that these answers will vary on a case by case basis, but I am wondering about the general best practice in this scenario, or some questions to ask myself when making these decisions. | At a global level, you're wrong - language does matter, or at the very least you will spend more $$$ on compute if you write it in a less computationally efficient language. While I'm not at liberty to go into details, I work for Disney Streaming and it's well known we're a Scala shop. If our highest scale services were written in (say) Node.js rather than a JVM language, we would be spending significantly more each month on the containers/servers we need to run our services. Our services are often compute bound, so there does just come a point at which raw language efficiency does matter. That all said, you don't have as many users and as much concurrency as we do. If you had to run twice as many servers, would that have a significant impact on your business? If not, you're probably right to keep working in the languages your team is used to. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/438553",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/414694/"
]
} |
438,897 | Doing TDD in a kata is simple. A red test, small amount of code, green test, and refactor. Repeat. And that's it. But, I work on a real application. With a REST controller, a service layer for business logic, a database layer and a converter from entity classes to data transfer classes. How can I start TDD in such cases? When a user story asks to add new functionality (like a new endpoint to do a complex operation on data) I don't know where and how to start. Do I need to start from the controller? But it will do nothing and only call the service layer (so in test I will end by setting up a mock, I think). Do I need to start from the database layer? | Behavior Start with behavior. Don't focus on structure. What matters is some complex operation is supposed to transform some old data to some new data. You can create a test by finding examples of the old and new data. Where that data is stored, file, DB, or memory, is an implementation detail. It doesn't have to leak into the test. Keep that out and you can change how the data is stored without having to touch the test. Fail to keep that out and the tests will actually make refactoring harder. You may find some way to decompose the complex operation into multiple testable steps. If you need that to diagnose errors go for it. Don't feel like TDD demands it though. This may make diagnosing a problem go faster but it locks down implementation details. Now the 3 step complex operation has to be a 3 step complex operation. If you ever figure out how to make it a 2 step operation you'll need to come back and remove some of these tests. Removing them not only improves flexibility but speeds up the test suite without costing you coverage. But once these micro managing tests have been created they tend to stick around. Consider giving them somewhere harmless to be where they wont be run unless needed. Just find some way to get them out of the main suite of tests. Yes code katas are simple. But TDD expects you to take your real application and break it down until the part you're testing is simple. Then you build on that by adding more and more. Yes, that will change the code you can write. That's the point. Now that said, there is some dry boring structural code that doesn't need to be wrapped up in a test of its own. Test the interesting behavior. Mocks Does that mean never mock? No. Tests need the "unit under test" ( which is not necessarily just one class ) to be unit testable . That is, the unit should be fast (run-on-every-compile fast), and deterministic (always does the same thing), always ready for testing (no configuration magic) and should not care about whatever else is running (parallelizable). The best argument for mocking is that the unit won't be those things without the mock. Another argument often made for mocking is to confirm that something was called. This is thorny because sometimes it's right and sometimes it's wrong. If we're testing behavior it's none of the tests business how the unit gets it's work done. Period. Full stop. Except... well sometimes that method call is the behavior of the unit. What gives? Many tests are written in a strictly functional style. Input goes in the arguments. Output gets returned. And side effects are evil! Avoid at all costs! But TDD is used in codebases that aren't purely functional. And sometimes, just sometimes, a unit doesn't return it's output . Instead, it calls a method on an output port . One way to test such units is to mock that output port . 
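A minimal C# illustration of testing through an output port (the interface and names below are illustrative assumptions, not from any particular codebase): the test hands the unit a hand-rolled fake of its output port and asserts on what the fake captured.
using System;
using System.Collections.Generic;
public interface IGreetingOutputPort              // the unit reports results here instead of returning them
{
    void Present(string message);
}
public class Greeter
{
    private readonly IGreetingOutputPort output;
    public Greeter(IGreetingOutputPort output) { this.output = output; }
    public void Greet(string name) => output.Present($"Hello, {name}!");
}
// Hand-rolled fake used by the test: it just records what the unit sent out.
public class CapturingOutputPort : IGreetingOutputPort
{
    public List<string> Messages { get; } = new List<string>();
    public void Present(string message) => Messages.Add(message);
}
public static class GreeterTestSketch
{
    public static void Main()
    {
        var captured = new CapturingOutputPort();
        new Greeter(captured).Greet("Ada");
        // The assertion is about behavior: what reached the output port.
        Console.WriteLine(captured.Messages[0] == "Hello, Ada!" ? "PASS" : "FAIL");
    }
}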
Those are my two excuses for mocking: to improve the testability of the unit or to capture the output to test. Being mockable is not a good excuse to mock. Granularity But some say: every class has an interface. Every interface should be tested. And in isolation or it's really an integration test. Then I say: I've worked in shops that insist on this. I understand the urge to not trust developers to do testing properly and the desire to have rules that are easy to verify. However, us lazy programmers are often smart enough to realize that in such an environment the lazy thing to do is to just not create many classes. Solve the problem procedurally and you can avoid writing the explosive number of tests this philosophy would demand. No static analysis tool will ever catch you deciding against extracting a class because your shop made it too much of a pain. In short, there is no substitute for developers who care about doing this right. Rather than demanding we conform, inspire us to care. This isn't to say you can't test very granular behavior. I'm just saying the way to identify that behavior isn't by ensuring every single class has a single test class. Sometimes a class is the interface for many classes. Types of tests As for the unit vs integration test distinction, I've seen them defined many ways. The most useful definitions will give you two separate piles of tests. Fast ones that you can run with every compile and slow ones that you can run with every merge. Keeping those separate is important because nothing ruins a fast test suite like a slow test. I don't really care what you call them. The point of a test A passing test should make you feel like you can trust the unit to behave. You should feel like you don’t need to read it. You should feel like you trust it the way you trust your languages print command. That should free you to focus on the suspect code. Write tests that will make you feel that way. Conclusion TDD isn't everything. There are many other successful ways to develop. And you can successfully mix them. But if you feel like TDD only works on toy katas you need to play with it more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/438897",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/206191/"
]
} |
438,945 | My workplace's database has a pattern that I've not seen before. Every column that is intended to be a key, whether primary or foreign, ends in _SK . This is shorthand for "surrogate key". It appears to be an informal way to tell the developer that said column is safe to use in joins and won't have any type mismatches or unexpected behavior. For example, our table of dates has many columns that represent the date. DATE_PLAIN is the typical SQL DATE variable that shows data ISO style (e.g. 2022-10-30 ), DATE_VENDOR puts the date in the style that our vendor uses (e.g. 44300 ... their epoch is weird), and DATE_SK is always an INT that uses the familiar 20221030 format. By reading these column names, the pattern immediately tells the developer that DATE_SK is the one that you want for joins. Someone who uses either of the other two options in joins will run in to type mismatches and trouble (I learned that the hard way, e.g. our vendor inconsistently stored their dates as INT and DECIMAL ). This strikes me as a remarkably good idea, which raises the question of why I've not seen it before. Is it a known anti-pattern? | The use of surrogate keys is by itself not an anti pattern. It is a way to create a stable primary key for an entity, that will never change and not depend on any application data. But there are three anti-patterns in the practice you describe: Using a suffix _SK to identify potential surrogate keys is a variant of the Hungarian notation , with all its drawbacks Calling surrogate key DATE_SK that bears value such as 20220531 is misleading since it is in reality a natural key with a special encoding, so just the contrary of a surrogate key that should be completely unrelated to meaning to the data it refers to. Keeping several DATE s columns in a same table with different encodings to refer to the same date seems to be a denormalization, with the risk of inconsistencies. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/438945",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/373159/"
]
} |
439,396 | Given a class as follows: class IntList {
public:
IntList(std::vector<int> list) : m_list(list) {}
std::vector<int> list() { return m_list; }
std::string toString() {
std::string repr;
for (const auto integer: m_list) {
repr += "int:" + std::to_string(integer) + ";";
}
return repr;
}
private:
std::vector<int> m_list;
}; Now I have an unrelated function initializeIntegers() whose returned value I want to test. Is it acceptable during unit tests to use toString() to compare the expected values of the object? void testCase() {
const auto intList = initializeIntegers();
assert((intList.toString() == "int:0;int:11;"));
} It seems weird to me, I would rather compare the value returned by list() instead. However, I can't find any reference stating whether or not using the string representation is OK. Am I wrong if I suggest to not rely on string representation during unit tests? Do you know of any authoritative references explaining it? Note that both list() and toString() methods exist and are used regardless of the unit tests (they weren't added just to be able to test the class). | Let's say you check whether two values are equal. Your check can go wrong in two ways: The values are equal, but the string representation is not (this is unlikely). The values are different, but the string representation is the same (say the values are 0.048 and 0.052 and both are represented as the string "0.05"). The second misses a test failure, and worse, it misses a subtle test failure that you won't pick up otherwise. The first case is reasonably harmless - it's annoying because you have a test that fails but shouldn't so you fix the test. Where it gets really bad is if your test should compare three values a, b, c and you totally missed that your text representation only contains a and b. Now c is completely untested. So you need some very, very good unit tests for toString() first if you want to rely on it in a unit test. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/439396",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/192965/"
]
} |
439,397 | Context : we operate in a highly regulated industry (medical), and aim to have automated test cases to cover all of our requirements - allowing us to still release quickly, but safely. We have a requirement or acceptance criterion that reads something like: x object should be read-only for users Editing this object is not a piece of functionality that is available in our web application (or via an API) - this state can only be created by the backend (Kotlin) application itself, but it is important to do what we can to verify this (and ideally in an automated way). The problem : how do you test for the absence of some functionality? Our current thinking is similar to this answer, specifically: tests are just examples and not a proof Therefore all our tests are examples of a sort, and it's acceptable to have a slightly woolly test, for example checking the absence of an edit button. It's likely that if we weren't in a regulated industry we'd not put as much thought into it, and accept that you have to trust the design to some extent for this type of requirement. Some good thoughts from below (thanks for all the decent discussions): Code reviews/verification: yep, we do this API testing: testing that the resource returned is read-only is something we do Security testing: absolutely will do this | For a web application, this is actually easy. Presumably there is functionality to edit these objects (e.g. for maintainers or admins), and this functionality must somehow map to a system state change when certain requests are received from the web app. All you have to do is verify that when the server side receives such a request in the context of a user session, the state change doesn't occur. (If you also want to verify that the object doesn't look editable in the front-end, that is a pure front-end test like other GUI tests. But the more important thing, as always in web applications, is to ensure that users cannot effect a particular change, no matter whether they issue the request through the official web client or by circumventing it.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/439397",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/416582/"
]
} |
439,843 | One thing I've long struggled with being able to grasp properly is, when designing a program in an object-oriented language, where and how should explicitly named/defined interfaces be added? In particular, I have heard at least the following statements on the matter: The "SOLID principles", of which the last three (if not four!) at least are only applicable in the presence of interfaces, but it may be possible to interpret "D" as a stipulation of requirement of an interface. "Always 'program to an interface, not an implementation'" - while this doesn't necessarily require explicitly using a interface-class inheritance but can be understood as "program to a specification of functionality not a way of implementing such", it does seem that in practice it tends to result in explicit interfaces (though this may depend on language), and finally "Loose coupling" - this is perhaps the most overtly "interface-requiring" rule/stipulation that I can see, in that two things are loosely coupled precisely if that there is some intermediate abstraction between them so that changes in one are insulated from changes in the other. However, if one tries to follow all these principles scrupulously, then it seems one must inherently end up with a large amount of explicit interfaces in a program of any nontrivial size. Now that's not necessarily "right" or "wrong" by itself, but it then leads to the following observations that have chronically created doubt in my mind: Virtually nobody I've seen actually writes a program this way. That said, pretty much all code I've ever seen in my entire life has been that released for public dissemination so either open source projects or a minor section of formerly-proprietary projects. If the majority of code in the world is proprietary and this is where the most "well-designed" code can be found (which might make some sense, i.e. the best coders go to work for the best code houses which will naturally try to very closely and jealously guard their code to maximize profitability under existing business and political paradigms), then it may be natural I have never seen it; but It seems in many very natural cases within the code itself the presence of interface is self-defeating because you have to access the explicit class in the same place you use it anyways. For example, suppose that C++ std::vector inherited an IVector interface (note: something like this actually exists in the C#/.NET, which I've also used - there, System.Collections.Generic.List inherits System.Collections.Generic.IEnumerable<> ). Being rigorous, to keep coupling loose, I should ideally use IVector , for there might come some reason to create a second class implementing IVector (maybe I want a vector that's implemented differently, say that is actually a list, but has the same interface and I don't want to rewrite those code stretches). However, if a data member of a class I'm making is a vector (very common!), it has to be created somewhere, and it "feels silly" to always dependency inject a purely internal storage buffer that nobody on the outside necessarily needs to know explicitly, so then it's simplest to write my Whatsit 's constructor to simply construct an std::vector directly. But then the Whatsit class already "knows" now about std::vector , the concrete class, and so it seems kind of silly to be trying to keep treating it only as IVector when the "cat's already out of the bag", so to speak. 
One thing that I do know is an antipattern is where you always write a class and identical interface, e.g. every class is intentionally paired 1:1 in a doublet IWhatsit / Whatsit . However, that doesn't seem to really answer the questions in my mind, because while yes, sure, this may be the case, we have plenty of guidelines in the above 3 for how to design interfaces - in particular SOLID's O, I, and D principles do a good job (L is more for implementors). I don't make IWhatsit but instead analyze what my callers who receive Whatsit objects need, and then make interfaces accordingly. The problem is really the first 2 considerations. Again, none of these are really contradictions - they're just things that "don't feel right", "design smells" maybe. So how should one best think about these issues to develop a good, solid and consistent way of thinking and working with interfaces and other such explicit language abstractions? Esp. given that my chronic indecision and obsessive analysis of this topic has led many a project to never be completed due to constant rewriting/revision on trying to get the understanding of these principles right despite extensive reading of countless online pages, discussions, code bases, and even some books. (After all, while "anything consistent is good" may not necessarily always be true, it does seem much more likely to my mind at least that "anything inconsistent is bad" is much closer to universal applicability, i.e. switching between [the fictitious] IVector and std::vector just because one happens to "feel" more right that day than on a previous day, in the same project, when it "seems" like it could go either way.) | I don't think there's really a simple answer that can do a substantially better job than the statements of the principles and practices that you've already encountered. I know you're aware of this on some level, but I think what contributes to the confusion is that the word "interface" got a very specific connotation thanks to the popularity of Java, and later C#. There, "interface" denotes an "interface type", a language-specific notion declared via the interface keyword. But the ideas underlying these heuristics and principles are applicable across languages , and in fact, many of them are not fundamentally OO-specific (although some of them are often expressed in OO terms). An interface has another meaning that's more fundamental, and that predates the one stated above. An interface is the client-facing "API" of a component. A concrete class has an interface - it's the set of its public methods and properties. A function's "interface" is its signature (return type + parameters). A concrete or an abstract base class, by virtue of having an interface (as any other class does), defines a polymorphic top-level interface for its descendants. The purely abstract class (or the Java/C# interface ) is just a special case of that. A C# delegate (or a function-typed variable, or a function pointer) can be seen as a polymorphic interface for a family of functions. The aggregate root in DDD defines an interface for the whole aggregate. A module's interface comprises a number of classes and/or free functions and (parameter or return) types meant to work together. And so on. Interfaces are about defining how other code should interact with an object (or some other construct), and about separating that "contract" from the internals, with respect to client code.
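To make that broader sense concrete, here is a minimal C++ sketch (hedged - the names below are invented for illustration, not taken from any real codebase). Each of the three declarations presents an interface to its callers, even though only one of them would be called an "interface" in the Java/C# sense:
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// A concrete class: its interface is simply its set of public member functions.
class Logbook {
public:
    void add(std::string entry) { entries_.push_back(std::move(entry)); }
    std::size_t count() const { return entries_.size(); }
private:
    std::vector<std::string> entries_; // internal detail, not part of the interface
};

// A pure abstract class: an interface in the narrow, Java/C# sense.
class Clock {
public:
    virtual ~Clock() = default;
    virtual long long nowMillis() const = 0;
};

// A free function: its interface is its signature (parameters + return type).
double average(const std::vector<double>& values);
In each case there is a line between what callers may rely on and what stays an internal detail - exactly the "contract vs. internals" separation described above.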
The principles / heuristics you listed (SOLID, "program to an interface", loose coupling) don't use the term "interface" in the narrower Java/C# sense. These are not absolutes - there's a bit of an art to it "Always 'program to an interface, not an implementation" "Program to an interface, not an implementation" is largely attributed to the authors of the 1994 Design Patterns book ; the idea itself probably predates it. There they introduce it by discussing the benefit of programing against an interface (in the traditional sense of the word) defined by an abstract class. Note that the original statement doesn't contain "always" - it's not a command, it's advice. It exists in a larger context of design considerations, and you have to make a judgement-call regarding the extent to which you want to apply it. The "SOLID principles", of which the last three (if not four!) at least are only applicable in the presence of interfaces. I'd like to point out some things. The DIP does not actually use the word "interface". The term used is "abstraction". There's a reason for that: an interface is just one kind of abstraction - one that you'll commonly make use of, but not the only one. For example, consider the Strategy Pattern - you implement an overall algorithm that defines, dependency inversion–style, an abstraction (what's called a required interface) for specific strategies. Client code then either has to pick an existing strategy to inject, or to provide its own implementation of that interface, in order to make use of the algorithm. Now consider many of the LINQ methods in C# - let's take the Where as an example. Where defines a generic filtering algorithm and it requires a predicate lambda that provides an externally injected filtering strategy . Your code has to either pick an existing predicate or roll its own. You see how, in terms of the overall structure, it's exactly the same as the strategy pattern? Yet the abstract strategy interface here is not a traditional interface at all. This also provides an example of programming to an interface, and of the dependency inversion principle. Microsoft engineers that implemented this method had no way of knowing what kind of collection you'd be filtering, or how you'd want to filter it. Their code has to call your code, but cannot depend on it. Instead, both their code and your code depend on two key abstractions - one is the IEnumerable<T> , the other one is a Func<T, bool> and its associated predicate semantics. One could ostensibly argue it's also an example of the interface segregation principle. In a silly hypothetical scenario, you could imagine a generic filtering algorithm that required an IFilterableEnumerable<T> , where, to use the library, you'd have to wrap a collection into this "augmented enumerable" that, when iterated over, can tell the library code if the element should be kept or not. That would be so cumbersome to use, and much less flexible. Instead, the Where method takes as its arguments two separate things - it segregates its dependencies into two concepts with clearly defined roles and a narrower set of responsibilities (remember, it's an extension method for IEnumerable<T> - which is actually just a static method that takes an IEnumerable<T> as its first parameter). Note also that this segregation does not prevent you to use our hypothetical IFilterableEnumerable<T> for both parameters. Again, a silly example, but you'll encounter such overly bundled interfaces "in the wild". 
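The same shape can be sketched in C++ (a hedged illustration, not the actual LINQ or standard-library source): the generic filtering algorithm below depends only on two segregated abstractions - an iterator pair and a callable predicate - and knows nothing about the concrete container or the concrete filtering rule that callers will inject:
#include <iterator>
#include <vector>

// The algorithm owns the loop; the caller injects the rule (the "strategy").
template <typename InputIt, typename OutputIt, typename Predicate>
OutputIt copy_if_matching(InputIt first, InputIt last, OutputIt out, Predicate keep) {
    for (; first != last; ++first) {
        if (keep(*first)) {
            *out = *first;
            ++out;
        }
    }
    return out;
}

// Possible usage: the caller picks both the container and the predicate.
// std::vector<int> evens;
// copy_if_matching(numbers.begin(), numbers.end(),
//                  std::back_inserter(evens),
//                  [](int n) { return n % 2 == 0; });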
Similarly, many functions in the C++ <algorithm> library rely on a number of abstractions such as iterators, execution policies, and various other things like predicates and comparers. All of those define/provide interfaces in this broader sense. Now, what I've been talking about so far is all library code, but I think you can see how you can apply these same principles in your own code when you have a layered or a modularized ("componentized") architecture, where you want to control the coupling and the direction of dependencies. But wait, there's more In the same vein, LSP is not about Java/C# interface -s at all. It isn't fundamentally about interfaces in the broader sense either. And it isn't about inheritance per se (although that will often be the mechanism for subtyping). As originally stated by Barbara Liskov and Jeannette Wing , it's about types and their behavioral specification , and it defines what it means for something to be a subtype. In today's context, that something can be a derived class, or a lambda/function injected into some wider context, or a JavaScript object that doesn't inherit anything at all , but can still be used polymorphically because of duck-typing, or any such thing. Essentially, LSP states that something can be considered a subtype of some other type if it can be shown that it exactly adheres to the abstract behavioral specification associated with that other type. The abstract behavior does not need to have a preexisting implementation to be well-defined; it's not about what the code actually does line by line, it's about what a sensible implementation should look like in a given context. For example, suppose you write some library code, meant to be called by others, that makes use of the IComparer<T> interface via dependency injection. The key point is that not all of the semantics of this type are encoded in the interface itself; the languages we use are typically not expressive enough for that. This is why we write documentation, and things like unit tests (this is the sense in which unit tests are a runnable specification). Your code will expect any implementation of IComparer<T> that's passed to it to exhibit certain sensible behaviors, the exact nature of which will depend on what your code is actually meant to be used for. E.g., it might require that, if an implementation's Compare method indicates that a < b and b < c, it should also indicate that a < c, for any a, b and c. In fact, suppose your code relies on this, and some other such "sensibility constraints", and that you've described this in your library's documentation. This defines an abstract behavioral specification for an IComparer<T> with respect to your library code - even though IComparer<T> by itself has no implementation (and does nothing in the literal sense). If someone calls your library with an implementation that doesn't adhere to what you've specified, they've broken LSP with respect to your library (or with respect to that specification). Their code is not substitutable for an IComparer<T> in that context. The behavior of your code when given a non-compliant IComparer<T> implementation is undefined . If they choose to pass such an implementation to your code, it may produce unexpected results, or it may crash. Or it may work and do exactly what they wanted by chance - until you release the next version, and an entirely internal change breaks their code.
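A C++ analogue of that situation (hedged sketch - here the standard library plays the role of "your library"): std::sort documents that its comparer must implement a strict weak ordering. A comparer written with <= type-checks just fine, but violates that behavioral specification, so it is not a valid substitute, and the resulting behavior is undefined:
#include <algorithm>
#include <vector>

bool goodCompare(int a, int b) { return a < b; }  // strict weak ordering: fine
bool badCompare(int a, int b)  { return a <= b; } // not irreflexive: breaks the contract

void sortValues(std::vector<int>& values) {
    std::sort(values.begin(), values.end(), goodCompare); // well-defined
    // std::sort(values.begin(), values.end(), badCompare);
    // ^ compiles, yet violates the documented contract: undefined behavior that may
    //   appear to work today and break after a purely internal library change.
}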
Note also that nothing in this scenario inherently requires the comparer abstraction to be an interface - it could, say, be a concrete class that provides a default implementation, that you could inherit from, and override. The substitutability terminology fits better in this context. (Yes, this would be less flexible since C# only allows a single base class, but I'm talking in principle.) In fact, in some languages and circumstances, the abstract type, if simple, might not have a direct representation in the code at all - e.g., the singature of the compareFunction parameter used by JavaScript's Array.sort is only specified in the documentation, along with the associated semantics (whereas in TypeScript or C# you'd have an explicit type for the function). Where is all the well-designed code? "If [... proprietary code ...] is where the most "well-designed" code can be found" I doubt that. To me, it looks like the situation is a little different. It's not that superbly designed code is hidden behind closed source, it's that it really is not that common - because of a number of factors. One is that these principles are not understood by the overall coding community as well as you might think. This is hard stuff , deeper and more involved than it looks on the surface. This state of affairs is perhaps not that surprising because of the boom that the software engineering industry has experienced. The huge influx of new people meant that at any point in time the percentage of really experienced people was tiny - and that it was hard to proliferate knowledge with the appropriate amount of depth (as this requires both reach and time). It also means that we keep reinventing the wheel. If you try to find information online, there's a lot of confusion and inconsistencies, and misconceptions that you have to wade through to get to the good bits. The other one is just the practicalities and pressures of everyday business. When you need to ship the product, and you don't really see a path towards a more elegant design, and the deadline is looming - you ship the product. What should be done instead is a broad and difficult topic that has been and will continue to be the subject of many discussions and differing opinions. And, ultimately, even the best programmers are just mortal humans - they are not going to produce stellar code all the time and in every circumstance. And not all projects require the same level of design (e.g. a one-off tool that you'll never update again is not going to benefit from an elaborate layered architecture). It comes down to developing a deeper understanding "However, if a data member of a class I'm making is a vector (very common!), it has to be created somewhere, and it "feels silly" to always dependency inject a purely internal storage buffer that nobody on the outside necessarily needs to know explicitly, so then it's simplest to write my Whatsit 's constructor to simply construct an std::vector directly. But then the Whatsit class already "knows" now about std::vector , the concrete class, and so it seems kind of silly to be trying to keep treating it only as IVector when the "cat's already out of the bag", so to speak." You're absolutely right. You would not always and systematically dependency-inject things just for the sake of it. This is why you have to know (or decide, or design ) what your class is for . What its contract is - what's client facing, and what's internal. 
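In code, that decision might look like this (a hedged sketch reusing the question's Whatsit name, with invented members): the buffer that exists purely as an implementation detail is created internally, while the genuinely reusable piece of functionality is written against iterators rather than a concrete container:
#include <numeric>
#include <vector>

class Whatsit {
public:
    void record(double sample) { samples_.push_back(sample); }
    double total() const { return std::accumulate(samples_.begin(), samples_.end(), 0.0); }
private:
    std::vector<double> samples_; // internal storage: no reason to inject or abstract it
};

// Reusable across containers: the "interface" here is just an iterator pair.
template <typename InputIt>
double sumOf(InputIt first, InputIt last) {
    return std::accumulate(first, last, 0.0);
}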
If the std::vector is something that's purely there to support the internal details of your implementation, then by all means, create it internally. If your class (or a method, or a free function) provides some higher level functionality that needs to be reusable with different kinds of collections, then inject a pair of iterators. Heck, you can have a scenario where you inject the iterators, but also create an std::vector internally to maintain a local copy of some range, or to use as a temp storage, or whatever. One thing that I do know is an antipattern is where you always write a class and identical interface, e.g. every class is intentionally paired 1:1 in a doublet IWhatsit / Whatsit . Yes - and, notice, there's a theme here: if you systematically follow a practice without understanding the reasoning behind it, you end up creating spaghetti code in a systematic way - with a very consistent look to it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/439843",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/219268/"
]
} |
439,878 | When we have a math problem such as 3 + 5 + 2 , we say that it is associative. We can choose which step to pick first: 3 + (5 + 2) ; we know that brackets affect the order in which the operations are performed. I've learned that function composition is a binary operation and it is also associative . They say, function composition is the scenario where an output of one function is used as an input of another; like method chaining. The problem is that I am struggling to imagine how to combine two functions together. I've seen online a function called combine that takes two functions as arguments and then returns the third function that just calls these two functions one after another; but that doesn't affect anything at all. It is just an alias, like if it were a + b + c and became a + d, where d = b + c . It doesn't affect anything really. I am not sure what should be even affected in here; obviously it's not the order in which the functions are executed , since the execution itself is not a binary operation. So what is the binary operation in function composition then? What's the difference between a scenario when we compose two functions together and when we don't? | I think "functional composition" tends to be a bit confusing. By "compose" what we mean is piping the output of one function to the input of another. Most modern programming languages have some facility for evaluating expressions, and we are accustomed to seeing composition occur in the form of Sqrt(Add(2, 2)) , where the output of 'Add' forms the input for 'Sqrt'. What's notable about this familiar form of composition is that the operands which form the ultimate input (in this case, a pair of '2's) must also be specified at the same time as the composition. You can use variables in place of literals, but you still have to provide something for the operands, as part of specifying the composition. However, in functional languages, the composition operator allows these two functions to be composed without specifying anything for the operands. The evaluation of AddAndSqrt = (Add ∘ Sqrt) gets the function pointers for both 'Add' and 'Sqrt' (so that these functions are not called in this expression, but instead their addresses are evaluated as function pointers, and then these are provided as operands to the composition operator), and returns a new function pointer, which takes two operands (effectively, the inputs to the 'Add' stage), and when called like so AddAndSqrt(2, 2) , outputs the same result as would Sqrt(Add(2, 2)) . Behind the scenes, the output of the 'Add' stage is arranged so as to be piped to the input of the 'Sqrt' stage. That is what the composition operator does. Now, composition is an associative operator simply because in the expression C(B(A(2, 2))) it doesn't matter whether you pipe A to B (yielding AB) then pipe AB to C (yielding ABC), or pipe B to C (yielding BC) then pipe A to BC (yielding ABC). Or to put it another way, it doesn't matter if you write: Result1 = B(A(2, 2))
Result2 = C(Result1)
OR
Result1 = A(2, 2)
Result2 = C(B(Result1)) In both cases, the chain of calls you end up with is equivalent to C(B(A(2, 2))) . That's all it means for the composition operator to be associative. All "operators" in mathematics have a set of "properties" - like associativity - that concern their behaviour under algebraic rearrangement. That is, concerning whether different kinds of rearrangement within an expression cause the result to change, or whether the result stays the same despite the rearrangement. Has that answered the question? Edit: a number of commentators have pointed out that the standard convention when using the function composition operator ∘ is that the first-applied argument goes on the right. So that the equivalent of C(B(A(x,y))) would be (C ∘ B ∘ A)(x, y) in typical functional languages, and certainly so in general mathematics. However I think that many programmers would readily prefer the idea that the sequencing of operations proceeds in English order left-to-right, so I'm going to leave the main body of the answer as it is. I was also pleased to find that in F#, composition can be done left to right in accordance with my preference, although using a different symbol for the composition operator ( >> ): https://fsharpforfunandprofit.com/posts/function-composition/ So that C(B(A(x,y))) would become (A >> B >> C)(x, y) . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/439878",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/408555/"
]
} |
440,257 | At the moment I'm spending more time planning out a commit than actually writing code when adding a new feature. Less than two hours would be lucky, and sometimes I'd spend a good part of the day without writing any code. This is making me unhappy, since I don't feel I'm productive enough (I'm living with my parents, and have never been employed as a programmer). If I don't do this amount of planning, I just end up writing code that will have to be undone before I commit, and this just messes up my project, because I don't like wasting any code I've already written and try to recycle it as much as possible (my precious). Someone said that programming isn't about how fast you can type; it's about how fast you can think. I'm not very good at thinking fast. I think I'm overly cautious making my productivity not economically viable, but even still it’s far too easy for me to waste a whole lot of time making a mess of my codebase. Travis asked Can you explain what you mean by "planning out a commit"? I guess there's the time spent architecting, i.e., planning out the object hierarchy, which thread will do the work, GPU or CPU, planning polymorphism for m to n relationships, and which asynchronous pattern should I use. Then there are implementation details and parameter choice for scientific computations. I like to think about how I could iterate, so if I get a bad result it is rectifiable. Breaking down a feature into a series of behaviours, which you can verify correctness at each step. I suppose I think about how to verify correctness of intermediate steps a lot before I've even started. Why does the code you write have to be undone before you commit? Well, it's easy to write code that's unmaintainable, and difficult to debug. So I have to backtrack and write something more structured. I also sometimes overlook some detail that makes my first attempt not viable. "Planning commits" is just what I came up with to communicate when you've finished one feature and are moving on to the next (obviously committing your changes first). You've got no Git changes and haven't yet written any code committing you down one path. There's one big commit that gets the scaffold in place and need lots of planning, and followed by a couple of smaller ones that don't need any planning. So maybe the commits in question are more like new branches. (It's just that I don't use Git branches.) | Firstly: when coding for a living, especially as a junior in a team, typically not much design work is needed. This is because you'll be working in an existing code base. Chances are, you'll often be working on a feature that is similar to existing features, so you can look at those as an example. This is nowhere near as boring as it may sound; you'll be learning plenty of things, still need to understand the examples and adapt them to suit your needs. Designing something new can indeed be much harder, but it is also much rarer, especially when you're a junior developer. That is to say: in a typical junior-level development job, I don't think you'll run into the things you're worried about. Or at least to a far smaller extent. If I don't do this amount of planning, I just end up writing code that will have to be undone I would argue that for most developers, this is fairly normal, if they are developing something that is novel (to them and their existing code base). 
I do that planning sometimes, and when I start writing the code, I often realise that the plan translates to an awkward implementation that feels forced or over-engineered. ... which is good to know! Coding provides feedback on the plan. Planning and coding are very much iterative processes. You plan a bit, try to create some code with the planning in mind, which makes you realise that you overlooked something during planning, so you adjust the plan, code some more, rinse, repeat. I like to think that this is also how artists work. It's a creative journey. Often messy, sometimes boring, sometimes exhilarating, sometimes frustrating. Sometimes you end up with something boring that just works, sometimes with something beautiful that doesn't work. And, every now and then, with something that works and is beautifully elegant. [I] don't like wasting any code I've already written and try to recycle it as much as possible (my precious). Throwing code away is fine! The code has already served a useful purpose: it provided feedback on the planning. It helped you iterate. Someone said that programming isn't about how fast you can type, it's about how fast you can think. I'm not very good at thinking fast. Neither are most people, especially if they have to do the thinking without seeing any code. Coding helps to make things concrete. It may also obfuscate the bigger picture, so zooming in (code) and out (planning) is part of the iterative process. It's also worth noting that people, after years of professional experience, develop a kind of muscle memory for specific approaches, and an instinct for applying these. Which is half the battle when making something new. You cannot be expected to have that already (nor should you expect it from yourself). Put differently, the examples in existing code bases that I mentioned at the top of this answer are in their heads now, and they can apply them in new projects as well. I think I'm overly cautious making my productivity not economically viable, but even still its far too easy for me to waste a whole lot of time making a mess of my codebase. I think you're being too hard on yourself (I can relate). Try to change your mindset to allow the iterative process, to embrace the creative journey. And code that just works (and is readable) is good enough. If later on, new requirements or new features means that the code is no longer good enough, you can adjust it. That adjustability is the reason the world moved away from specialised hardware and embraced software (running on generic hardware). Edit: Also, small steps are good, and focussing on making it work first, and only then making it right is also good. If you already have working code, the planning gets easier because there are fewer unknowns an hypotheticals. Relevant quote is relevant: “First make it work, then make it right…” — Kent Beck …in the smallest steps you can manage. — Uncle Bob Martin ( source ) As always, everything is more nuanced in practice, and I don't completely let go of design when focussing on making it work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440257",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/366664/"
]
} |
440,346 | I have a program that runs on command-line, let's call it myprogram 1.0.1. It's published on GitHub. Now I discovered that name already exist for a well-know software, so I want to change the name from myprogram to myprog . This, of course, will break the old usage of command since the user now must type myprog and not myprogram anymore. The code remains the same. Any suggestion? | I think myprogram needs to release 1.1.0 which supports the myprog alias. If the user invokes myprogram then it should present a notice/warning to the programmer that this name will be deprecated in the next major version release. Upon release of myprog 2.0.0, myprogram should no longer work. The release of 2.0.0 could be nothing more than a name change. This will help to make the transition easier for developers since they have to worry about just a single compatibility-breaking change. An alternative route is to fork myprogram into myprog and issue an abandonment notice like PHPExcel did; https://github.com/PHPOffice/PHPExcel Whether or not your software rename constitutes a bump down to 1.0.0 instead of 2.0.0 is not a choice I am familiar with. Regardless, I don't think versioning is going to be the big stumbling block but rather the name change itself. It sounds like a headache especially if people come across old tutorials for myprogram and are not aware of the name change. Aliasing example in PHP: <?php
class myprogram
{
function __construct()
{
trigger_error( 'myprogram is being renamed to myprog in v2.0.0. Please consider switching to myprog today.', E_USER_NOTICE );
}
}
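// The new name: identical behaviour, but without the deprecation notice fired by the parent constructor.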
class myprog extends myprogram
{
function __construct()
{
// empty to avoid calling myprogram's constructor
}
}
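// Instantiating via the old name still works, but emits the deprecation notice.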
$myprogram = new myprogram(); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440346",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/281704/"
]
} |
440,430 | I was reading a bit about garbage collectors and I am wondering if the garbage collector of a program scans the entire heap memory or what is allocated to it? If it reads the entire system memory, does it mean it is reading memory locations that are used by other applications? I understand that this does not make much sense security or performance wise. If garbage collector only reads the memory that is allocated to it, how does it mark those areas? Sorry for the rookie question, I am not a software engineer and this is pure out of my curiosity | I was reading a bit about garbage collectors and I am wondering if the garbage collector of a program scans the entire heap memory or what is allocated to it? That depends on the garbage collector. There are many different kinds of garbage collectors. For example, Reference Counting Garbage Collectors don't "scan" anything at all! In a Reference Counting Garbage Collector, the system counts references to objects, something like this: SomeObject foo = new SomeObject(); Let's say, this new object was allocated at memory address 42. The GC records "there is 1 reference to the object at address 42". SomeObject bar = foo; Now, the GC records "there are 2 references to the object at address 42". foo = null; Now, the GC records "there is 1 reference to the object at address 42". bar = null; Now, the GC says "there are 0 references to the object at address 42, therefore, I can collect it". At no point did the GC "scan" anything. What you are probably thinking about is an extremely simplistic implementation of a so-called "Tracing Garbage Collector", namely the Mark-and-Sweep GC. Any Tracing GC starts off with a set of objects that they know are always reachable. This is called the root set . The root set typically includes all global variables, the local variables, CPU registers, the stack, and some other stuff. For all of these objects, the GC looks at the instance variables and checks the objects that the instance variables point to. Then it checks those objects' instance variables, and so on and so forth. This way, the GC "sees" all "live" objects, i.e. the objects that are reachable from the root set. What the GC does with those "live" objects depends on the kind of GC. As I mentioned above, what you are thinking of is the most simplistic kind of Tracing GC, which is the Mark-and-Sweep GC. During the tracing phase I described above, the GC will "mark" all live objects by either setting a flag directly in the object header itself, or in a separate data structure. Then, the GC will indeed "scan" the entire memory and find all objects and do one of two things: If the object is marked, remove the mark. If the object is unmarked, de-allocate the object. After this "sweep" phase, you end up with all unreachable objects destroyed and all live objects unmarked, ready for the next "mark" phase. But, as I mentioned, this is only one of many different kinds of Tracing GCs, and is a very simple one with many disadvantages. The two major disadvantages are that scanning the entire memory is expensive and leaving the live objects where they are and only collecting the dead objects in between leads to memory fragmentation. Another very simple but much faster Tracing GC is Henry Baker's Semi-Space GC . The Semi-Space GC "wastes" half of the available memory, but gains a lot of performance for it. The way the Semi-Space GC works is that it divides the available memory into two halves, let's call them A and B. 
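Before going on, here is a minimal C++ sketch of just the bookkeeping side of that idea (hedged and heavily simplified - the tracing and copying of live objects during a collection, which is the interesting part, is deliberately elided):
#include <cstddef>
#include <vector>

// Two equally sized halves; only the active one is used for new allocations.
struct SemiSpaceHeap {
    std::vector<std::byte> halfA, halfB;
    std::vector<std::byte>* active = &halfA;
    std::size_t used = 0; // bump-pointer position in the active half

    explicit SemiSpaceHeap(std::size_t halfSize) : halfA(halfSize), halfB(halfSize) {}

    void* allocate(std::size_t size) {
        if (used + size > active->size()) return nullptr; // would trigger a collection
        void* p = active->data() + used;
        used += size;
        return p;
    }

    void flip() { // during a collection, live objects are copied over, then the halves swap
        active = (active == &halfA) ? &halfB : &halfA;
        used = 0;
    }
};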
Only one of the two halves is active at any one time, meaning new objects only get allocated in one of the two halves. We start out with half A: The GC "traces" the "live" objects just as described above, but instead of "marking" them, it copies them to half B . That way, once the "tracing" phase is done, there is no need to scan the entire memory anymore. We know that all live objects are in half B, so we can simply forget about half A completely. From now on, all new objects are allocated in half B. Until the next garbage collection cycle, when all live objects are copied to half A, and we forget about half B. These are just two examples of Tracing GCs, and only one of those two scans the entire memory. If it reads the entire system memory, does it mean it is reading memory locations that are used by other applications? I understand that this does not make much sense security or performance wise. This is simply impossible. No modern Operating System allows a process to read another process's memory. (And when I say "modern", I mean "since the 1960s or so".) But even if it were possible, it would not make sense. If the memory belongs to another process, then the GC has no idea what the objects that are in this memory even look like. But it needs to know what the objects look like in order to find all the instance variables and to know how to interpret those references. If it uses an internal marker flag inside the object itself, it also needs to know how to find that marker flag and how to set it. And that is assuming that the marker flag is even there at all! What happens if the application that owns that memory doesn't use marker flags? Or, worse: what happens if the application that owns that memory does use a GC which uses marker flags. Now, the one GC is overwriting the other GC's markers! If garbage collector only reads the memory that is allocated to it, how does it mark those areas? There are two popular approaches. The first approach is that there is flag in the object header of each object reserved for marking. During the "mark" phase, the GC sets this flag. The major advantage of this approach is that there is no separate bookkeeping involved and it is thus very simple: the mark is right there on the object itself. The major disadvantage is that objects are scattered all through the memory, and thus during the marking phase, the GC writes all over the entire memory. This means that there are "dirty" pages all over memory, in a multiprocessor system (which almost all systems are nowadays) this means that we have to notify the other CPU cores that we have modified some memory, we have polluted the cache with tons of writes that we will never need again, and so on. The alternative is to keep a separate data structure where we keep a table of all marked objects. This has the disadvantage of more bookkeeping (we need to keep a relationship between the mark table and the objects) but it has the major advantage that we are only writing to one place in memory, which means we can keep this one piece of data in the cache all the time. But again, not all GCs even have a concept of "marking" at all. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440430",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/384551/"
]
} |
440,591 | I have seen many open source projects being labelled as "not production ready" because they have not reached a major version e.g. 1.0.0 using semver. What is the significance of reaching this milestone? Is there a criteria that must be met for a piece of software to be considered a major version? Or is it arbitrarily decided by the authors of the software? | There is a special difference between 0.0.0 and 1.0.0. Let's dig into the Semantic Versioning standards. The following rules label these numbers as x.y.z: When x is 0 Chaos rules. Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable. Semantic Versioning 2.0.0 When x is greater than 0 Things start meaning things. Version 1.0.0 defines the public API. The way in which the version number is incremented after this release is dependent on this public API and how it changes. Patch version Z (x.y.Z | x > 0) MUST be incremented if only backwards compatible bug fixes are introduced. A bug fix is defined as an internal change that fixes incorrect behavior. Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated. It MAY be incremented if substantial new functionality or improvements are introduced within the private code. It MAY include patch level changes. Patch version MUST be reset to 0 when minor version is incremented. Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API. It MAY also include minor and patch level changes. Patch and minor versions MUST be reset to 0 when major version is incremented. Semantic Versioning 2.0.0 So yes there is a magical difference between x going from 0 to 1 vs going from 1 to 2. People are funny about zero. Of course many projects that still use 0 as their major are stable. The point is you weren't promised that stability by this version number. Not everything with a version number uses semantic versioning and may not follow these rules. Read the documentation before making assumptions. Examples: Web 2.0, Super Bowl XXX, Star Wars Episode V. And just to round out this “gigantic landscape” I give you: Mostly Harmless "The fifth book in the increasingly inaccurately named Hitchhikers Trilogy". Sorry if I gave the impression that semantic versioning was universal. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440591",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/245057/"
]
} |
440,736 | I have an inventory of products stored in Postgres. I need to be able to take a CSV file and get a list of changes—the things in the CSV file that are different to what is in the database. The CSV file has about 1.6 million rows. The naive approach is to simply take each row, retrieve that product from the database using the key field, make the comparison, emit the changes (including updating the database), then move on to the next row. However, that many round trips causes the whole process to take a long time (upwards of two minutes). I've tried locally caching the inventory in an off-heap map (using MapDB), which improved the performance a lot, since I only needed to hit the database to write changed data, but I didn't figure out a way to make that scale. There will be many inventories for different customers. Perhaps some kind of sharding approach would be needed, but then I have to deal with nodes going on- and offline. Maybe Akka Cluster could help here too. Are there some good approaches that I'm overlooking? | Since the roundtrip seems to be the issue, you could: either opt for a local solution, with the scaling issue you mentioned (you could still try to split the task across several local nodes, each responsible for a subrange of the index space), or opt for the db solution, bulk uploading your csv into a temporary table, and let the db server work very efficiently on (indexed) tables. The benefit of this approach is that you'd reach the scalability of the db itself. You could fine-tune the approach for any distribution scheme that would already be in place, if it's already a distributed database. Some more thoughts: if you have many columns/fields to compare, you may consider adding a hash code on each row, in the csv as well as in the db (updated at every row change). The hash code would be calculated using the fields that are relevant for the comparison. Finding the diff is then reduced to finding the new rows and the existing rows with a difference on the hash. Ultimately, it would be more efficient to handle the problem at the source, i.e. intercepting the events that would cause the csv to change, or using some kind of timestamp of the last change. But OK, this is not always possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440736",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54726/"
]
} |
440,816 | C++ features deterministic order of calling destructors up the call stack until the exception is handled somewhere. To my knowledge, neither Java nor (object oriented) Python provide this. In those languages, how is the following situation handled idiomatically: run the program for some time, allocating resources (memory, sockets, files) in the constructors (we're doing OOP, right?) throw in some function/method handle the exception some levels up the call stack continue normal operation fully recovering from the error? We've lost the handles to those resources. They are effective leaked now, no? | Both of the programming languages you mention (as well as many other programming languages) provide Automatic Memory Management . What this means is that the programming language is responsible for allocating and de-allocating memory, managing free memory, and so on. So, that solves the problem for the first kind of resource you mentioned: memory. Before you run out of memory, the programming language will de-allocate some unreachable objects (assuming there are any), thus freeing memory again. For other kinds of resources, there are essentially three different strategies which are employed, and in fact, many programming languages employ at least two of them. The first strategy is library-based and relies on a programming language feature typically called finalizers or destructors . Finalizers are a piece of code that gets executed when an object is de-allocated. Usually, programming languages with automatic memory management will not allow you to call the OS kernel directly; rather, there will be some sort of proxy object which wraps and represents resources, such as IO objects representing file descriptors, Socket objects representing network sockets, and so on. The library developers will make sure that any object representing a resource will have a finalizer which releases that resource. Therefore, whenever an object representing a resource gets de-allocated, the corresponding resource gets released. The main problem with this strategy is that most programming languages with automatic memory management do not make any guarantees about when memory will be de-allocated or even if it will be de-allocated at all. Usually, it is more efficient to "waste" a bit of memory and batch the de-allocation operations together at a point where the system is otherwise idle. Therefore, on a system with a lot of memory but only a small number of file descriptors, for example, it would be possible that you run out of file descriptors before you run out of memory (which would trigger a de-allocation which would trigger execution of the finalizers which would then release file descriptors). For that reason, this strategy is typically only employed as a fallback and one of the two other strategies below is also used. However, there are some programming languages where memory is guaranteed to be de-allocated as soon as it is no longer used, e.g. Swift. The second strategy is also library-based, and is to provide helper methods that make it easy to write code that correctly handles the situation described in your question. Typically, these helper methods require programming language support for first-class subroutines and higher-order subroutines , i.e. subroutines that can be passed as arguments and subroutines that can take subroutines as arguments. For example, in Ruby, there is the IO::open method, whose implementation looks a little bit like this (massively simplified): class IO
def self.open(file_descriptor)
file = new(file_descriptor)
yield file # call the supplied block with `file` as argument
ensure # regardless of whether or not an exception was raised
file.close # close the file descriptor
end
end And you would use it like this: IO.open(some_file_descriptor) do |f|
f.puts("Hello")
something_which_might_raise_an_exception
f.puts("World")
end Regardless of whether the IO::open method was exited because the block completed normally or because something in the block raised an exception, the ensure part of the method will be executed and thus the file descriptor will be closed. You could do the same in Python or in Java: class IO {
public static void open(int fileDescriptor, Consumer<IO> action) {
try {
var file = new IO(fileDescriptor);
action(file);
} finally {
file.close();
}
}
} And you would use it like this: IO.open(
someFileDescriptor, // first argument: the file descriptor to wrap
    f -> {
f.println("Hello");
somethingWhichMightThrowAnException();
f.println("World");
}
); However, the Python and Java designers decided not to include such helper methods in the standard library. The third strategy is to add specialized language features that essentially do the same as the above. Python has the with statement which works together with the Context Manager protocol , Java has the try -with-resources statement which works together with the AutoCloseable interface , and C# has the using statement which works together with the IDisposable interface and the IAsyncDisposable interface . Using these looks a bit like this: with open("hello.txt", "w") as f:
f.write("Hello")
something_which_might_raise_an_exception()
f.write("World") Both of these latter strategies have the problem that there is nothing which forces the programmer to use the feature. For example, in Ruby, there is a second overload of IO::open which does not take a block but instead returns an IO object wrapping an open file descriptor. There is nothing stopping me from never calling close on that object. If and when it gets automatically de-allocated, its finalizer will release the file descriptor, but until then, the file descriptor is effectively leaked. However, that is no different in C++: If I write my own File class and don't call close in the destructor, there's nothing in the language which stops me. A completely different approach can be taken in programming languages with a powerful and expressive type system. In such languages, it is possible to express the lifetime rules of resources inside the type system and thus ensure that code which can leak resources gets rejected by the type checker. I believe Idris employs this strategy, for example. In some languages, there is a separate Effect System aside from the type system. This can also be used to manage resources. Last but not least, there are languages like Smalltalk and Common Lisp, where exceptions are resumable , i.e. they do not unwind the stack in the first place. You can fix the problem and continue at the place where the exception occurred. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440816",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54268/"
]
} |