Prototype Design Pattern, Prototype Design, Flutter, Flutter App Development, Dark Design Patterns.

Creating the Student prototype class (the fields match those copied in the copy constructor):

public class Student {
    String name;
    int age;
    double psp;
    String batchName;
    double averageBatchPsp;

    Student() { }

    // Copy constructor: copies every field from the given student
    Student(Student student) {
        this.batchName = student.batchName;
        this.averageBatchPsp = student.averageBatchPsp;
        this.name = student.name;
        this.age = student.age;
        this.psp = student.psp;
    }

    // Getters and setters
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public double getPsp() { return psp; }
    public double getAverageBatchPsp() { return averageBatchPsp; }
    public void setAverageBatchPsp(double averageBatchPsp) { this.averageBatchPsp = averageBatchPsp; }
    public String getBatchName() { return batchName; }
    public void setBatchName(String batchName) { this.batchName = batchName; }

    @Override
    public Student clone() {
        return new Student(this);
    }
}

Creating a student registry class:

import java.util.HashMap;
import java.util.Map;

public class StudentRegistry {
    Map<String, Student> map = new HashMap<>();

    Student get(String key) {
        return map.get(key);
    }

    void register(String key, Student st) {
        map.put(key, st);
    }
}

Now let's create another file for an intelligent student. It extends Student and overrides clone() so that copies preserve the subtype:

package prototype;

public class InteligentStudent extends Student {
    int IQ;

    InteligentStudent() { }

    InteligentStudent(InteligentStudent student) {
        super(student);
        this.IQ = student.IQ;
    }

    @Override
    public InteligentStudent clone() {
        return new InteligentStudent(this);
    }
}
Let's create the final Client file:

public class Client {
    public static void fillRegistry(StudentRegistry registry) {
        Student julyBatch = new Student();
        julyBatch.setBatchName("july22");
        julyBatch.setAverageBatchPsp(90);
        registry.register("july22", julyBatch);

        Student augBatch = new Student();
        augBatch.setBatchName("aug22");
        augBatch.setAverageBatchPsp(98);
        registry.register("aug22", augBatch);

        InteligentStudent septBatch = new InteligentStudent();
        septBatch.setBatchName("sept22");
        septBatch.IQ = 100;
        registry.register("sept22", septBatch);
    }

    public static void main(String[] args) {
        StudentRegistry registry = new StudentRegistry();
        fillRegistry(registry);

        Student saketh = registry.get("july22").clone();
        saketh.setName("saketh");
        saketh.setAge(30);

        Student suyash = registry.get("july22").clone();
        suyash.setName("suyash");
        suyash.setAge(30);

        Student swraj = registry.get("aug22").clone();
        swraj.setName("swraj");
        swraj.setAge(30);

        Student sneha = registry.get("sept22").clone();
        sneha.setName("sneha");
        sneha.setAge(30);

        System.out.println("Debug");
    }
}

Classes of the Prototype Design

1. Prototype: the actual object's prototype, responsible for cloning itself.
2. Prototype registry: a registry service that keeps all prototypes accessible through simple string keys.
3. Client: clients use the registry service to access prototype instances.

The examples above are in Java. Now let's jump to Dart.
How will we use this in our Flutter framework 🙂?

Case 1: Cloning mutable objects

We will create a copy of a mutable object that is not able to clone itself in the Dart language:

class Point {
  int x;
  int y;
  Point([this.x, this.y]);
}

final p1 = Point(5, 8);
final p2 = Point(p1.x, p1.y);
final p3 = Point()
  ..x = p1.x
  ..y = p1.y;

The Point class has two public fields, x and y, both mutable. With such a small, simple class, it's trivial to produce copies of p1, either with the class's constructor or by setting the properties on an uninitialized new object with Dart's cascade operator (..). Still, this is not recommended. But why? 🧐

- Our app code is now tightly coupled to the Point class: we must know the inner workings of Point to produce a copy.
- Any change to Point may force changes in many places, which is an error-prone scenario.

The Prototype pattern dictates that objects should be responsible for their own cloning, like so:

class Point {
  int x;
  int y;
  Point([this.x, this.y]);

  Point clone() => Point(x, y);
}

final p1 = Point(5, 8);
final p2 = p1.clone();
This code is much cleaner, and now the app code won't need to change even if Point gets new or different properties in the future, as clone() will always return a new instance of Point with the same values.

Case 2: Cloning immutable objects

class Point {
  final int x;
  final int y;
  const Point(this.x, this.y);

  Point clone() => Point(x, y);
}

final p1 = Point(5, 8);
final p2 = p1.clone();

In the above code, the constructor parameters are not optional, and the class's member variables can't be updated once initialized. This does not affect our ability to make clones. However, this class has no good way to modify just one of the two properties. Adding a copyWith() method gives us more flexibility with immutable objects:

class Point {
  final int x;
  final int y;
  const Point(this.x, this.y);

  Point copyWith({int x, int y}) {
    return Point(
      x ?? this.x,
      y ?? this.y,
    );
  }

  Point clone() => copyWith(x: x, y: y);
}

final p1 = Point(5, 8);
final p2 = p1.clone();

Here, the copyWith() method allows you to create a new Point from an existing one, overriding only the fields you pass (for example, p1.copyWith(x: 10) changes x but keeps y). The clone() method can use it to produce a full object copy, sparing us from defining a separate cloning process.
Conclusion

The Prototype pattern is used in the Flutter framework itself, particularly when manipulating themes and sessions. The basic summary is that an object is in the best position to produce its own clones, since it has full access to all of its properties and internal workings. The pattern also keeps coupling loose. So this is all about the Prototype Registry pattern.

Clap 👏 if this article helped you. See you all again soon!

References
- Prototype: a creational design pattern that lets you copy existing objects without making your code dependent on them (refactoring.guru)
- GitHub - Anonymousgaurav/design_patterns at
Data Science, Causality, Statistics, Advertising, Python.

Adam Kelleher and Amit Sharma

How long should an online article title be? There's a blog here citing an old post from 2013 which shows a nice plot of average click-through rate (CTR) against title length. Looking at this plot, we might suggest a policy intervention: all titles should be 16–18 words long!

But wait, there's a nuance here. Was this data from observation or from a randomized experiment? Did someone randomly adjust the lengths of titles, or did whoever prefers to write 16–18 word titles simply write 16–18 word titles? Maybe those authors also happen to be good at writing articles? As it turns out, this plot is based on observational data. If so, then we really should control for the author when we make a plot like this, because all of the observed effect could be due to the authors' skill and not the article's title length. How do we account for that?
To start, note that the above is a plot of E[Y|X=x], the average value of Y at x, where Y is the CTR and X is the title length. What we really want is an estimator for E[Y|do(X=x)], where do(X=x) refers to the action of actively changing X to a specific value. That is, we generate a similar plot, but where we intervene to make all authors write titles of length x (and do the same hypothetical intervention at each x). With the same authors, any trend we see in the plot then has to be due to the length of the articles' titles, and thus is a causal trend.

Our question, therefore, lies in the realm of causal inference. However, most techniques for causal inference [1] are designed to estimate more specific quantities, such as the effect of changing the title to be one word longer, E[Y|do(X=1)] − E[Y|do(X=0)]. Why can't we just make a causal version of the plot?! Wouldn't it be great if we could generate the same data we used for this plot from our observational data, but make it causal? With modern causal inference approaches, we can! This post is about a toolkit that makes it easy to draw causal plots.

There's a reason why we do not see such causal plots in typical data science posts: causal inference is hard! First, it requires immense statistical knowledge and expertise to estimate even a single causal effect, let alone a causal version of a plot like the one above. Second, the software for estimating causal effects typically requires specialized frameworks that do not integrate well with common data science practices.
For the first, we utilize DoWhy, a recently released Python library for causal inference that abstracts causal inference into four steps and guides non-experts toward deriving the desired causal estimate. For the second, we introduce a new API for causal inference that integrates directly with pandas.DataFrame, one of the most popular tools for data analysis. Our motivation is that you shouldn't have to move outside of your standard data science workflow to do causal inference. Instead of calling df.plot(x="X", y="Y"), you can call df.causal.do(...).plot(x="X", y="Y")!
We call this the causal data frame. We'll run through a quick example, but first let us give a little more context on the problem of causal inference.

The promise of Pearlian causal inference

Pearlian causal graphs [2] have fundamentally changed the way we frame causal inference problems, and have lately been changing the process of causal inference itself. Especially when it comes to software, they give an essential simplicity that lends itself to good abstraction. DoWhy, a Python package authored by Amit Sharma and Emre Kıcıman from Microsoft, aims to realize that potential. It's built with causal models as a fundamental data structure. It uses these to explicitly specify our assumptions ("what we know"), build an inference plan ("what to estimate"), and construct estimation methods ("how to estimate"). Afterwards, the framework provides natural methods for checking the robustness of the estimated effect and for model criticism.

Thus, DoWhy provides a great environment in which to take causal inference a step further. All of the hard parts of causal inference are abstracted away and act as tools with which we can build higher-level features. This includes the concept of a causal graph, on which we can use logical rules [3] to decide whether we've controlled for all confounders, and refuter methods which test the assumptions of our inference approach. With these in hand, it's easy to alert the user when assumptions break down, using Python's logging capabilities, and to prevent mistakes in estimating the causal effect.
Back to our example: we can illustrate the difference between observational and interventional data on title length using the two causal graphs shown in Figure 1.

Figure 1: The difference between the observed and interventional distributions, as shown by the two causal graphs.

Building on DoWhy and an early implementation in Adam's causality package, we decided to build a more general class of causal effect estimators. While classic causal inference often focuses on binary causal states and estimation of contrasts of the form E[Y|do(X=1)] − E[Y|do(X=0)], Pearlian causal inference focuses on estimating far more general quantities, like the distribution P(Y|do(X=x)). This works, in theory, even when X and Y are multivariate, and with mixed data types! This was the starting place for the do-sampler. If we could generate samples from this distribution, we could compute statistics of those samples, even plots like the one we started the article with!
The Do-Sampler: An example

Let's keep the motivation going with a simple example, and see the do-sampler in action. We'll solve our problem above by constructing a data set where the observational result is similar to the plot above (full notebook here, if you'd rather get to the point). Specifically, let's have one author who prefers a narrow range of title lengths, between around 12 and 18 words, and who is twice as good as the average person at writing titles. Other authors tend to write titles of random length, ranging from around 3 to 25 words. Naturally, the 12 to 18 word titles will perform better on average, and we get a graph that looks like the figure below. Even though there's no causal relationship between title length and click-through rate, there's statistical dependence!
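To make this concrete, here is a minimal sketch of how such a dataset could be generated. The column names (author, title_length, click_through_rate) are reused below, but the distributions and constants are illustrative assumptions, not the post's actual notebook:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

# One "good" author with a narrow range of title lengths and twice the
# average skill; everyone else writes random-length titles (3-25 words).
author = rng.choice(["good_author", "other"], size=n, p=[0.3, 0.7])
title_length = np.where(author == "good_author",
                        rng.integers(12, 19, size=n),   # 12-18 words
                        rng.integers(3, 26, size=n))    # 3-25 words
skill = np.where(author == "good_author", 2.0, 1.0)

# CTR depends only on the author's skill, not on title length.
click_through_rate = np.clip(rng.normal(0.04 * skill, 0.01, size=n), 0, 1)

df = pd.DataFrame({"author": author,
                   "title_length": title_length,
                   "click_through_rate": click_through_rate})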
Again, that's because a person who writes 12 to 18 word titles tends to also write titles that are clicked on more. Intuitively, if we control [4] for the author, the relationship between click-through rate and title length should go away. We can do that by calling the do-sampler to produce a random sample from the interventional distribution: the distribution of click-through rates under a policy that sets the title length to a specific value for all authors.

To do so, the do-sampler requires us to specify the cause ('title_length'), the outcome ('click_through_rate'), and a list of common causes that we believe can confound the relationship between the cause and outcome. In our case, let us assume that the author is the only confounder of the effect of titles. Then we pick a method ('weighting', i.e. importance sampling, as described below) and specify the variable types (here "d", meaning "discrete", and "c", meaning "continuous"). The package does the rest!
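As a sketch, the call looks roughly like the following. Importing dowhy.api is what registers the causal accessor on data frames; the exact keyword names can vary across dowhy versions, so treat this as illustrative rather than definitive:

import dowhy.api  # registers the `causal` accessor on pandas DataFrames

causal_df = df.causal.do(
    x="title_length",                   # the cause we intervene on
    outcome="click_through_rate",       # the outcome of interest
    common_causes=["author"],           # assumed confounders
    method="weighting",                 # inverse-propensity weighting
    variable_types={"title_length": "d",
                    "click_through_rate": "c",
                    "author": "d"},
)

# The result is an ordinary pandas.DataFrame sampled from P(Y | do(X=x)),
# so the usual plotting methods apply directly:
causal_df.plot.scatter(x="title_length", y="click_through_rate")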
The result is a new pandas.DataFrame. The simplicity of the do-sampler is that you can manipulate the interventional distribution just as you would a dataframe! To make a plot similar to the one above, you can run any plotting methods you like, such as the pandas native version or the seaborn version. Whatever your preferred method, you will get a plot that looks like the one below. The relationship between click-through rate and title length is gone! In particular, the big bump around 12–18 words due to confounding by author went away. The expected value of the click-through rate no longer changes with the length of the title, which encourages us to think of other ways of improving CTR.

This is a powerful approach! In contrast, most causal inference methods are built around estimating a parameter of a model. The simplest version of this is estimating the coefficient of a linear regression model. The coefficient, through a happy accident of the model specification, ends up being an estimator for E[Y|do(X=1)] − E[Y|do(X=0)]. This approach can be very limiting: you might be able to identify a difference in average outcomes between a control and a test group, but not a difference in median outcomes, or a difference in variances, etc. The do-sampler presents a completely different and extensible approach to causal inference.
We take advantage of a fundamental realization of Pearlian causal inference: we can identify the interventional joint distribution, P(Y|do(X=x)), and from that we can compute any statistic we like!

The Do-Sampler: Why it works

The core problem is that it's really hard to estimate probability distributions. This makes good sense: conditional distributions are multivariate functions, whereas contrasts, the focus of many classic estimators, are just a single parameter. Instead, we turn to generating samples from these distributions, which lets us compute any quantity from those samples just as we could with our original data set. That's the key idea that lets us do things like plot E[Y|do(x)] vs. x, as in the example above.

As it happens, there are some cool tricks for doing this sampling process! One simple trick is inverse propensity weighting. The intuition is that we weight each input data point inversely to its probability of receiving a particular treatment x. We can calculate a new average CTR at a fixed title length with this weighting scheme, where under-represented points (ones with lower propensity) get up-weighted (by the inverse propensity!). In the case of our article titles, we'd look at how likely different title lengths (X) are for each author (Z). For a fixed title length, we'd count an author's CTRs (Y) more times toward the average CTR at that length if they produced fewer titles of that length. This over-counting balances the overall average out to what it would have been if we had forced everyone to write titles of that length from the beginning! If you weight your data using these inverse propensity weights, you can compute the average of the outcomes, E[Y|do(X)], as if the outcomes were generated by P(Y|do(X=x)). This holds whenever we observe all confounders for the effect of X on Y.
The math works like this: we can write the expectation of some function f of Y (for us, the CTR) under the interventional distribution as

E[f(Y)|do(X=x)] = Σ_{y,z} f(y) P(y|x,z) P(z) = Σ_{y,z} f(y) P(y,x,z) / P(x|z),

giving the familiar propensity score, P(x|z), in the denominator. Now, we can add an estimator for P(Y, X, Z) using the count of data points with values (y, x, z), written N(y,x,z), and the overall count of data points, N:

P(y,x,z) ≈ N(y,x,z) / N.

We get

E[f(Y)|do(X=x)] ≈ (1/N) Σ_{y,z} f(y) N(y,x,z) / P(x|z),

and, in a slight abuse of notation, we can re-write this count as a sum over the data points themselves, which becomes

E[f(Y)|do(X=x)] ≈ (1/N) Σ_{i: x_i = x} f(y_i) / P(x_i|z_i),

which is our final estimator for E[f(Y)|do(X)]. This is just a weighted average of f(Y) over the data! You can get the same result instead with a re-sampling process, where each data point is sampled with inverse propensity weights, 1/P(X=x_i|Z=z_i). That means we can compute the expectations of arbitrary functions by computing them over a weighted random sample of the data and taking a simple average of the resulting values. This is just the same as sampling from P(Y|do(X))! We can extend the logic to compute arbitrary aggregations on these weighted random samples.

In the dowhy package, we implement the do-sampler using three different methods: simple weighting, kernel density estimation, and Monte Carlo methods. The do-sampler supports both continuous and discrete variables.
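For intuition, the weighted-average estimator above can be computed by hand in a few lines of pandas. This is a sketch of the weighting idea on the synthetic df from the earlier sketch, not dowhy's actual implementation:

d = df.copy()

# Propensity P(X = x | Z = z): the share of each author's titles that
# have a given length, estimated from raw counts.
n_xz = d.groupby(["author", "title_length"])["click_through_rate"].transform("size")
n_z = d.groupby("author")["click_through_rate"].transform("size")
d["propensity"] = n_xz / n_z

# Inverse propensity weights, then a weighted average of CTR per length.
d["w"] = 1.0 / d["propensity"]
ipw = (d.assign(wy=d["w"] * d["click_through_rate"])
         .groupby("title_length")[["wy", "w"]].sum())
ctr_do = ipw["wy"] / ipw["w"]  # approx. E[CTR | do(title_length = x)]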
The Usual Caveats

As with any causal inference method, the do-sampler relies on modeling assumptions. Estimating the above equation requires a model for computing the propensity scores, P(X|Z), for each desired causal state X. Further, we assume that we observe all common causes that may confound the effect of X on Y. If the propensity score model is mis-specified, or if there are unobserved confounders, you'll still get biased estimates of whichever statistic you're estimating. Using non-parametric density estimates does not get around this problem. The weighting sampler, for instance, uses kernel density estimation for continuous causal states. In the tails of empirical distributions, this approach can over-estimate propensities, so weights that should be very large might end up much smaller than their correct values. Just as you might constrain weights in a weighting approach to causal inference (to reduce variance), it might make sense to ignore unlikely observed causal states and focus on a conditional average treatment effect (CATE) for a subset of the data, or an analogous estimand.

The good news is that DoWhy provides robustness tests to check many of these assumptions. Our next step will be to integrate refutations into the do-sampler to make it easier to catch errors in modeling or estimation.

Footnotes

1. For an overview of causal inference techniques, check out a tutorial by Emre and Amit: https://causalinference.gitlab.io/kdd-tutorial/
2. Check the Book of Why for an introduction. Or dive into the Causality book if you are brave enough!
3. These logical rules are based on the do-calculus by Judea Pearl.
4. To be precise, we condition on the author variable here.
Life, Personal Finance, Budgeting, Entrepreneurship, Life Lessons.

Creating perspective is the key to justifying your purchases

(Photo by Vinicius Benedit on Unsplash)

Where I live, sushi is expensive. I'm not talking about the sushi you can get from the grocery store, or the half-off stuff at the quick-y-mart just after midnight. No, I'm talking about legitimate, fresh, yummy premium sushi you get from a sushi restaurant, prepared by an itamae. It's an indulgence that's worth it to me once in a while. To someone else, the indulgence might be paying for brie or splurging on premium vodka. But no matter what it is, there are things we spend money on which have a short-lived, one-time use. They pass through our bodies within a relatively short period of time and are gone forever. And yet, when it comes to similarly priced purchasing decisions about items which may last decades, we balk at the cost.

For example, I am helping a client remodel their bathroom. They have been stuck on purchasing a toilet for two weeks because they are concerned about cost. The average toilet in our market runs anywhere from $99 to $500, depending on which bells and whistles you want. So there is a 5x spread between the lowest-priced toilet and the most expensive.
But realistically, if you purchase a new toilet, you will be sitting on that throne for no less than a decade, and it will be used on average no less than 3 times a day. So does buying the more expensive one really matter over time? My thought process would be: "If I'm willing to splurge on a sushi dinner, and the experience is a one-time event which is over in minutes and gone forever within 24 hours, at an average cost of $100 plus tax and tip, how does it make sense to even worry about the range of costs of the toilet?"

I know you don't need a photo of a toilet... but here is one anyway! (Photo by Giorgio Trovato on Unsplash)

I mean, if we broke down the sushi dinner into a cost per use, it would be a direct 1-for-1: $100 for 1 event. If I break down the toilet, the cost per use over a decade at 3 uses a day, even at $500, is $500 ÷ (10 × 365 × 3) ≈ $0.046. So less than 5 cents per use. And realistically, that toilet will probably last longer than 10 years.

I used this same technique when I was buying jeans the other day. The jeans were around $75 on sale, and I hesitated to buy them. But then I compared them to a sushi dinner and realized I would be getting something for less than the cost of one dinner, and it would last much longer and be used many times over.
I don't use this technique to justify spending. I use it to get perspective on mid-range, multi-use purchases which initially, and irrationally, make me uncomfortable. The sushi dinner comparison helps me put into perspective items which may have higher price tags, but which should actually be measured by their cost per use over time, and not just the initial outlay.

I like this picture. Cost: 0 sushi dinners. (Photo by Nathan Dumlao on Unsplash)

Buying a coffee maker which costs $250 might initially make me cringe, but when I consider the lifetime output of that coffee maker, how long it may last, and how often I will use it, a cost of 2.5 sushi dinners makes the coffee maker a reasonable purchase to me. Now, this obviously doesn't work for bigger purchases like homes and cars, but it does work when considering things like courses and health-related products. My vitamins per month? Just under one sushi dinner. Would I honestly hesitate to spend $100 a month on my health when I would spend that on raw fish for an hour of enjoyment? Why am I losing it over purchasing a social media marketing course for $250 which could help me and my business, when that's realistically only a couple of sushi dinners?
In the end, no matter what your 'sushi dinner' is, you need to find something to price-anchor these kinds of purchases against, so you can stop stressing about decisions which don't deserve that much of your time. If you are delaying a purchase for the sake of a few dollars either way, you are wasting time and brain power for no reason. The more important question is: why are we so willing to balk at decisions like these over a few dollars here and there, and yet happily hand over the same money for an indulgence that's gone within a day?
Python, Physics, Pendulum, Numerical Methods.

(Illustration of a simple pendulum. Source: https://phet.colorado.edu/sims/html/pendulum-lab/latest/pendulum-lab_en.html)

Introduction

The simple pendulum is an example of a classical oscillating system. Classical harmonic motion and its quantum analogue represent one of the most fundamental physical models. The harmonic oscillator model can be used to describe and model phenomena such as heat, molecular and crystal bonding, lattice vibration, electromagnetic waves, vibrational spectroscopy, water waves, shock absorbers, sound waves, acoustics, earthquakes, etc.

In this article, we describe 3 basic methods that can be used for solving the second-order ODE (ordinary differential equation) of a simple harmonic oscillating system, and we then implement the 3 methods in a Python solver.

General Formalism

For a simple pendulum of length L, Newton's second law gives the equation of motion d²θ/dt² = −(g/L) sin θ. Working in time units where g/L = 1, the angular acceleration reduces to α(θ) = −sin θ, which is the form used in the code below. We remark here that the 3 methods described below could be extended to include other external forces, such as damping or frictional forces.
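Concretely, with step size η and acceleration function α(θ), the three schemes implemented below update the state as follows (these rules are read off the code in the next section):

Euler:     ω_{i+1} = ω_i + η·α(θ_i),   θ_{i+1} = θ_i + η·ω_i
Midpoint:  ω_{i+1} = ω_i + η·α(θ_i),   θ_{i+1} = θ_i + (η/2)·(ω_i + ω_{i+1})
Verlet:    θ_{i+1} = 2θ_i − θ_{i−1} + η²·α(θ_i),  started from θ_1 = θ_0 + ω_0·η + (η²/2)·α(θ_0)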
Python ODESolver for the Simple Pendulum

Import the necessary libraries:

import numpy as np
import matplotlib.pyplot as plt

ODE Solver:

class ODESolver(object):
    """Second-order ODE solver.

    Parameters
    ----------
    omega_0 : float
        Initial angular velocity.
    theta_0 : float
        Initial angular displacement (in degrees).
    eta : float
        Time step size.
    n_iter : int
        Number of steps.

    Attributes
    ----------
    time_ : 1d-array
        Stores the time value for each time step.
    omega_ : 1d-array
        Stores the angular velocity value for each time step.
    theta_ : 1d-array
        Stores the angular displacement value for each time step.

    Methods
    -------
    euler(alpha): Implements the Euler algorithm for the acceleration function alpha.
    midpoint(alpha): Implements the Midpoint algorithm for the acceleration function alpha.
    verlet(alpha): Implements the Verlet algorithm for the acceleration function alpha.
    """

    def __init__(self, omega_0=0, theta_0=10, eta=0.01, n_iter=10):
        self.omega_0 = omega_0
        self.theta_0 = theta_0
        self.eta = eta
        self.n_iter = n_iter

    def euler(self, alpha):
        """Implements the Euler method for the acceleration function alpha."""
        self.time_ = np.zeros(self.n_iter)
        self.omega_ = np.zeros(self.n_iter)
        self.theta_ = np.zeros(self.n_iter)
        self.omega_[0] = self.omega_0
        self.theta_[0] = self.theta_0 * np.pi / 180.0  # degrees -> radians
        for i in range(self.n_iter - 1):
            self.time_[i + 1] = self.time_[i] + self.eta
            self.omega_[i + 1] = self.omega_[i] + self.eta * alpha(self.theta_[i])
            self.theta_[i + 1] = self.theta_[i] + self.eta * self.omega_[i]
        return self

    def midpoint(self, alpha):
        """Implements the Midpoint method for the acceleration function alpha."""
        self.time_ = np.zeros(self.n_iter)
        self.omega_ = np.zeros(self.n_iter)
        self.theta_ = np.zeros(self.n_iter)
        self.omega_[0] = self.omega_0
        self.theta_[0] = self.theta_0 * np.pi / 180.0
        for i in range(self.n_iter - 1):
            self.time_[i + 1] = self.time_[i] + self.eta
            self.omega_[i + 1] = self.omega_[i] + self.eta * alpha(self.theta_[i])
            # average of the old and new angular velocities
            self.theta_[i + 1] = self.theta_[i] + 0.5 * self.eta * (self.omega_[i] + self.omega_[i + 1])
        return self

    def verlet(self, alpha):
        """Implements the Verlet method for the acceleration function alpha.

        Note that Verlet tracks only theta_; omega_ is not computed.
        """
        self.time_ = np.zeros(self.n_iter)
        self.theta_ = np.zeros(self.n_iter)
        self.theta_[0] = self.theta_0 * np.pi / 180.0
        # Bootstrap the second point from the initial conditions.
        self.time_[1] = self.eta
        self.theta_[1] = self.theta_[0] + self.omega_0 * self.eta + 0.5 * (self.eta ** 2) * alpha(self.theta_[0])
        for i in range(self.n_iter - 2):
            self.time_[i + 2] = self.time_[i + 1] + self.eta
            self.theta_[i + 2] = 2.0 * self.theta_[i + 1] - self.theta_[i] + (self.eta ** 2) * alpha(self.theta_[i + 1])
        return self

Define the angular acceleration function, α(θ) = −sin θ:

def alpha(x):
    return -np.sin(x)

Example 1: Euler Method

time = ODESolver(omega_0=0, theta_0=10, eta=0.1, n_iter=300).euler(alpha).time_
theta = ODESolver(omega_0=0, theta_0=10, eta=0.1, n_iter=300).euler(alpha).theta_
plt.plot(time, theta * 180 / np.pi, lw=3, color='red')
plt.xlabel('time (s)', size=13)
plt.ylabel('angle (deg)', size=13)
plt.title('Euler Method', size=13)
plt.show()

We observe that with a time step of 0.1, the Euler method gives a solution that is not stable. This problem can be solved by decreasing the time step to a smaller value, for example 0.001.
Example 2: Midpoint Method

time = ODESolver(omega_0=0, theta_0=10, eta=0.1, n_iter=300).midpoint(alpha).time_
theta = ODESolver(omega_0=0, theta_0=10, eta=0.1, n_iter=300).midpoint(alpha).theta_
plt.plot(time, theta * 180 / np.pi, lw=3, color='green')
plt.xlabel('time (s)', size=13)
plt.ylabel('angle (deg)', size=13)
plt.title('Midpoint Method', size=13)
plt.show()

We observe that with a time step of 0.1, the Midpoint method gives a solution that is not stable, but relatively better compared to the Euler method. This problem, too, can be solved by decreasing the time step to a smaller value, for example 0.001.
Example 3: Verlet Method

time = ODESolver(omega_0=0, theta_0=10, eta=0.1, n_iter=300).verlet(alpha).time_
theta = ODESolver(omega_0=0, theta_0=10, eta=0.1, n_iter=300).verlet(alpha).theta_
plt.plot(time, theta * 180 / np.pi, lw=3, color='blue')
plt.xlabel('time (s)', size=13)
plt.ylabel('angle (deg)', size=13)
plt.title('Verlet Method', size=13)
plt.show()

We observe that with a time step of 0.1, the Verlet method gives a reasonable solution that is stable.
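As noted in the General Formalism section, these methods can be extended to include damping. A minimal sketch of that extension (my own illustration, with a hypothetical damping coefficient q, not code from the article) is an Euler-style update where the acceleration also depends on the angular velocity:

def alpha_damped(theta, omega, q=0.5):
    """Angular acceleration with a linear damping term -q*omega."""
    return -np.sin(theta) - q * omega

# Euler-style update with damping; the solver's euler() would need the
# extra omega argument threaded through in the same way.
eta, n_iter = 0.01, 3000
theta = np.zeros(n_iter)
omega = np.zeros(n_iter)
theta[0] = 10 * np.pi / 180.0
for i in range(n_iter - 1):
    omega[i + 1] = omega[i] + eta * alpha_damped(theta[i], omega[i])
    theta[i + 1] = theta[i] + eta * omega[i]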
Because the Verlet method is based on the centered derivative, while the Euler and Midpoint methods use the forward derivative, the error in the Verlet method is quite minimal.

Summary

In summary, we've shown how a Python object can be built to implement the 3 basic methods for solving second-order ODEs, and we provided some sample outputs from the code. Based on this analysis, we observe that the Verlet method is computationally the most efficient, since it uses the centered derivative, a more symmetric definition of a derivative compared to the forward derivative.
Machine Learning, Deep Learning, Data Science, Artificial Intelligence, Python.

In this post we are going to explore RNNs and LSTMs. Recurrent Neural Networks were the first state-of-the-art algorithms of their kind that can memorize/remember previous inputs when given a huge set of sequential data. Before we dig into the details of recurrent neural networks, if