Design, Design Process, Design Thinking, Design Management, UX Design.
some general activities that comprise most of the designer's work. The level of detail is up to individual teams. One important distinction is to separate qualitative from quantitative methods/activities, because the scale of measurement is different. Example: collaborative workshops such as innovation bootcamps or hackathons are one-shot workshops but made for a large number of participants, so the effort in making them happen is still high.

Design activities divided into qualitative and quantitative

Complexity scale

The complexity scale is mostly measured in time, or in sprints if your projects are managed through an agile framework. How to define the duration? Sometimes it is not clear where the design process ends due to constant iterations and modifications. As long as a designer's effort is requested, it should be counted as time involved in the process. Some projects last longer because they are more complex, or just complicated. Therefore the complexity scale should be valued as the duration of the "delivery" as a whole.

Complexity scale measured in time or sprints

Deliverables

The deliverables, or the design delivery, include all significant artifacts produced during the design activities. Sometimes design tools are also deliverables. When there are multiple deliverables, I'd suggest tracking the most all-encompassing one, or, if you would like to be accurate, assigning a percentage to each deliverable you'd like to track. The division of the percentages should be a personal estimation by the designer or the team. Otherwise, you can develop a secondary metric to monitor it, but I would avoid basing it exclusively on timespan. Design teams should decide, depending on their focus, which deliverables should be tracked. Since not all blueprints or wireframes have the same complexity, I'd suggest deciding on criteria. In this particular example, I opted for the following: Light, Custom, Advanced, and Extensive deliverables. Here is an example of a filled-in matrix:

Type of design deliverables

Management level

This is the trickiest category, as it entirely depends on the design team and the division of roles in that team. My suggestion is to look at it from the general perspective of the team as a whole to keep it light. At a later moment, the information can be split between individual teammates to get a very detailed overview of the project. You can also cross-reference the data with project planning sheets, such as days and roles assigned. In the example below, the management level is classified from the most involvement to the least involvement. DesignOps means the highest involvement delivered, while design supervision and consulting have the lowest involvement delivered. The point system can be adjusted accordingly.

Management level in percentage

The final goal: the advantages of monitoring design activity

What is the value design teams can gain from this? Other than having internal KPIs as drivers and tools for goal setting, it can also give us statistical data on how our team operates. The ultimate goal, however, should be to have data-supported reports on how efficient and hardworking the team is, to showcase to upper management and important stakeholders. It should be used to point out the importance of certain design activities and deliverables, and eventually the positive outcomes. A driven manager should try to cross-match the KPI data sheets with organizational ones such as customers gained, increased revenue, etc. Lastly, no designer loves math excessively, but luckily Excel can do that for you. You can set up a few simple rules after you decide on the categories, the criteria, and the minimum and maximum values.
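A spreadsheet formula or a few lines of code is enough for this. As a minimal sketch in Python (the category names, weights, and point values here are illustrative assumptions, not a prescribed standard):

# A minimal sketch of a weighted design-KPI score.
# Categories, weights, and points are illustrative; each team should define its own scale.
category_weights = {
    "activities": 0.2,        # qualitative + quantitative activities
    "complexity": 0.2,        # duration in time or sprints
    "deliverables": 0.3,      # weighted higher, per the discussion below
    "management_level": 0.3,  # weighted higher, per the discussion below
}

# Points scored by a hypothetical project, each on a 0-10 scale
project_points = {
    "activities": 6,
    "complexity": 4,
    "deliverables": 8,
    "management_level": 7,
}

overall_score = sum(category_weights[c] * project_points[c] for c in category_weights)
print(round(overall_score, 1))  # 6.5 here, which could then map to a "catchy" label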
Let's see how it all comes together with practical examples.

Design KPIs on a project example

As we can see from the example, some categories have more weight than others. Overall, higher values should be assigned to the Deliverables and Management level categories, because they say the most about the objectives achieved by design teams. Tip: you can create a classification of all projects to enable comparison. The overall sum of points can correspond to descriptive and "catchy" labels. Let's see how these KPIs could speak to our stakeholders: "This year we have worked on the "Hello" design project, which involved more than 100 users who were surveyed, interviewed, and took part in collaborative workshops. Indeed, we organized 8 workshops in 3 different cities in the country. The project had a rather large positive impact on the organization, and thanks to our operational involvement and later supervision it was delivered in just under 1 year."

Conclusion

Use this approach as a starting point and customize it to your team's needs. If you are not tracking your design operations yet, this will help you cover the main activities and outcomes. It works particularly well for small to medium-sized teams. How to define the scale for the dimensions and what points to assign is up to you. Remember that the numbers serve only to help us understand and calculate relations between categories. Most of the evaluations are left up to the experience and know-how of a designer, therefore I suggest continuously tracking your design KPIs; with time it will become an easy routine. I encourage teams to curate it collaboratively and keep it alive and dynamic.
Finance, Trading, Stock Market, Mathematics, Technical Analysis.
Photo by Maxim Hopman on Unsplash

Not All Lines Are Equal

A wise technical analyst once told me that you are only as good as the lines you draw on the chart. When I loaded up TradeStation back in the early 2000s, I was a newborn baby deer trying to walk. I walked into the battleground a warrior. I had prepared with the most important books: I had read the encyclopedia of Japanese candlesticks, The Zen Trader, and a book about basic technical indicators. Excited to see my first chart, I pulled up as many indicators as I could. They were decorated head to toe in colors. I even fooled myself into thinking I knew what I was doing. The worst part about it all is… I knew how to take a risk, I just couldn't manage one. So, as many trading stories begin, I started taking massive winners. Luck was mistaken for skill. The fall from grace was inevitable. The day came when the market put me in my place. I went from daydreaming about my future Ferrari to being a peasant once more. I knew it was time to pick myself up off the floor after a night of listening to Nickelback. A new low. After my evaluation, I realized the lines on my chart were no better than those a toddler draws. This is where I found out my lines should mean something. Like many before me and many after me, I stepped into linear regression.

It's Linearly That Easy

I often come across articles explaining the math but not implementing these tools from scratch. I feel that when going down a machine learning path, it is important to be able to build what you are implementing. While great packages exist, which you certainly should use, it's important to understand what is happening under the hood. Machine learning will be written about often here, and my goal will be to show how to build it yourself. If you would rather watch it, hop over to our YouTube. Linear regression is a great starting point. It introduces many recurring themes while remaining somewhat easy to understand. Most people are familiar with a lot of the math that will be shown. If you are not, this is still an amazing starting point. Don't make the same mistakes as me. Learn the right lines.

What Do We Need to Know Before We Start?

We will be building a linear regression class from scratch. It will be able to handle both simple linear regression and multiple linear regression. The math will be introduced in the next section. For now, understand that the first fits a line to the data, and the second a hyperplane. Essentially, the latter can handle multiple feature inputs and is not restricted to a 2D plane. There are a few key requirements we should keep in mind when using a linear regression model.

Linear Assumption — The model assumes that the relationship between the variables is linear.

Rescale Input Data — We want to normalize or scale our input data. This will make more sense when you see the model, but if one feature's input is much bigger than the others, it can dominate the model. This makes for a sad linear regression. (A quick standardization sketch follows this list.)

No Collinearity — We would like the input features to not be highly correlated with each other. If they are, the model will overfit, because the inputs provide redundant information.

Noise Removed — The model assumes that the input and output variables are not noisy, so we should remove outliers if we can.

Normal Distribution — The model will make better predictions if the input and output data are normally distributed. We would like the data to be as normal as possible.
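On the rescaling point, here is a minimal z-score standardization sketch with NumPy (my own illustration, not code from the article; the array name X_raw is just a placeholder):

import numpy as np

# Hypothetical feature matrix: rows are samples, columns are features
X_raw = np.array([[1.0, 200.0],
                  [2.0, 180.0],
                  [3.0, 240.0]])

# Z-score standardization: each column ends up with zero mean and unit variance,
# so no single feature dominates the gradient updates
X_scaled = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)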
After we recognize the above requirements, it is time to get on with it! We must train our model, which means finding the best coefficients for the linear formula. We can do this through gradient descent. We will get more into that after introducing all the fun math behind it. If you can't tell already, we're about to have more fun than when the 50 and 200 moving averages touch tips… I mean, cross over.

Is the Math Hard?

Not as hard as the beating the market gave me when I did no math. With an understanding of calculus, you will be totally fine. Even if you don't know calculus, with a little more time it will make sense.

simple and multiple linear regression
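In symbols (a reconstruction consistent with the notation in the text and the code below, not the article's original image), the two models are:

\hat{y} = \beta_0 + \beta_1 x

\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n = \beta_0 + \sum_{i=1}^{n} \beta_i x_i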
The first equation is the simple linear regression, which differs only slightly from the multiple linear regression. You have likely seen it in a math textbook as far back as high school. Y = mx + b was the move back in the day. It turns out it may have some use. Often you will see the β₀ value written as b, and the βᵢ values as w. β₀ is going to be your bias, and the other betas will be your weights. These are the parts you will optimize. Our inputs are the x's, and the predicted value is yhat (ŷ). The simple way to turn the first formula into a multiple linear regression is to represent these values as matrices. You will see this done in code, so I won't write them all out as such.
As I mentioned above, we need to optimize those parameters. What are we optimizing for? This is where our handy-dandy cost function comes in. Our goal is going to be to minimize the cost function. A common one to use is the mean squared error (MSE), and it makes a lot of sense.

Mean Squared Error
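Written out over m samples (a reconstruction that matches the code later on), the cost is:

J(w, b) = \mathrm{MSE} = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - \hat{y}_i \right)^2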
The above is the mean squared error formula. Notice how we can swap our ŷ from earlier into the formula if we would like to expand it. This is the average squared difference between the actual value and our predicted value, so it makes sense that we want to minimize it! If we want to minimize the MSE, we need to find the weights and bias that give us the lowest value. If you are familiar with partial derivatives, the next part will make a lot of sense. If you are not, just know that a partial derivative tells us the rate of change of a multivariable function with respect to one chosen variable while treating all other variables as constant.

partial derivative formulas
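Reconstructed to match the vectorized code further down, the partial derivatives of the MSE with respect to the weights and the bias are:

\frac{\partial J}{\partial w} = -\frac{2}{m} \sum_{i=1}^{m} x_i \left( y_i - \hat{y}_i \right), \qquad \frac{\partial J}{\partial b} = -\frac{2}{m} \sum_{i=1}^{m} \left( y_i - \hat{y}_i \right)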
So, as you can see, we are taking the partial with respect to the weights and with respect to the bias (I swapped the betas out for w and b for simplicity). Once we have those, all we have to do is something called gradient descent. This is where we update the previous iteration's weights and bias by subtracting the partial derivative multiplied by a learning rate. This learning rate should not be too large or too small. If it is too large, it may skip right over the minimum we are looking for; if it is too small, it may take a long time. Ain't nobody got time for that. When you put it together, we have two update formulas that look like this.

update formula
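With α as the learning rate, the updates (reconstructed from the description above) are:

w := w - \alpha \frac{\partial J}{\partial w}, \qquad b := b - \alpha \frac{\partial J}{\partial b}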
As you can see, the new weights and bias are equal to their previous values minus the learning rate (alpha) multiplied by the corresponding partial derivative. With all the math listed, you are now ready to see it implemented in code!

Cooking Up Code in Our Crockpot

I will start by writing the functions out one by one. Then, at the end, I will leave the entire code put together into a class. From there you can play around with it and try it out on some data! Remember that the requirements listed above are important. If you are using it on financial data, be very skeptical of your results. If it seems too good to be true, it probably is! First, I think our linear regression class should have a function that lets us track the mean squared error if we would like. This lets us plot and track our loss over a bunch of iterations.

import numpy as np

def mean_squared_error(y, y_hat):
    return np.square(np.subtract(np.array(y), np.array(y_hat))).mean()

The above code performs the MSE calculation with NumPy. It allows us to pass in arrays of values for y and ŷ. It first subtracts the two vectors, then squares the values and takes the mean. It is often a good idea to find ways to vectorize your calculations. Machine learning takes a lot of data, and if the dataset you are training on is huge, little things like this will speed it up. Next, we will write the functions to fit, update, and predict.

import numpy as np

class LinearRegression():
    def __init__(self, learning_rate, iterations):
        self.lr = learning_rate
        self.iterations = iterations

    # Fit the regression
    def fit(self, X, Y):
        # Rows and columns
        self.m, self.n = X.shape
        # Initialize an array of zeros the size of X's columns
        self.W = np.zeros(self.n)
        self.b = 0
        self.X = X
        self.Y = Y
        # Learning through gradient descent
        for i in range(self.iterations):
            self.update()
        return self

    # Cleans up fit() by updating in a separate method
    def update(self):
        Y_pred = self.predict(self.X)
        # Partial derivatives (vectorized)
        dW = -(2 * (self.X.T).dot(self.Y - Y_pred)) / self.m
        db = -2 * np.sum(self.Y - Y_pred) / self.m
        # Update weights and bias
        self.W = self.W - self.lr * dW
        self.b = self.b - self.lr * db
        return self

    # Model prediction with current weights and bias
    def predict(self, X):
        return X.dot(self.W) + self.b

The above is a full class that implements the functions we described. We initialize the class with the number of iterations we would like it to do and a learning rate. We then fit on our training set of data through the fit method, which goes through every iteration and updates the weights and bias. It calls the update method, which performs gradient descent. Our predict function is just our simple prediction. You will notice that everything is done using matrices. This makes the computation faster, so training on a lot of data doesn't take forever. I'd prefer to finish fast (lol). The code above is literally just plugging in the math. We constantly get our prediction and then tweak the parameters a bit using gradient descent. This continues until the iterations are over. We would then, optimally, have a model that makes us that sweet, sweet cash!
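To see the class in action, here is a small usage sketch on synthetic data (my own illustration, not from the original article; the numbers are arbitrary):

import numpy as np

# Synthetic data: y = 3*x0 - 2*x1 + 5, plus a little noise
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
Y = 3 * X[:, 0] - 2 * X[:, 1] + 5 + rng.normal(scale=0.1, size=200)

model = LinearRegression(learning_rate=0.05, iterations=1000)
model.fit(X, Y)

print(model.W)  # should land near [3, -2]
print(model.b)  # should land near 5
print(mean_squared_error(Y, model.predict(X)))  # should be close to the noise variance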
Conclusion

Today was an important day! You figured out which lines are the right lines! Pat yourselves on the back. This is a major step in the right direction. While linear regression is not always the most effective tool, it can be quite powerful in certain situations. We can use it for a wide variety of problems beyond just predicting price. Linear regression is the building block for many more complex models to come, and a great introduction. You will find that it is quite powerful when you think outside of the box. Often we want to know whether something has a linear relationship, and this helps us do exactly that! With great knowledge comes great power. Use what you've learned for good! I will add a link to our YouTube linear regression video if you'd like to see the code in action. I will put the MSE in with the class under here as well.

import numpy as np

class LinearRegression():
    def __init__(self, learning_rate, iterations):
        self.lr = learning_rate
        self.iterations = iterations
        self.loss = []

    @staticmethod
    def mean_squared_error(y, y_hat):
        return np.square(np.subtract(np.array(y), np.array(y_hat))).mean()

    # Fit the regression
    def fit(self, X, Y):
        # Rows and columns
        self.m, self.n = X.shape
        # Initialize an array of zeros the size of X's columns
        self.W = np.zeros(self.n)
        self.b = 0
        self.X = X
        self.Y = Y
        # Learning through gradient descent
        for i in range(self.iterations):
            self.update()
        return self

    # Cleans up fit() by updating in a separate method
    def update(self):
        Y_pred = self.predict(self.X)
        # Log the loss so we can view it later
        mse = self.mean_squared_error(self.Y, Y_pred)
        self.loss.append(mse)
        # Partial derivatives (vectorized)
        dW = -(2 * (self.X.T).dot(self.Y - Y_pred)) / self.m
        db = -2 * np.sum(self.Y - Y_pred) / self.m
        # Update weights and bias
        self.W = self.W - self.lr * dW
        self.b = self.b - self.lr * db
        return self

    # Model prediction with current weights and bias
    def predict(self, X):
        return X.dot(self.W) + self.b

If you found this or
Self Improvement, Self, Self-awareness, Decision Making, Self Love.
Self-improvement

Tap into this hidden power to accomplish anything in life.

Image by Brett on Unsplash

This morning I was going through Anthony Robbins' book "Awaken the Giant Within", and after a few pages I stumbled upon the concept of true decision-making and how it can help us achieve anything. On the first pass I didn't pay much attention and continued reading, but then I took a pause and read it a second time for clarity. Honestly, I was amazed at how simple yet powerful this concept is. Most of us don't know how to utilize the power of decision to transform our lives. It's not some fancy self-help tip that looks good on the outside but has no practicality. I have used it and got results, and you can too.

Look, we all make decisions every day. The reason why 90% of us don't succeed in our pursuits is a lack of clarity and commitment to our decisions. What is that one thing you want to do that you have been putting off for so long? It can be quitting smoking, waking up early, managing your finances, a career change, or building that dream body. Whatever it is, my friend, enough of thinking and making plans; it's time to act on it. The reason you've been putting it off is a lack of determination. We all have that power lying dormant inside us. To unlock the power of true decision, you'll have to make one. Whatever you decide, it should have two elements: clarity about what you want and determination to stay at it. These two elements will bring forth all the answers required to move ahead.

There is something magical about making a committed decision. Now, what is committed decision-making, you may ask? It's the decision that forces you to take action. The power of decision works only in a state of emergency, or in other words, when you leave yourself no other option than to take action. It has to be a do-or-die situation for you. In that state of emergency, your system aligns with that unknown power of the universe to guide you, to set you in motion to act.

There is a 4-step formula that can help you accomplish possibly anything:
1. Clearly decide what it is you're absolutely committed to achieving.
2. Be willing to take the required action.
3. Continuously notice what's going well and what needs improvement.
4. Keep changing accordingly until you accomplish your goal.

Let's understand, with the help of an example, what true decision-making looks like. Two people go to a doctor for a check-up. One is a patient suffering from terminal cancer, and the other is an average healthy person who wants to live a more active and healthy lifestyle. The doctor recommends lifestyle changes for both of them. To the cancer patient, the doctor says: you're already at stage 4 cancer and don't have much time left, so go live happily, eat a healthy whole-food plant-based diet, manage stress, exercise daily for 30 minutes, and spend time with your loved ones. Similar advice is given to the average healthy person seeking a healthy lifestyle. But who do you think will follow the doctor's advice more seriously? The person fighting for life will leave no stone unturned and will do everything possible to live. The urgency to live will bring about all the necessary changes he wants, because he doesn't have a choice. That, my friend, is where the hidden power of decision comes into action. It gives answers to all your hows, wheres, and whens.

"It's in your moments of decision that your destiny is shaped" — Anthony Robbins

In a nutshell: realize that the hardest part of achieving anything is making committed decisions. Make decisions intelligently but quickly; don't take forever to decide. Learn from your decisions — there is no other way around it. Nobody's perfect; at times you're going to screw it up, no matter what you do. When the inevitable happens, learn from your experience. Ask yourself what went wrong and how you can use this information to succeed in the future. The more decisions you make, the stronger your decision-making muscle gets. Unleash your power right now, make some decisions you have been putting off, and see the magic happen.

Wrapping up with a quote for you to ponder:

"Life is either a daring adventure or nothing" — Helen Keller

Thanks for reading :) Subscribe to never miss a story. Let's catch up on LinkedIn.

7 Life Lessons People Learn too Late in Life: Our everyday habits decide our future selves. sagrikaoberoi.medium.com
Physics, Python Programming, Runge Kutta.
One of the most common math problems that I stumbled across in grad school was the Ordinary Differential Equation, otherwise known as the ODE, and one of the challenges that had me stumped for a while was: how do I solve ODEs in Python? Thankfully, I stumbled across two methods, the Runge-Kutta method and SciPy's built-in function.

Runge-Kutta Method

The Runge-Kutta method is a numerical approximation for ODEs, developed by Carl Runge and Wilhelm Kutta. By using four slope values within an interval, which do not necessarily fall on the actual solution, and averaging out the slopes, one can get a pretty nice approximation of the solution. For a more detailed explanation of the Runge-Kutta method and its variations, I highly suggest researching the history, derivation, and applications using your favorite textbook/website. For this example, we will be focusing on the fourth-order Runge-Kutta method to help us solve the 1D scattering problem.

Coding

To start off our code, we are going to import some packages that will help us with the math and the visualization.

import cmath                     # To help us out with the complex square root
import numpy as np               # For the arrays
import matplotlib.pyplot as plt  # Visualization

From here, we start defining our initial parameters for the equations.

mass = 1.0   # Mass, one for simplicity
hbar = 1.0   # Hbar, one for simplicity
v0 = 2.0     # Initial potential value
alpha = 0.5  # Value for our potential equation
E = 3.0      # Energy
i = 1.0j     # Defining the imaginary number
x = 10.0     # Initial x-value
xf = -10.0   # Final x-value
h = -.001    # Step value

xaxis = np.array([], float)      # Empty array to fill with our x-values
psi = np.array([], complex)      # Empty array to fill with the values of the initial equation we are trying to solve; defined as complex to hold complex numbers
psiprime = np.array([], complex) # Empty array to fill with the values of the first-derivative equation we are trying to solve; defined as complex to hold complex numbers

Once we have our initial values, we start to work on the functions that define the equations we are about to use. The main equation we have is k(x), which is a reworked version of Schrödinger's equation solved for the variable k, as well as our Ψ equations, which will be defined by psione(x) and psitwo(x). To further explore the equations, MIT OCW has lectures that anyone can access for free.

def v(x):       # Potential equation we will be using for this example
    return v0/2.0 * (1.0 + np.tanh(x/alpha))

def k(x):       # Reworked Schrödinger equation, solved for k
    return cmath.sqrt((2*mass/(hbar**2))*(E - v(x)))

def psione(x):  # Psi, the wavefunction equation
    return np.exp(i*k(x)*x)

def psitwo(x):  # Derivative of the psione equation
    return i*k(x)*np.exp(i*k(x)*x)

Now this is where we get to the good part. First, we need to define an array that contains our initial-condition wavefunctions.

r = np.array([psione(x), psitwo(x)])  # Array with the wavefunctions; this is where our initial-condition equations usually go

With these equations set in our array, we can iterate both of them through the Runge-Kutta method, which will be defined below, and have it give us the approximate solutions for the equations we defined with psione(x) and psitwo(x). But before we reach the main part of the program, we need to define one more important function.

def deriv(r, x):
    return np.array([r[1], -(2.0*mass/(hbar**2) * (E - v(x))*r[0])], complex)  # The double star, **, is for exponents

The deriv function is what each Runge-Kutta evaluation passes the current state through; it grabs our values from the array r and pushes them through these conditions. The first value returned is quite simple: it is just the second entry of the array, the derivative ψ′. The second value, however, gets a different treatment. This time, the value goes through another form of Schrödinger's equation, one that takes into consideration the wavefunction psione(x), i.e. r[0].
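In other words, writing r = [ψ, ψ′], the deriv function encodes the time-independent Schrödinger equation as a coupled first-order system (a reconstruction consistent with the code above):

\frac{d}{dx} \begin{pmatrix} \psi \\ \psi' \end{pmatrix} = \begin{pmatrix} \psi' \\ -\frac{2m}{\hbar^2} \left( E - V(x) \right) \psi \end{pmatrix}, \qquad \text{which follows from} \quad -\frac{\hbar^2}{2m} \psi'' + V(x)\,\psi = E\,\psi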
# While loop to iterate through the Runge-Kutta. This particular version, the fourth order, uses four slope values
# that help approximate the next slope value, from k1 to k2, k2 to k3, and k3 to k4.
# This loop also appends the values, starting with the initial values, to the empty arrays we initialized earlier.
while (x >= xf):
    xaxis = np.append(xaxis, x)
    psi = np.append(psi, r[0])
    psiprime = np.append(psiprime, r[1])
    k1 = h*deriv(r, x)
    k2 = h*deriv(r + k1/2, x + h/2)
    k3 = h*deriv(r + k2/2, x + h/2)
    k4 = h*deriv(r + k3, x + h)
    r += (k1 + 2*k2 + 2*k3 + k4)/6
    x += h  # The += here, and on the line above, is the same as x = x + h: it updates x by adding the step value to the previous x

Here, the loop pretty much goes over the whole process of the Runge-Kutta method. By using approximations of the slopes, as given by the k values, each k value helps approximate the next slope, bringing us one step closer to solving for f(x). Furthermore, after getting each of our slopes, we obtain a weighted average and update our array with the new values to get them ready for the next iteration. This process continues over our defined range on the x-axis, which ends up giving us the necessary values for plotting our soon-to-be-solved ODE.
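For reference, the classical fourth-order Runge-Kutta step that the loop implements can be written as (f is our deriv function, h the step size):

k_1 = h f(r_n, x_n), \quad k_2 = h f\left(r_n + \tfrac{k_1}{2}, x_n + \tfrac{h}{2}\right), \quad k_3 = h f\left(r_n + \tfrac{k_2}{2}, x_n + \tfrac{h}{2}\right), \quad k_4 = h f\left(r_n + k_3, x_n + h\right)

r_{n+1} = r_n + \tfrac{1}{6} \left( k_1 + 2k_2 + 2k_3 + k_4 \right), \qquad x_{n+1} = x_n + h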
Overview

The Runge-Kutta method can be easily adapted to plenty of other equations; most of the time we just have to adjust the deriv function and our initial-condition equations. Other examples include the pendulum ODEs and planetary-motion ODEs. Down below, you can find the full code, along with extra steps such as the functions that solve for the reflection and transmission values, as well as how to plot our values.

import cmath                     # To help us out with the complex square root
import numpy as np               # For the arrays
import matplotlib.pyplot as plt  # Visualization

mass = 1.0   # Mass, one for simplicity
hbar = 1.0   # Hbar, one for simplicity
v0 = 2.0     # Initial potential value
alpha = 0.5  # Value for our potential equation
E = 3.0      # Energy
i = 1.0j     # Defining the imaginary number
x = 10.0     # Initial x-value
xf = -10.0   # Final x-value
h = -.001    # Step value

xaxis = np.array([], float)      # Empty array to fill with our x-values
psi = np.array([], complex)      # Empty array to fill with the values of the initial equation we are trying to solve
psiprime = np.array([], complex) # Empty array to fill with the values of the first-derivative equation we are trying to solve

def v(x):       # Potential equation we will be using for this example
    return v0/2.0 * (1.0 + np.tanh(x/alpha))

def k(x):       # Reworked Schrödinger equation, solved for k
    return cmath.sqrt((2*mass/(hbar**2))*(E - v(x)))

def psione(x):  # Psi, the wavefunction equation
    return np.exp(i*k(x)*x)

def psitwo(x):  # Derivative of the psione equation
    return i*k(x)*np.exp(i*k(x)*x)

r = np.array([psione(x), psitwo(x)])  # Array with the wavefunctions; this is where our initial-condition equations usually go

def deriv(r, x):
    return np.array([r[1], -(2.0*mass/(hbar**2) * (E - v(x))*r[0])], complex)  # The double star, **, is for exponents

# While loop to iterate through the Runge-Kutta. This particular version, the fourth order, uses four slope values
# that help approximate the next slope value, from k1 to k2, k2 to k3, and k3 to k4.
# This loop also appends the values, starting with the initial values, to the empty arrays we initialized earlier.
while (x >= xf):
    xaxis = np.append(xaxis, x)
    psi = np.append(psi, r[0])
    psiprime = np.append(psiprime, r[1])
    k1 = h*deriv(r, x)
    k2 = h*deriv(r + k1/2, x + h/2)
    k3 = h*deriv(r + k2/2, x + h/2)
    k4 = h*deriv(r + k3, x + h)
    r += (k1 + 2*k2 + 2*k3 + k4)/6
    x += h  # Updates x by adding the step value to the previous x

# Grabbing the last values of the arrays and redefining our x-axis
psi1 = psi[20000]; psi2 = psiprime[20000]; x = 10; xf = -10

def reflection(x, y):
    aa = (psi1 + psi2/(i*k(y)))/(2*np.exp(i*k(y)*y))
    bb = (psi1 - psi2/(i*k(y)))/(2*np.exp(-i*k(y)*y))
    return (np.abs(bb)/np.abs(aa))**2

def transmission(x, y):
    aa = (psi1 +