Statistics, Machine Learning, Data Science, Artificial Intelligence, Deep Learning.

There are three types of covariance:
Positive Covariance
Negative Covariance
Zero Covariance

Positive Covariance
We say there is a positive relationship between two random variables X and Y when Cov(X, Y) is positive: when X increases, Y also increases. There should be a directly proportional relationship between the two random variables. Consider the following example. In the above diagram, as X increases, Y also increases. As we said earlier, in this case we say Cov(X, Y) is positive. Let's consider the two points denoted above, i.e. (X1, Y1) and (X2, Y2). The means of the two random variables are given by μx and μy respectively.
(X1 - μx) returns a positive value, as X1 > μx.
(Y1 - μy) returns a positive value, as Y1 > μy.
The product of two positive numbers is positive.
(X2 - μx) returns a negative value, as X2 < μx.
(Y2 - μy) returns a negative value, as Y2 < μy.
The product of two negative numbers is also positive.
So if two random variables have this kind of relationship, the covariance between them will be positive.

Negative Covariance
We say there is a negative relationship between two random variables X and Y when Cov(X, Y) is negative: when X increases, Y decreases. There is an inversely proportional relationship between the two random variables. Consider the following example. In the above diagram, we can clearly see that as X increases, Y decreases. This is the case where Cov(X, Y) is negative. Let's check the two points (X1, Y1) and (X2, Y2). The means of the two random variables are given by μx and μy respectively.
(X1 - μx) returns a positive value, as X1 > μx.
(Y1 - μy) returns a negative value, as Y1 < μy.
The product of a positive and a negative number is negative.
(X2 - μx) returns a negative value, as X2 < μx.
(Y2 - μy) returns a positive value, as Y2 > μy.
Again, the product of a positive and a negative number is negative.
So if two random variables have this kind of relationship, the covariance between them will be negative.

Zero Covariance
When there is no relationship between two random variables, the covariance between them is said to be zero. In this scenario, the data points scatter over the X and Y axes in such a way that no linear pattern or relationship can be drawn from them.
Image Source: https://www.slideshare.net/JonWatte/covariance
This can also happen when the two random variables are independent of each other. However, a covariance of zero between two random variables does not necessarily mean there is no relationship at all: a nonlinear relationship can exist between two random variables and still result in a covariance value of zero!
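To make the three cases concrete, here is a minimal sketch in Python (NumPy, with made-up data; the variable names are mine, not from the article) that computes the sample covariance for a positively related, negatively related, unrelated, and purely nonlinear pair of variables:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

y_pos = 2 * x + rng.normal(scale=0.5, size=1000)    # moves with x -> positive covariance
y_neg = -2 * x + rng.normal(scale=0.5, size=1000)   # moves against x -> negative covariance
y_none = rng.normal(size=1000)                      # unrelated to x -> covariance near zero
y_nonlin = x ** 2                                   # strong nonlinear relationship, yet covariance near zero

for name, y in [("positive", y_pos), ("negative", y_neg), ("none", y_none), ("nonlinear", y_nonlin)]:
    cov = np.cov(x, y)[0, 1]  # off-diagonal entry of the 2x2 sample covariance matrix
    print(f"{name:>9}: Cov(X, Y) = {cov:+.3f}")
```

Note how the last case illustrates the point above: the relationship is strong but nonlinear, so the covariance still comes out close to zero.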
Properties of Covariance
Covariance of a variable with itself is nothing but the variance of that variable: Cov(X, X) = Var(X).
When the random variables are multiplied by constants (say a and b), the covariance can be written as Cov(aX, bY) = ab · Cov(X, Y).
The covariance between a random variable and a constant is always zero.
Cov(X, Y) is the same as Cov(Y, X).

Drawbacks of using Covariance
We can say whether the covariance between two random variables is positive or negative, but we cannot answer questions like "how positive?" or "how negative?". Covariance is also completely dependent on the scales/units of the numbers, so it is difficult to compare covariances across datasets with different scales. This drawback can be solved using Pearson's Correlation Coefficient (PCC).
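As a quick sanity check of these properties (a small illustrative sketch with arbitrary constants a and b, not from the article), NumPy's cov function can be used to verify them numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)
a, b = 3.0, -2.0

print(np.isclose(np.cov(x, x)[0, 1], np.var(x, ddof=1)))      # Cov(X, X) == Var(X)
print(np.isclose(np.cov(a * x, b * y)[0, 1],
                 a * b * np.cov(x, y)[0, 1]))                  # Cov(aX, bY) == ab * Cov(X, Y)
print(np.isclose(np.cov(x, y)[0, 1], np.cov(y, x)[0, 1]))      # Cov(X, Y) == Cov(Y, X)
```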
Pearson's Correlation Coefficient (PCC)
In statistics, a correlation coefficient is used to describe how strong the relationship between two random variables is. There are several types of correlation coefficients; here we look at Pearson's Correlation Coefficient (PCC) and the Spearman Rank Correlation Coefficient (SRCC). A few real-life cases you might want to look at:
The more time you spend running on a treadmill, the more calories you will burn.
The less time I spend marketing my business, the fewer new customers I will have.
As the temperature goes up, ice cream sales also go up.
As the weather gets colder, air conditioning costs decrease.
If a car decreases speed, travel time to a destination increases.
As the temperature decreases, more heaters are purchased.
Every correlation coefficient has a direction and a strength. The direction depends on the sign, which classifies correlation further:
Positive Correlation: If two random variables move together, that is, one variable increases as the other increases, we say there is a positive correlation between the two variables. Ex: As the temperature goes up, ice cream sales also go up.
Negative Correlation: If two random variables move in opposite directions, that is, as one variable increases the other decreases, we say there is a negative correlation between the two variables. Ex: As the weather gets colder, air conditioning costs decrease.
No / Zero Correlation: If two random variables show no relationship to one another, we say there is zero (or no) correlation between them.
Ex: There is no relationship between the amount of tea drunk and level of intelligence.
Image Source: https://www.simplypsychology.org/correlation.html
The above diagram is basically a set of scatter plots. Drawing a scatter plot helps us understand whether a correlation exists between two random variables or not. However, a scatter plot only describes which type of correlation exists between two random variables (positive, negative, or zero); it does not quantify the correlation. That is where the correlation coefficient comes into the picture. Let's deep dive into Pearson's correlation coefficient (PCC) right now. Pearson's correlation coefficient formulas are used to find how strong the relationship in the data is. The formulas return a value between -1 and 1, where +1 indicates a perfect positive linear relationship and -1 indicates a perfect negative linear relationship.
A result of zero indicates no linear relationship at all.
Case 1: In the first diagram, we can see that X and Y are negatively correlated. PCC returns -1 if and only if the values of X and Y fall exactly on the same line (meaning there is a strictly linear relationship between the two variables). In the second diagram, X and Y are also negatively correlated, yet PCC returns a value between -1 and 0 because the data points do not fall exactly on a line; some points are scattered around it (there is no strictly linear relationship between the variables).
Case 2: In the first diagram, there is some sort of linear relationship between X and Y. They are not perfectly linear, but a positive linear relationship exists between the two random variables, so PCC returns a value between 0 and 1. In the second diagram, X and Y are perfectly correlated: the data points fall on a single line. We can say it is a perfect linear relationship, hence PCC returns a value of +1.
Case 3: In the above case, no linear relationship can be seen between the two random variables. The absence of a linear relationship does not mean there is no relationship at all; a non-linear relationship could exist, but PCC does not take that into account. This is the perfect example of zero correlation, so PCC returns a value of 0.
Until now we have seen PCC returning values ranging between -1 and +1. But have you ever wondered how we get these values? PCC is calculated by dividing the covariance between two random variables by the product of their standard deviations:
ρ(X, Y) = Cov(X, Y) / (σx · σy)
If we unfold the formula further for a sample, we get:
r = Σ(xi − x̄)(yi − ȳ) / √( Σ(xi − x̄)² · Σ(yi − ȳ)² )
As stated earlier, this formula returns a value between -1 and +1, but the value needs to be interpreted carefully. The table below helps us understand the interpretability of PCC.
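Here is a small Python sketch that computes PCC both from the definition above and with SciPy, on made-up data (the arrays are illustrative, not from the article):

```python
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.5, 3.9, 6.2, 7.8, 10.4])

# PCC from the definition: covariance divided by the product of the standard deviations.
r_manual = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# Same quantity from SciPy (also returns a p-value for H0: rho = 0).
r_scipy, p_value = stats.pearsonr(x, y)

print(round(r_manual, 4), round(r_scipy, 4))  # the two values should match
```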
Limitations:
The correlation coefficient always assumes a linear relationship between the two random variables, regardless of whether that assumption holds true or not.
It is computationally expensive; it takes more time to calculate the PCC value.
The first limitation can be solved: another correlation coefficient, the Spearman Rank Correlation Coefficient (SRCC), can take a non-linear (monotonic) relationship into account.
Since SRCC works with the monotonic relationship, it is necessary to understand what monotonicity, or a monotonic function, means.

Monotonic Functions
Monotonic functions preserve the given order: the term monotonic means the function moves in a single direction. There are four types of monotonic functions:
Monotonically Increasing Function
Strictly Monotonically Increasing Function
Monotonically Decreasing Function
Strictly Monotonically Decreasing Function
A function g(x) is monotonically increasing if g(x) increases (or stays the same) as x increases: if x1 > x2 then g(x1) ≥ g(x2). (Monotonically Increasing Function)
If x1 > x2 implies g(x1) > g(x2), then g(x) is said to be a strictly monotonically increasing function. (Strictly Monotonically Increasing Function)
A function g(x) is monotonically decreasing if g(x) decreases (or stays the same) as x increases: if x1 < x2 then g(x1) ≥ g(x2). (Monotonically Decreasing Function)
If x1 < x2 implies g(x1) > g(x2), then g(x) is said to be a strictly monotonically decreasing function. (Strictly Monotonically Decreasing Function)
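A short Python sketch (the helper names are mine, not from the article) to check these definitions on a sequence of values:

```python
import numpy as np

def monotonically_increasing(values) -> bool:
    return bool(np.all(np.diff(values) >= 0))   # flat stretches are allowed

def strictly_increasing(values) -> bool:
    return bool(np.all(np.diff(values) > 0))    # no flat stretches allowed

g = [1, 2, 2, 3, 5]
print(monotonically_increasing(g))  # True
print(strictly_increasing(g))       # False, because of the repeated 2
```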
Now that we have understood monotonic functions and the monotonic relationship between two random variables, it is time to study the Spearman Rank Correlation Coefficient (SRCC).

Spearman Rank Correlation Coefficient (SRCC)
The Spearman Rank Correlation Coefficient (SRCC) is the nonparametric version of Pearson's Correlation Coefficient (PCC). Here, nonparametric means a statistical test that does not require your data to follow a normal distribution. Such tests are also known as distribution-free tests and can provide benefits in certain situations.
Image Source: statistics.laerd.com
The Spearman correlation evaluates the monotonic relationship between two continuous or ordinal variables. In a monotonic relationship, the variables tend to change together, but not necessarily at a constant rate. In SRCC we first find the ranks of the two variables and then calculate the PCC of those ranks. Thus we can define SRCC as follows: the Spearman Rank Correlation Coefficient is a nonparametric test that finds the Pearson Correlation Coefficient of the ranked values of the random variables. Since SRCC evaluates the monotonic relationship between two random variables, to accommodate monotonicity it is necessary to calculate the ranks of the variables of interest. How we calculate the ranks is discussed below.
Spearman's Rank Correlation Coefficient also returns a value from -1 to +1, where:
+1 = a perfect positive correlation between ranks
-1 = a perfect negative correlation between ranks
0 = no correlation between ranks
Steps for calculating Spearman's Correlation Coefficient:
Step 1: Check for a monotonic relationship.
Step 2: Calculate the ranks of the two variables.
Step 3: Calculate the PCC of the ranked variables.
How do we rank the variables? It is important to understand how to calculate the ranks of two random variables, since Spearman's Rank Correlation Coefficient is based on the ranks of the two variables.
The example below will help us understand the calculation process. The scores for nine students in physics and math are as follows:
Physics: 35, 23, 47, 17, 10, 43, 9, 6, 28
Mathematics: 30, 33, 45, 23, 8, 49, 12, 4, 31
Compute the students' ranks in the two subjects and compute the Spearman rank correlation.
Step 1: Let's visualize the data and see whether the relationship between the two random variables is linear or monotonic.
Scatter Plot: Physics vs Mathematics
As we can see, the relationship between the two random variables is not linear but monotonic in nature. This fulfils the first step of the calculation.
Step 2: Process of calculating ranks:
The lowest value is ranked 1.
Subsequent values are ranked accordingly.
When you have two identical values in the data (called a "tie"), you take the average of the ranks they would otherwise have occupied. If two equal values would have landed in the 6th and 7th positions, the average (6 + 7) / 2 = 6.5 is assigned to both.
In the above table, we calculated the ranks of the Physics and Mathematics variables. There is no tie situation here for the scores of either variable.
Step 3: Calculate the standard deviation and covariance of the ranks. (This step is necessary only when there is a tie between the ranks; if not, you can skip it.)
Step 4: Calculate SRCC. There are two methods to calculate SRCC, depending on whether there is a tie between ranks or not.
If there is no tie between ranks, use the following formula to calculate SRCC:
ρ = 1 − (6 · Σdi²) / (n · (n² − 1))
Here di is the difference between the two ranks of the i-th student. For example, the first student's Physics rank is 7 and Mathematics rank is 5 (using the lowest-value-ranked-1 convention from Step 2), so the difference is 2, and that number is squared. It is good practice to add another column, d², to hold all of these values, as shown below.
If there is a tie between ranks, use the other approach: SRCC = PCC of the ranks.
In our example there is no tie between the ranks, so we use the first formula:
ρ = 1 − (6 × 12) / (9 × (81 − 1)) = 1 − 72/720 = 0.9
The Spearman rank correlation for this set of data is 0.9.
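The whole calculation can be reproduced in a few lines of Python with SciPy (a sketch; scipy.stats.rankdata and spearmanr handle the ranking and the coefficient, including tie handling):

```python
import numpy as np
from scipy import stats

physics = np.array([35, 23, 47, 17, 10, 43, 9, 6, 28])
maths = np.array([30, 33, 45, 23, 8, 49, 12, 4, 31])

# Rank each variable (lowest value gets rank 1; ties would receive averaged ranks).
rank_p = stats.rankdata(physics)
rank_m = stats.rankdata(maths)

# SRCC via the no-tie formula 1 - 6*sum(d^2) / (n*(n^2 - 1)).
d = rank_p - rank_m
n = len(physics)
rho_formula = 1 - (6 * np.sum(d**2)) / (n * (n**2 - 1))

# SRCC directly from SciPy (equivalently, the PCC of the two rank vectors).
rho_scipy, p_value = stats.spearmanr(physics, maths)

print(rho_formula, rho_scipy)  # both are 0.9 for this data
```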
Advantages over PCC:
SRCC doesn't require a linear relationship between the two random variables. Whatever the exact relationship is, as long as Y increases when X increases and Y decreases when X decreases, SRCC works well.
SRCC handles outliers, whereas PCC is very sensitive to outliers. Source: Wikipedia: "The Spearman correlation is less sensitive than the Pearson correlation to strong outliers that are in the tails of both samples. That is because Spearman's rho limits the outlier to the value of its rank."

Significance Test
When we quantify the relationship between two random variables using one of the techniques we have seen above, it only gives us a picture of the sample. (We make this assumption because most of the time we are dealing with samples.) Sometimes our objective is to draw a conclusion about the population parameters; to do so we have to conduct a significance test. The significance test tells us whether what we see in the sample can be expected to hold in the population it was drawn from. We will be using hypothesis testing to make statistical inferences about the population based on the given sample. Here I will be considering Pearson's Correlation Coefficient to explain the procedure of a statistical significance test. The objective of this test is to make an inference about the population correlation ρ based on the sample correlation r. Let's define our null and alternative hypotheses for this test. The hypothesis test will determine whether the value of the population correlation parameter ρ is significantly different from 0 or not. We will conclude this based on the sample correlation coefficient r and the sample size n. If the test suggests ρ is 0 or close to 0, we conclude there is not enough evidence of a relationship between x and y. If there is a correlation between x and y in the sample but not in the population, then the correlation observed in the sample is due to random chance, or mere coincidence.
Let's see what steps are required to run a statistical significance test on random variables.
Step 1: Define your hypotheses
Defining the hypotheses is nothing but defining the null and alternative hypotheses. Remember, we are always trying to reject the null hypothesis, which means we are alternatively accepting the alternative hypothesis. In our case, accepting the alternative hypothesis means showing that there is a significant relationship between x and y in the population.
Null hypothesis H0: ρ = 0
Alternative hypothesis H1: ρ ≠ 0
Step 2: Student's t-test
Student's t-test is used to generalize from the sample to the population parameters. The test statistic is
t = r · √(n − 2) / √(1 − r²)
Image Source: https://fabian-kostadinov.github.io
Here, n is the sample size and r is the sample correlation coefficient. Once we get the t-value, depending on how big it is we can decide whether the same correlation can be expected in the population or not. But the challenge is deciding how big is actually big enough. This is where the p-value comes into the picture. But what is the p-value?
P-Value
A p-value is used in hypothesis testing to support or reject the null hypothesis; it measures the evidence against the null hypothesis. The smaller the p-value, the stronger the evidence that you should reject the null hypothesis. More precisely, the p-value is the probability of seeing a result at least as extreme as the one you observed if the null hypothesis were true; it is not the probability that the null hypothesis is true. Say you get a p-value of 0.0354: this means there is roughly a 3.5% chance of getting a result like yours purely by random chance (mere coincidence). If you get a p-value of 0.91, there is a 91% chance of getting such a result by random chance alone, which means the result tells you essentially nothing about your experiment. Therefore, the smaller the p-value, the more significant the result. In statistics, we keep a threshold value, commonly 0.05 (also known as the level of significance α). If the p-value is ≤ α, we state that there is less than a 5% chance that the result is due to random chance, and we reject the null hypothesis. If the p-value is > α, we fail to reject the null hypothesis. The calculation of the p-value can be done with various software packages; if we want to calculate it manually, we need two values: the t-value and the degrees of freedom.
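As a small hedged illustration (SciPy's pearsonr already reports this p-value directly; the manual version below just mirrors the t-statistic and the n − 2 degrees of freedom described above, using made-up values of r and n):

```python
import numpy as np
from scipy import stats

r = 0.9   # sample correlation coefficient (illustrative value)
n = 9     # sample size (illustrative value)

# t-statistic for H0: rho = 0, with n - 2 degrees of freedom.
t_value = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
df = n - 2

# Two-sided p-value from the Student's t distribution.
p_value = 2 * stats.t.sf(abs(t_value), df)

print(f"t = {t_value:.3f}, df = {df}, p-value = {p_value:.5f}")
# With alpha = 0.05, a p-value <= alpha would lead us to reject H0.
```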
Correlation Vs Regression
Image Source: https://keydifferences.com
The difference between correlation and regression is one of the most discussed topics in data science, and the question comes up in most data science interviews. Let's understand it thoroughly so we never get confused by this comparison. Correlation is a statistical measure that determines the direction as well as the strength of the relationship between two numeric variables. Regression, on the other hand, is a statistical technique used to predict the value of a dependent variable with the help of an independent variable. In correlation, we find the degree of relationship between two variables, not a cause-and-effect relationship as in regression. The value of the correlation coefficient varies between -1 and +1, whereas a regression coefficient is an absolute figure expressed in the units of the data.
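A small sketch showing the two side by side (illustrative data; scipy.stats.linregress gives the regression slope and intercept, pearsonr the correlation coefficient):

```python
import numpy as np
from scipy import stats

temperature = np.array([20, 22, 25, 27, 30, 32, 35])             # e.g. degrees Celsius
ice_cream_sales = np.array([110, 135, 155, 170, 205, 220, 250])  # e.g. units sold

# Correlation: one unitless number in [-1, 1] describing direction and strength.
r, _ = stats.pearsonr(temperature, ice_cream_sales)

# Regression: slope and intercept used to predict sales from temperature.
fit = stats.linregress(temperature, ice_cream_sales)
predicted_at_28 = fit.slope * 28 + fit.intercept

print(f"correlation r = {r:.3f}")
print(f"regression: sales ~ {fit.slope:.1f} * temp + {fit.intercept:.1f}")
print(f"predicted sales at 28 degrees ~ {predicted_at_28:.0f}")
```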
Correlation Vs Causation
Correlation and causation are among the most misunderstood terms in the field of statistics; I have seen many people use them interchangeably, so it is important to understand the nitty-gritty details behind these confusing terms. You might have heard the popular saying in statistics: "Correlation does not imply causation." This phrase is used to emphasize that a correlation between two variables does not imply that one causes the other. Let's take the above example. As per the study, there is a correlation between sunburn cases and ice cream sales, but that does not mean one causes the other. There could be a third factor causing or affecting both sunburn cases and ice cream sales. Yes, you guessed it right: it's the summer weather that drives both, but remember that an increase or decrease in sunburn cases does not do anything to ice cream sales.

Conclusion
So we have covered pretty much everything necessary to measure the relationship between random variables. I have also added some prerequisite chapters for beginners, like random variables and monotonic relationships. Hope I have cleared up some of your doubts today. Thanks for reading. See you soon with another post!

References
Statistics How To: https://www.statisticshowto.com/
Books, Business, Gamification.

Understanding Gamification
What is Gamification?
Gamification uses game mechanics and dynamics in non-game settings to engage users and solve problems. It's about making tasks more enjoyable and rewarding, leading to increased motivation and better outcomes.
Example: Duolingo uses gamification to make language learning fun and engaging by incorporating points, streaks, and levels.
Key Elements of Gamification
1. Points, Badges, and Leaderboards (PBL):
Points: Points provide immediate feedback and a sense of accomplishment. They can be used to reward users for completing tasks, making progress, or achieving milestones.
Example: LinkedIn awards points for profile completion, endorsements, and connections, encouraging users to stay active and engaged.
Badges: Badges recognize and celebrate achievements, offering users a sense of pride and recognition. They can be displayed publicly to showcase accomplishments.
Example: Foursquare uses badges to reward users for check-ins and exploring new places, creating a sense of adventure and achievement.
Leaderboards: Leaderboards introduce a competitive element, motivating users to improve their performance to rank higher. They foster a sense of community and drive engagement through competition.
Example: Fitbit's leaderboards show how users rank against their friends in terms of steps taken, fostering friendly competition and motivation.
Enhancing User Engagement
2. Narrative and Storytelling:
Creating a Compelling Narrative: A compelling story can transform a mundane task into an exciting journey. Narratives provide context and meaning, making the user experience more immersive and engaging.
Example: Nike Run Club uses storytelling in guided runs, where users follow narratives led by professional athletes, making each run more inspiring.
At Brainlighter, we save you 5000 hours by packing knowledge into actionable growth hacks you can apply in your life. Get the app here: 👉👉👉 Brainlighter
3. Progression and Feedback:
Progression Systems: Progress bars, levels, and milestones give users a sense of advancement and achievement. These systems help users see their progress and stay motivated to continue.
Example: Language learning apps like Babbel use levels and progress bars to show learners how far they've come and what they need to achieve next.
Immediate Feedback: Providing immediate feedback keeps users engaged and informed about their performance. Real-time feedback can come through notifications, progress updates, or instant rewards.
Example: Khan Academy provides instant feedback on quizzes and exercises, helping learners understand their mistakes and learn more effectively.
Boosting User Retention
4. Social Interaction and Community Building:
Fostering Community: Social features like forums, group challenges, and peer feedback build a sense of community and belonging. Users are more likely to stay engaged when they feel part of a group.
Example: Strava's social features allow athletes to follow friends, join clubs, and participate in group challenges, enhancing the sense of community.
5. Scarcity and Urgency:
Creating Scarcity: Limited-time offers, exclusive rewards, and time-bound challenges create a sense of urgency and drive user action. Scarcity can motivate users to act quickly to avoid missing out.
Example: E-commerce sites often use flash sales and limited-time discounts to encourage quick purchases.
Driving Innovation
6. Empowerment and Creativity:
Empowering Users: Allowing users to customize their experience and create content fosters a sense of ownership and creativity. Empowered users are more engaged and invested in the product.
Example: Minecraft empowers players to build and create their worlds, offering endless possibilities and fostering creativity.
7. Continuous Learning and Improvement:
Encouraging Learning: Gamification can promote continuous learning and improvement by rewarding users for acquiring new skills and knowledge.
Example: Duolingo uses gamification to encourage daily practice and continuous learning in language acquisition.
Mini FAQ: Key Insights from the Book 🌟
What is gamification? Gamification uses game mechanics and dynamics in non-game settings to make tasks more enjoyable and rewarding, leading to increased motivation and better outcomes.
How can points, badges, and leaderboards (PBL) enhance engagement?
Points: Provide immediate feedback and a sense of accomplishment, encouraging users to stay active.
Badges: Recognize and celebrate achievements, offering users pride and recognition.
Leaderboards: Introduce competition, motivating users to improve their performance and engage more.
Why is narrative and storytelling important? A compelling story can transform mundane tasks into exciting journeys, making the user experience more immersive and engaging.
How do progression systems and immediate feedback help?
Progression Systems: Show advancement and achievement, motivating users to continue.
Immediate Feedback: Keeps users engaged by providing real-time updates on performance.
What role does social interaction play in user retention? Social features like forums, group challenges, and peer feedback create a sense of community and belonging, encouraging users to stay engaged.
How do scarcity and urgency drive user action? Limited-time offers and exclusive rewards create urgency, motivating users to act quickly to avoid missing out.
How can empowerment and creativity boost engagement? Allowing users to customize their experience and create content fosters ownership and creativity, making them more invested in the product.
Why is continuous learning important? Gamification can promote continuous learning by rewarding users for acquiring new skills and knowledge, ensuring ongoing engagement and improvement.
What's the key takeaway from the book? By incorporating gamification elements like PBL, narratives, progression, feedback, social interaction, scarcity, empowerment, and continuous learning, you can transform your product to enhance engagement, boost retention, and drive innovation.
Gamification can transform your product by enhancing user engagement, boosting retention, and driving innovation. Growth Hackers,
Large Language Models, Machine Learning, Deep Learning, Data Science, Technology.

Understanding LoRA — Low-Rank Adaptation for Fine-Tuning Large Models
The math behind this parameter-efficient fine-tuning method
Source — Image generated using DALLE-3. Prompt: a smaller robot shaking hands with a bigger robot. The smaller robot is purple, lightning, active and energetic, while the bigger robot is frozen in ice and gray.
Fine-tuning large pre-trained models is computationally challenging, often involving the adjustment of millions of parameters. This traditional fine-tuning approach, while effective, demands substantial computational resources and time, posing a bottleneck for adapting these models to specific tasks. LoRA presents an effective solution to this problem by decomposing the update matrix during fine-tuning. To study LoRA, let us start by revisiting traditional fine-tuning.

Decomposition of ΔW
In traditional fine-tuning, we modify a pre-trained neural network's weights to adapt it to a new task. This adjustment involves altering the original weight matrix W of the network. The changes made to W during fine-tuning are collectively represented by ΔW, such that the updated weights can be expressed as W + ΔW. Now, rather than modifying W directly, the LoRA approach seeks to decompose ΔW. This decomposition is a crucial step in reducing the computational overhead associated with fine-tuning large models.
Traditional fine-tuning can be reimagined as above: here W is frozen, whereas ΔW is trainable. (Image by the blog author)

The Intrinsic Rank Hypothesis
The intrinsic rank hypothesis suggests that the significant changes to the neural network can be captured using a lower-dimensional representation. Essentially, it posits that not all elements of ΔW are equally important;
instead, a smaller subset of these changes can effectively encapsulate the necessary adjustments.

Introducing Matrices A and B
Building on this hypothesis, LoRA proposes representing ΔW as the product of two smaller matrices, A and B, with a lower rank. The updated weight matrix W′ thus becomes:
W′ = W + BA
In this equation, W remains frozen (i.e., it is not updated during training). The matrices B and A are of lower dimensionality, and their product BA is a low-rank approximation of ΔW.
ΔW is decomposed into two matrices A and B, both of which have lower dimensionality than d × d. (Image by the blog author)

Impact of Lower Rank on Trainable Parameters
By choosing matrices A and B to have a lower rank r, the number of trainable parameters is significantly reduced. For example, if W is a d × d matrix, updating W traditionally involves d² parameters. However, with B and A of sizes d × r and r × d respectively, the total number of trainable parameters reduces to 2dr, which is much smaller when r ≪ d.
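A minimal PyTorch-style sketch of this idea (illustrative only, not the reference LoRA implementation; the class name, the zero-initialization of B, and the layer sizes are my own choices):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen weight W plus a trainable low-rank update BA, as described above."""
    def __init__(self, d: int, r: int = 8):
        super().__init__()
        # Pre-trained d x d weight, kept frozen during fine-tuning.
        self.W = nn.Parameter(torch.randn(d, d), requires_grad=False)
        # Low-rank factors: B is d x r and A is r x d, so BA approximates the d x d update.
        self.A = nn.Parameter(torch.randn(r, d) * 0.01)
        self.B = nn.Parameter(torch.zeros(d, r))  # zero init => BA = 0, so training starts from W

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to x @ (W + BA)^T, without ever materializing the full d x d update.
        return x @ self.W.T + (x @ self.A.T) @ self.B.T

layer = LoRALinear(d=1024, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)        # 2 * d * r = 16384 trainable parameters
print(1024 * 1024)      # versus d**2 = 1048576 for full fine-tuning
```

With d = 1024 and r = 8, the trainable parameter count drops from roughly a million to about sixteen thousand, which is the 2dr versus d² comparison made above.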
The reduction in the number of trainable parameters achieved through the Low-Rank Adaptation (LoRA) method offers several significant benefits, particularly when fine-tuning large-scale neural networks:
Reduced Memory Footprint: LoRA decreases memory needs by lowering the number of parameters to update, aiding in the management of large-scale models.
Faster Training and Adaptation: By