Physics, Physicist, Quantum Mechanics.
392–399. 1963. Dirac, P. A. M., A Remarkable Representation of the 3 + 2 de Sitter Group, J. Math. Phys. [4], 901–909. 2020. Kim, Y. S. and Noz, M. E., Integration of Dirac's Efforts to Construct a Quantum Mechanics which is Lorentz-covariant, Symmetry [12(8)], 1270 (1–30). 2021. Baskal, S., Kim, Y. S., and Noz, M. E., Physics of the Lorentz Group, Second Edition: Beyond High-energy Physics and Optics (IOP Publishing: Bristol, UK).
Research Objectives: Professor Young Suh Kim integrates Dirac's pivotal papers to construct a quantum mechanics that is valid in the Lorentz-covariant world, based on Einstein's special relativity, which produces E = mc².
Collaborators: Marilyn Noz, New York University, and Sibel Baskal of Middle East Technical University.
Bio: 1961, PhD in Physics from Princeton University. 1962–2007, Assistant, Associate, and Full Professor of Physics at the University of
.
unit of the Islamic Revolutionary Guard Corps) are both fighting on behalf of the Bashar al-Assad regime in Syria. Some (but not all) of the rebel forces they are fighting in Syria are closely affiliated with al Qaeda. If the CIA were right — that an al Qaeda group was planning to massacre civilians
how much the McClatchy story helps the intelligence community. The U.S. government has been dogged by Edward Snowden's leaks about its pervasive surveillance programs. Then, in one fell swoop, there is a success story where the U.S. government's vast surveillance powers do some act of unequivocal
Lebanon would be a provocative move regardless of who did it, but for an al Qaeda group to try it is downright scary. In June, an al Qaeda group clashed with the Lebanese army in Sidon, Lebanon, leaving 16 people dead. Now they're trying to blow up civilians in southern Lebanon. Will the CIA be able
Healthcare, Medical Imaging, Computer Vision, Digital Health, Healthtech.
There is a lot of buzz around using AI to analyze medical imaging, including high-profile efforts from IBM and Google. How much is hype and how much is reality? We can debate endlessly about the state of the technology, but what is very clear is the need for innovation.
Imaging Is The New Physical
There were about 600M radiology images done in the US in 2017, x-rays making up 50% of them, a number that has been growing consistently over the years. There are many reasons for this trend: Equipment — more and better imaging equipment. Tradition — an acceptance across medical fields that imaging can lead to better diagnosis and treatment. Satisfaction — an emphasis on patient satisfaction, since patients feel they are getting better care when they get imaging done. Legal — reducing the risk of lawsuits, because imaging can increase the certainty of a medical decision (granted, too much information is sometimes as dangerous as too little).
The Trade-Off: It's Not Cost, It's People
But there is a big trade-off in increased imaging: cost, and especially the cost of trained personnel to analyze the images. Radiology is the leading letter in the "ROAD to happiness" (Radiology, Ophthalmology, Anesthesia, Dermatology, the four specialties perceived as having good work-life balance), but the reality is that the growth in imaging has increased workload far more than the supply of radiologists. An example metric: a Mayo study found radiologists have gone from reading 3 MRI images per minute to 12 in just a decade. Sure, markets will correct themselves, and we should expect more specialists in an overworked field. But medicine is not a perfect market; in fact, it is quite regulated, which means supply and demand are tempered by many constraints, and it takes a decade to train the new generation from medical school to radiologist. The problem is even worse in emerging markets, which are leapfrogging straight into more imaging with a severe dearth of doctors.
AI As The Solution
Overall, I firmly believe AI will not take jobs away from doctors any time soon, but will actually help them become much more efficient and address a growing need. If you are an entrepreneur in the space, or considering it, you will likely have heard of efforts like Arterys, working on cardiac MRI, or Voxel Cloud, which has been deployed widely in China. The full list is actually far bigger and is receiving a high amount of VC investment. We obviously have many problems to solve: Explainability — we don't always know how to explain how an AI arrived at a particular prediction; how will the FDA and public opinion evolve on this? Sensitivity and specificity — what to do when the machine produces false positives and false negatives? Privacy — we need to figure out the right balance between protecting patients' data and making it available enough that algorithms can leverage learnings for other individuals. But AI is key if we want not only to provide better care, but to provide it to more people.
My wife is a radiologist at Stanford and I interact regularly with other radiologists. These are purposely short articles focused on practical insights (I call it gl;dr — good length; did read). I would be stoked if they get people interested enough in a topic to explore it in further depth. I work for Samsung's innovation unit
Machine Learning, Neural Networks, Forecasting, Data Analysis, Time Series Analysis.
Predicting Sequential Data using LSTM: An Introduction
Predicting the future of sequential data like stocks using Long Short-Term Memory (LSTM) networks.
Forecasting is the process of predicting the future using current and previous data. The major challenge is understanding the patterns in the sequence of data and then using those patterns to analyse the future. If we were to hand-code the patterns, it would be tedious, and the patterns may change with the next batch of data. Deep learning has proven to be better at understanding patterns in both structured and unstructured data. To understand the patterns in a long sequence of data, we need networks that can analyse patterns across time. Recurrent networks are the ones usually used for learning such data; they are capable of understanding long- and short-term dependencies and temporal differences. Alright, no more intro… This post will show you how to implement a forecasting model using LSTM networks in Keras, with some cool visualizations. We'll be using the stock price of Google from Yahoo Finance, but feel free to use any stock data that you like.
Implementation
I have used Colab to implement this code to make the visualizations easier; you can use your preferred method. We'll start off by importing the necessary libraries. After you've downloaded your .csv file from Yahoo Finance or your source, load the data using pandas. The info of the dataframe shows something like this:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3797 entries, 0 to 3796
Data columns (total 7 columns):
Date         3797 non-null object
Open         3797 non-null float64
High         3797 non-null float64
Low          3797 non-null float64
Close        3797 non-null float64
Adj Close    3797 non-null float64
Volume       3797 non-null int64
dtypes: float64(5), int64(1), object(1)
memory usage: 207.7+ KB
For this tutorial we require only the Date and Close columns; everything else can be dropped. Before we do the training and predictions, let's see what the data looks like. For all the visualizations, I'm using the Plotly Python library. Why Plotly? Because it's simply the best graphing library and it can produce some good-looking graphs. With Plotly, we can define a trace and the layout, and it does everything else.
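The loading and column-dropping steps might look like the sketch below. Note the inline DataFrame stands in for a call such as pd.read_csv("GOOG.csv") (the filename is an assumption) so the snippet is self-contained, and the Plotly call is left as a comment:

```python
import pandas as pd

# Stand-in for pd.read_csv("GOOG.csv") so the sketch is self-contained;
# with a real download, replace this literal frame with the read_csv call.
df = pd.DataFrame({
    "Date": ["2019-01-02", "2019-01-03", "2019-01-04"],
    "Open": [1016.6, 1041.0, 1032.6],
    "Close": [1045.9, 1016.1, 1070.7],
    "Volume": [1532600, 1841100, 2093900],
})

# Keep only the columns the tutorial needs.
df = df[["Date", "Close"]]
df["Date"] = pd.to_datetime(df["Date"])

# Plot with Plotly (one trace plus a layout):
# import plotly.graph_objects as go
# go.Figure(go.Scatter(x=df["Date"], y=df["Close"])).show()
```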
The graph has been oscillating since 2018 and the sequence is not smooth… Moving on.
Data Preprocessing
For our analysis, let's train the model on the first 80% of the data and test it on the remaining 20%. Before we train, we need to make some major modifications to our data. Remember, our data is still a sequence, a list of numbers. The neural network is trained as a supervised model, so we need to convert the data from a sequence to supervised
data 😱. Let me explain: training a neural network, or any machine learning model, requires the data to be in a {<features>, <target>} format, so we need to convert the given data into that format. Here we introduce the concept of a look back. Look back is simply the number of previous days' data to use to predict the value for the next day. For example, say the look back is 2; then in order to predict the stock price for tomorrow, we need the stock prices of today and yesterday. Coming back to the format: for a given day's value x(t), the features are the values x(t-1), x(t-2), …, x(t-n), where n is the look back. So if our data is like this:
[2, 3, 4, 5, 4, 6, 7, 6, 8, 9]
the required data format (n=3) would be this:
[2, 3, 4] -> [5]
[3, 4, 5] -> [4]
[4, 5, 4] -> [6]
[5, 4, 6] -> [7]
[4, 6, 7] -> [6]
[6, 7, 6] -> [8]
[7, 6, 8] -> [9]
Phew 😓. Luckily, there is a module in Keras that does exactly this: TimeseriesGenerator. Please look up the documentation for more info. I've set look_back to 15, but you can play around with that value.
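As a sanity check, here is the same windowing spelled out in plain Python and run on the toy sequence above (to_supervised is a hypothetical helper name; in the actual pipeline, TimeseriesGenerator does this for you):

```python
# Plain-Python equivalent of the windowing that TimeseriesGenerator performs:
# each sample is `look_back` consecutive values, the target is the next value.
def to_supervised(sequence, look_back):
    features, targets = [], []
    for i in range(len(sequence) - look_back):
        features.append(sequence[i:i + look_back])
        targets.append(sequence[i + look_back])
    return features, targets

# The toy sequence from above with a look back of 3:
X, y = to_supervised([2, 3, 4, 5, 4, 6, 7, 6, 8, 9], look_back=3)
print(X[0], "->", y[0])  # [2, 3, 4] -> 5
print(len(X))            # 7 samples, matching the seven pairs listed above
```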
Neural Network
Now that our data is ready, we can move on to creating and training our network: a simple architecture of LSTM units trained using the Adam optimizer and a mean squared error loss for 25 epochs. Note that instead of using model.fit(), we use model.fit_generator(), because we have created a data generator. To learn more about LSTM networks, see this awesome blog post.
Prediction
Now that we have completed training, let us see if the network performed well. We can test the model on the testing data and see whether the predictions and the actual values overlap. Rather than computing the loss between predicted and actual values, we can plot them. From the graph, we can see that the predictions and the actual values (ground truth) somewhat overlap, but if you zoom in, the fit is not perfect. We should expect this; some error is inevitable when predicting.
Forecasting
Our testing shows the model is somewhat good, so we can move on to predicting the future, or forecasting. Foreshadowing: since we are attempting to predict
the future, there will be a great amount of uncertainty in the prediction. Predicting the future is easy… To predict tomorrow's value, feed the past n (look_back) days' values into the model, and we get tomorrow's value as output. To get the day after tomorrow's value, feed in the past n-1 days' values along with tomorrow's predicted value, and the model outputs the day after tomorrow's value. Forecasting for a very long duration is not feasible, so let's forecast one month of stock prices. Now, plotting the future values: predicting the future is easy… is it? When predicting the future, there is a good possibility that the model's output is uncertain to a great extent, because the model's output is fed back into it as input. This causes the model's noise and uncertainty to be repeated and amplified. But still, we have created a model that gives us the trend of the graph and the range of values that might occur in the future.
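The feedback loop described above can be sketched like this. Here predict_next stands in for the trained model's predict call, and the toy linear predictor exists only to make the sketch runnable:

```python
# Recursive forecasting: each prediction is appended to the window and fed
# back in as input for the next step. `predict_next` stands in for the
# trained LSTM's model.predict() call.
def forecast(history, predict_next, steps, look_back=3):
    window = list(history)
    out = []
    for _ in range(steps):
        nxt = predict_next(window[-look_back:])  # last look_back values
        out.append(nxt)
        window.append(nxt)  # feed the prediction back in: noise compounds here
    return out

# Toy stand-in predictor: continues the local linear trend.
linear_step = lambda w: w[-1] + (w[-1] - w[-2])
print(forecast([1, 2, 3], linear_step, steps=3))  # [4, 5, 6]
```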
Conclusion
We have created a rudimentary model that is able to forecast to a certain extent. Even though the model is not perfect, we have one that approximates the past data pretty well. But for new data, we would require more parameter tuning. If there exists any problem
Integral, Derivatives, Calculus, Study, Math.
Integration often comes up when finding the average of a continuous variable. This concept is useful on its own, but it can also help us explain why integrals and derivatives are inverses of each other. Let's say we have the graph of sin(x) and we are looking at it between 0 and pi, which is half of its period. What is the average height of the graph on that interval? Before we get into answering this question, you might be wondering why this is even useful at all. Well, sine waves are actually used to model many phenomena, such as the number of hours the sun is up per day as a function of the day of the year. This can be helpful when, let's say, our question is: what is the average effectiveness of solar panels in summer months vs. winter months?
Source: https://www.youtube.com/watch?v=FnJqaIESC2s&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x&index=9
Here we want to know the average value of the sine function over half of its period. For this article we won't worry about the transformations of the sine, but in essence it's the same. It's a weird thing to think about, though… the average of a continuous thing? With averages we often think about a finite set of numbers, where we can add them all up and divide by how many there are. There are infinitely many values of sin(x) between 0 and pi; we can't just add them all up and divide by infinity. This situation actually comes up a lot in math, where you want to add infinitely many values associated with a continuum. The key for these problems is to use an integral somehow!
A good first step is to approximate your situation with some finite sum. Let's say we are sampling a finite number of points in our range. Since it's a finite sample, you can find the average by just adding up the heights and dividing by the number of samples. The more points we sample, the closer the average of that sample should be to the actual average of the continuous variable. This feels somewhat related to taking the integral of sin(x) between 0 and pi, even if it may not be clear how the two ideas match up. For that integral, you also think of a sample of inputs on this continuum, but instead of adding the height sin(x) at each one and dividing by how many there are, you add up sin(x) times dx, where dx is the spacing between the samples. This means you are adding up little areas, not heights. Additionally, the integral is not technically this sum, but what the sum approaches as dx approaches 0.
Now we want to reframe our sum of the heights divided by the number of heights in terms of dx, the spacing between samples. Let's say the spacing between the points is 0.1, and we know they range from 0 to pi, so how many are there? You can take the length of the interval (pi) and divide it by the length of each space (0.1); an approximation of this number is 31. In general, pi/dx is the number of samples, which we can substitute into our expression from earlier. We can rearrange it to put dx on top and distribute it into the sum.
Source: https://www.youtube.com/watch?v=FnJqaIESC2s&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x&index=9
But what does it mean to distribute that dx up top? It means that the terms you are adding up will look like sin(x) times dx for the various inputs you are sampling. That means the numerator will be an integral expression.
Source: https://www.youtube.com/watch?v=FnJqaIESC2s&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x&index=9
This means that the more samples we have, the closer the average gets to the integral of sin(x) from 0 to pi, divided by the length of the interval, pi. In other words, the average height of the graph is the area divided by the width, which actually makes a lot of sense!
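The substitution described above can be written compactly (N is the number of samples, and N = pi/dx):

```latex
\text{avg} \approx \frac{1}{N}\sum_{i=1}^{N} \sin(x_i)
           = \frac{dx}{\pi}\sum_{i=1}^{N} \sin(x_i)
           \;\longrightarrow\; \frac{1}{\pi}\int_{0}^{\pi} \sin(x)\,dx
           \quad\text{as } dx \to 0 .
```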
Now let's actually compute the answer to this expression. In order to compute an integral, you need to find an antiderivative of the function inside the integral (which function has sin(x) as its derivative?). The derivative of cos(x) is -sin(x), so the derivative of -cos(x) is sin(x). One way to get an intuitive understanding of why this is true is that the slope of -cos(x) matches the height of the sin(x) graph. Evaluating the antiderivative at the bounds gives -cos(pi) - (-cos(0)) = 1 + 1 = 2.
Source: https://www.youtube.com/watch?v=FnJqaIESC2s&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x&index=9
Isn't it crazy that the area under the sin(x) graph turns out to be exactly 2? This means that the answer for the average height of the sin(x) function turns out to be 2/pi.
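A quick numerical sanity check of this result, using nothing beyond the standard library: sampling sin(x) at evenly spaced midpoints of [0, pi] and averaging the heights should give roughly 2/pi ≈ 0.6366.

```python
import math

# Approximate the average height of sin(x) on [0, pi] by finite sampling,
# exactly as in the argument above: sum the heights, divide by the count.
N = 10_000
dx = math.pi / N
samples = [math.sin((i + 0.5) * dx) for i in range(N)]  # midpoint of each space
average = sum(samples) / N

print(round(average, 4))      # ~0.6366
print(round(2 / math.pi, 4))  # 0.6366, the exact answer 2/pi
```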
This example does a great job of explaining why integrals and derivatives are inverses of each other, and why the area under one graph relates to the slope of another. Notice how finding the average value 2/pi came down to looking at the change in the antiderivative. Another way to think about that fraction is as the rise-over-run slope between the point of the antiderivative graph above 0 and the point of that graph above pi. This leaves us with the question: why would that slope represent an average value of sin(x) on that region? By definition, sin(x) is the derivative of the antiderivative graph; it gives the slope of -cos(x) at every point. This means another way to think about the average value of sin(x) is as the average slope over all tangent lines between 0 and pi. This makes it clear why the average slope of a graph over all of its points in a range should
https://www.youtube.com/watch?v=FnJqaIESC2s&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x&index=9
You can think of this as the area under the graph divided by its width, or rather the signed area under the graph, since any area below the x-axis is considered negative. Now let's go back and see how this relates to taking a finite average, where you add up many numbers and divide by how many there are. When you take some sample of points spaced out by dx, the number of samples is equal to the length of the interval divided by dx. This means that if you add up the values of f(x) at each sample and divide by the total number of samples, it is the same as adding up the product f(x) times dx and dividing by the width of the entire interval. The only difference between this expression and the integral is that the integral asks what happens as dx approaches 0, but that just corresponds to samples with more and more points. In the end, evaluating any integral comes down to finding the antiderivative of f(x). What we want is the change in the antiderivative between a and b, which can be thought of as the change in height of the new graph between the two bounds. This means that the solution to the average problem is the change in height of this new graph divided by the change in the x value between a and b. In other words, it is the slope of the antiderivative graph between the two endpoints. So why are antiderivatives the key to solving integrals? When you reframe the question of finding the average continuous value as finding the average slope of a bunch of tangent lines, it lets you see the answer by just comparing endpoints, rather than having to account for all of the points in between. If there ever is some concept you understand in a finite context, which involves
Data Science, Database, Python, String Methods, Data Analytics.
Is A. Mitchell McConnell, Jr. the Same Person as Mitch McConnell?
Names are among the most common ways of identifying people: you probably don't know your best friend's social security number, and you don't use your fingerprint to claim your Starbucks order. But names can be messy. Different people may share the same name, and the same person may go by a first or middle name, a nickname, initials, or a title or honorific such as "Doctor." For these reasons, names are seldom the best option for matching records between datasets. Sometimes, however, we may not have a unique identifier that can be used to match records across sources. Matching names from one source to another may be the best available option. And it's an important problem: thoughtful record linkage can add tremendous value to data by providing ways to interface with other public or proprietary data sources.
Data Can't Tell Us Whether Congressional Stock Trading is Corrupt: Sometimes Transparency Isn't Enough (innerjoin.bit.io)
We encountered this issue while working on our article on congressional stock trades: we might be able to tell that "Mitch McConnell" is the same person as "A. Mitchell Mcconnell, Jr." (especially when we know there's only one Mitch McConnell in Congress), but we can't just use a SQL join with those two names as keys. This analysis would have been impossible without a way to match the members of Congress across datasets, so we needed a way to match their names.
Figure 1: Matching names across datasets is not a trivial problem when those names are formatted in different ways and when there are many potential matches to choose from.
In this post, we describe how we used a metric called the Levenshtein distance to quickly identify the "best match" for each name between the two datasets, and to identify the subset of names that required some manual review. Want to try it yourself? We put together a Deepnote dashboard where you can enter two strings and receive a summary of several popular string similarity metrics, including the Levenshtein distance, for those two strings.
Name Matching between Congressional Datasets
In our project on congressional stock trading, we needed to obtain committee assignment data from the ProPublica Congress API and stock trading data from the Senate/House stock watchers sites. As there was no unique key for merging these datasets, we needed to match the members of Congress by name. This wasn't a trivial problem: the names had different formats between the two datasets, and one member of Congress could be represented by more than one name in the stock watchers data. Manually matching a list of 158 names from the stock watchers data against 549 distinct names from the ProPublica API would be tedious, time-consuming, and susceptible to errors, but we lacked unique and matching keys that could directly link the two datasets.
The Levenshtein Distance
Instead of manually matching the names, for each name in the stock watchers list we computed a metric called the Levenshtein distance (sometimes referred to as the edit distance) to each name in the ProPublica data. The Levenshtein distance between two strings is the number of single-character insertions, deletions, and substitutions — the number of "edits" — needed to transform one of the strings into the other. We normalized the Levenshtein distance between each pair of names to a score between zero and one, with one representing an identical match and zero representing no match.
Figure 2: Converting Harry to Harvey results in a Levenshtein distance of 2 (with a normalized similarity score of 0.67). It requires one replacement (r → v) followed by one insertion (e).
For example, matching "Ronald L Wyden" to "Ron Wyden" requires deleting five characters (including a space), "ald L", giving a Levenshtein distance of five. Normalizing this returns a similarity score of 0.64. The distance from "Ron Estes" to "Ron Wyden", on the other hand, requires replacing "Est" and "s" with "Wyd" and "n", for a distance of 4. Once normalized, we obtain a score of 0.56, leading us to (correctly) suppose that "Ron Wyden" is a better match to "Ronald L Wyden"
than to "Ron Estes." We found the best match — the name with the highest normalized similarity score — for each name, and manually reviewed these matches for errors, paying special attention to those with comparatively low similarity scores.
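The scoring can be spelled out in a short sketch. A real project would likely use a library such as jellyfish, and the helper names here (levenshtein, similarity, best_match) are hypothetical, but the logic is simple dynamic programming:

```python
# Levenshtein distance via the classic two-row dynamic program, plus the
# normalized similarity score described above (1.0 = identical, 0.0 = no match).
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def best_match(name, candidates):
    # The candidate with the highest normalized similarity score.
    return max(candidates, key=lambda c: similarity(name, c))

print(levenshtein("Harry", "Harvey"))                       # 2
print(round(similarity("Ronald L Wyden", "Ron Wyden"), 2))  # 0.64
print(round(similarity("Ron Estes", "Ron Wyden"), 2))       # 0.56
```

The printed values reproduce the examples in the text: Harry → Harvey costs two edits, and "Ron Wyden" scores higher against "Ronald L Wyden" than "Ron Estes" does.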
The Levenshtein Similarity Correctly Matched All Senator Names
Figure 3 shows the normalized Levenshtein similarities between a selection of the senators whose names were the most similar to "A. Mitchell Mcconnell, Jr." in the stock watchers data. McConnell's name, in this format, did not match particularly well to any of the names in the ProPublica data, but "Mitch McConnell" was still the best match. In fact, this approach successfully linked all of the senator names from the stock watchers dataset to their correct matches in the ProPublica data.
Figure 3: "A. Mitchell Mcconnell, Jr." did not match any of the names in the ProPublica dataset particularly well, but "Mitch McConnell" was the best match, with a normalized Levenshtein similarity of 0.54.
Some of the House of Representatives Names Required Manual Matching
The Levenshtein similarity approach was not able to correctly map all of the House stock watchers names to their ProPublica counterparts. For the majority of names, we quickly confirmed that the name with the greatest Levenshtein similarity was the correct match. We paid particularly close attention to names with low Levenshtein similarity scores, and manually mapped the incorrect matches to their correct ProPublica counterparts. Overall, this process took just a few minutes instead of the hours it likely would have taken to match each of the names by hand. Figure 4 shows several of the close matches for Representative Neal Patrick Dunn, one of the misclassified members of the House.
Figure 4: "Sean Patrick Maloney" was incorrectly identified as the match for "Neal Patrick Dunn" because of the perfect middle-name match. Even though the closest match wasn't correct, it was easy to look through the names ranked by Levenshtein similarity and locate the correct match quickly.
Try it Yourself
We
Data Science, Database, Python, String Methods, Data Analytics.
chose to use the Levenshtein distance because it could straightforwardly handle names with different lengths, provided a useful similarity measure even when no characters were in the exact same positions between the two strings, and was conveniently implemented in a number of software packages (we | medium | 1,325 |
Data Science, Database, Python, String Methods, Data Analytics.
recommend jellyfish in python, StringDistances.jl in Julia, and stringdist in R). There are plenty of other methods for approximate string matching that vary in, for example, how they weight matches at the beginning of a string or how stringent they are about matching characters that appear in | medium | 1,326 |
Data Science, Database, Python, String Methods, Data Analytics.
different positions between strings. We put together a Deepnote dashboard where you can enter two strings and receive a summary of several different popular string similarity metrics for those two strings. Compare More Strings with the Deepnote Dashboard. We’ve also made the lists of congress | medium | 1,327 |
member names, along with the mapping to their correct matches, available on bit.io. We correctly matched all of the Senate names and all but seven of the House names from the stock watcher data to their counterparts in the ProPublica datasets using the Levenshtein distance (a successful match rate | medium | 1,328 |
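The matching workflow described above can be sketched in a few lines of Python. Note that this is a minimal, dependency-free illustration, not the authors’ actual code: the article used packages such as jellyfish, while the levenshtein, similarity, and best_match helpers and the candidate list below are our own, and normalising the edit distance by the longer string’s length is an assumption (packages may normalise differently).

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn a into b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb (free if equal)
            ))
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Scale the distance to a 0..1 similarity, where 1.0 means identical.
    (Normalising by the longer string's length is our assumption here.)"""
    longest = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a, b) / longest

def best_match(name: str, candidates: list[str]) -> tuple[str, float]:
    """Pick the candidate with the greatest Levenshtein similarity to name."""
    return max(((c, similarity(name, c)) for c in candidates),
               key=lambda pair: pair[1])

# Reproducing the Figure 4 situation: the shared middle name pulls
# "Sean Patrick Maloney" ahead of the true match "Neal Dunn".
print(best_match("Neal Patrick Dunn",
                 ["Neal Dunn", "Sean Patrick Maloney", "Carolyn B. Maloney"]))
# → ('Sean Patrick Maloney', 0.6)
```

This is exactly why ranking every candidate by similarity, rather than trusting only the top hit, makes the manual review step quick: the correct “Neal Dunn” sits just below the false match in the ranked list.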
Self Improvement, Communication.
Listening. Listening. Listening. I know how important it is, but I also know how hard I sometimes find it to truly listen. I guess I’m not unique when I miss half of what the other person is saying because I’m so preoccupied with what I’m going to say in response. This realisation prompted me to read
The Art of Active Listening by Josh Gibson and Fynn Walker. These are my key takeaways from reading this book: What is active listening? The difference between “active listening” and “normal listening” was my first learning from reading “The Art of Active Listening”. The authors of the book, Josh | medium | 1,331 |
Gibson and Fynn Walker, make it pretty clear from the outset that there are only two communication states: actively listening, and not really listening. Gibson and Walker then go on to explain that active listening is the art of listening for meaning; active listening requires you to understand, | medium | 1,332 |
interpret, and evaluate what you’re being told. With active listening, your attention should be on the speaker. This means that whenever you feel an inner urge to say something, to respond, try to stop this urge and instead concentrate on what’s being said. Just to give you a personal example from | medium | 1,333 |
how this urge often manifests itself when I listen:

Speaker: “So we decided to do X, Y, Z. This felt like the best approach, because …”
Me (thinking): “Why did they decide to do X, Y, Z? That doesn’t make sense!” — thus completely ignoring the “because” part of the speaker’s statement.

It’s easy to see
from this example how people like me run the risk of missing critical bits of a conversation, purely because the focus is on the response instead of on listening actively. Importance of active listening In the book, Gibson and Walker explain why it’s so important to actively listen: Active | medium | 1,335 |
listening encourages people to open up. Active listening reduces the chance of misunderstandings. Active listening helps to resolve problems and conflicts. Active listening builds trust. To me, active listening is the key to empathy and relationship building. I liked Gibson and Walker’s simple
breakdown of human communication: “In simple terms, speaking is one person reaching out, and listening is another person accepting and taking hold. Together, they form communication, and this is the basis of all human relationships.” 7 common barriers to active listening Learning about the seven | medium | 1,337 |
common barriers to active listening was my biggest takeaway from “The Art of Active Listening”. In the book, Gibson and Walker point out the typical barriers that most of us deal with when listening: Your ignorance and delusion — The first barrier to active listening is simply not realising that | medium | 1,338 |
listening isn’t taking place. Gibson and Walker make the point that most of us can get through life perfectly well without developing our listening skills, deluding ourselves that listening just involves allowing another person to speak in our presence. Your reluctance — When you actively listen to | medium | 1,339 |
another person, it may be that you become involved in their situation in some way. There might be instances where you’re reluctant to get involved and as a result fail to lend a sympathetic and understanding ear. Your bias and prejudice — Your personal interpretation of what you’re hearing may | medium | 1,340 |
cause you to respond negatively to the speaker. You either assume that you know the situation because you’ve had a similar experience in the past or you allow your preconceptions to colour the way you respond. Your lack of interest — You may simply not be interested in what the speaker is saying. | medium | 1,341 |
We all know that this can happen when you feel the conversation topic is uninspiring. Your opinion of the speaker — Your opinion of the speaker, as a person, may influence the extent to which you’re happy to pay attention and give them your time. When you don’t like the speaker,
this is likely to affect your desire to listen to the speaker. I’ve also noticed how in certain places, the status of the speaker has a big influence on whether he or she is being listened to. In these places, the CEO tends to be listened to automatically, whereas ‘people of lower rank’ might | medium | 1,343 |