Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. missing data, classification difficulties, and erroneous or misreported data, among others. On top of these issues, regulatory requirements require anonymity to be preserved when analysing the dataset. In 1993 Donald Rubin, author of the book Statistical Analysis with Missing Data, had the original idea of
medium
8,308
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. fully synthetic data for privacy-preserving statistical analysis. He originally designed this to synthesize the Census long form responses for the short form households. He then released samples that did not include any actual long form records - in this way he preserved the anonymity of the households.
medium
8,309
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. Later that year, the idea of partially synthetic data was introduced by J. A. Little. He used this idea to synthesize the sensitive values on the public use file. [1] So, what is synthetic data anyway, and how does it differ from production data? Production data is "information that is
medium
8,310
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. persistently stored and used by professionals to conduct business processes." Meanwhile, synthetic data is "any production data applicable to a given situation that is not obtained by direct measurement". In simple terms, synthetic data is data generated by computers under certain rules. Data
medium
8,311
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. confidentiality Synthetic data is used in a variety of fields as a filter for information that would otherwise compromise the confidentiality of particular aspects of the data. Synthetic data protects the privacy of users and can also be useful for testing in lower environments. A usual practice is to
medium
8,312
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. refresh databases from Production into lower environments. The main challenge is to preserve anonymity by synthesizing the data. In many cases this process needs to be run frequently when creating new environments or starting a new round of testing. Data retention Business data stored to conduct business activity
medium
8,313
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. must not be kept longer than necessary and must be disposed of appropriately. One of the techniques is to de-identify data or synthesize it. So, why de-identify data? De-identification means that a person’s identity is no longer apparent or cannot be reasonably ascertained from the
medium
8,314
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. information or data and it helps to meet Privacy Act obligations while building trust in your data governance practices. De-identification involves two steps. The first is the removal of direct identifiers. The second is taking one or both of the following additional steps: the removal or
medium
8,315
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. alteration of other information that could potentially be used to re-identify an individual, and/or the use of controls and safeguards in the data access environment to prevent re-identification. And, how do you create synthetic data? There are different techniques to synthesize data for different
medium
8,316
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. cases.
SMOTE: Synthetic Minority Over-sampling Technique. This is useful if your dataset is incomplete or imbalanced.
ADASYN: Adaptive Synthetic sampling method. Similar to SMOTE, but this method adapts to the lack of data or of well-known categories within the data.
Data Augmentation. In
medium
8,317
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. this technique, we change existing datasets to have more cases. This is especially useful for training Machine Learning models.
Variational Auto Encoder. Encoding is about converting data into another form. In this technique, data will be converted into codes based on a certain distribution.
Other
medium
8,318
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. usages. Machine learning The use of synthetic data to train machine learning models is rapidly increasing. Some benefits are:
After the initial data generation iterations, it becomes easier to generate new synthetic datasets
Completing categories without synthetic sampling is almost impossible by hand
Perfect
medium
8,319
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. substitute for sensitive datasets Conclusion Some years ago, Big Data was the biggest trend. Nowadays, we know that accumulating plenty of data comes with its own risks. Bigger dataset bounties for hackers are one of the Big Data trade-offs to consider. Achieving a balance of data utility
medium
8,320
Machine Learning, Data Science, Synthetic Data, Data Architecture, Data. versus compliance is challenging. The more we squeeze the data, the more compliance challenges we get. Synthetic data is a way to achieve this balance. It is another solution in the tool kit to mitigate the risk of data breaches for traditional datasets and to achieve data augmentation to train machine
medium
8,321
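To make the SMOTE technique described above concrete, here is a minimal, hedged sketch. It assumes scikit-learn and the imbalanced-learn library are available; the toy dataset and parameters are purely illustrative and not from the article.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced dataset: roughly 95% class 0, 5% class 1 (illustrative only)
X, y = make_classification(n_samples=1000, n_classes=2, weights=[0.95, 0.05],
                           n_features=10, n_informative=3, random_state=42)
print("before:", Counter(y))

# SMOTE interpolates between existing minority samples to create synthetic ones
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after:", Counter(y_res))   # classes now balanced with synthetic minority rows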
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. Raphaela Sapire providing insight into the venture-capital funding environment for Deep Learning start-ups As detailed by my notes in our study group’s GitHub repo here, last month we convened to wrap up our coverage of Stanford’s CS231n. This course, led by Fei-Fei Li, but primarily taught in 2016
medium
8,323
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. by Andrej Karpathy and Justin Johnson, focused on the Deep Learning algorithms that enable contemporary machine-vision applications like self-driving cars and the face recognition tools now commonplace in, for example, Apple’s operating systems and Facebook. The most amusing use of these approaches
medium
8,324
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. I’ve come across yet is the “Quick, Draw!” game —you can play it for free here. Katya Vasilaky explicating her research into L2 regularisation This was our third of three sessions covering CS231n, with our attention turning to the final few lectures of the course. We primarily explored: the use of
medium
8,325
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. convolutional neural networks for motion (i.e., video) categorisation and description
unsupervised learning techniques facilitated by Deep Learning algorithms
autoencoders (i.e., the traditional variant once commonly used to generate features for supervised models, as well as the variational
medium
8,326
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. variety which crosses deep learning with Bayesian statistics to generate samples of, say, images)
Restricted Boltzmann Machines: greedy autoencoders trained one layer at a time due to the processing constraints of the mid-2000s
Generative Adversarial Networks: popularised by Ian Goodfellow at NIPS
medium
8,327
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. in 2014 and since making vast technological leaps year-over-year, GANs use two neural networks — one to generate ersatz samples and the other to evaluate their similarity to real samples — to create convincing simulations of images Jon Krohn prattling on about Generative Adversarial Networks In
medium
8,328
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. addition to wrapping up CS231n, we were delighted to hear presentations from two of our study group members: Raphaela Sapire on her experience as a venture capitalist at Blue Seed Capital, particularly her insight into the machine- and deep-learning start-up market (slides here) Katya Vasilaky on
medium
8,329
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. her research into L2 Regularization, the popular method to avoid overfitting in a wide range of models, including the deep-learning variety (slides here) Our “Group Chat” comes in an analogue format Since this February meeting, we’ve been delving into another 2016 Stanford course, this one Richard
medium
8,330
Machine Learning, Artificial Intelligence, Machine Vision, Venture Capital, Entrepreneurship. Socher’s CS224d on Deep Learning for Natural Language Processing. I’ll have two blog posts up on that — including comprehensive notes — in the next week(-ish). With an increasing emphasis on Deep Learning, I provide data science resources here.
medium
8,331
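The GAN description above (a generator producing ersatz samples, a discriminator judging them against real samples) can be made concrete with a small, hedged PyTorch sketch. The toy 1-D data, network sizes, and hyperparameters below are invented for illustration and are not from the course or this post.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 3 + torch.randn(64, 1)                  # toy "real" samples ~ N(3, 1)
    fake = generator(torch.randn(64, latent_dim))  # ersatz samples from the generator
    # Discriminator step: score real samples as 1 and fake samples as 0
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator step: try to make the discriminator score fake samples as 1
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
After enough steps, samples drawn from generator(torch.randn(n, latent_dim)) should start to resemble the toy N(3, 1) data, which is the adversarial dynamic the post describes at image scale.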
. The Computer Security Threat From Ultrasonic Networks Computer speakers can broadcast at frequencies nobody can hear. Now security experts have shown how malicious attackers can exploit this to circumvent even the best-laid security measures. It’s easy to imagine that computer security experts have a
medium
8,332
. Institute for Communication, Information Processing and Ergonomics in Germany, reveal an entirely new way to attack computer networks and steal information without anybody knowing. The new medium of attack is sound. And these guys have already created and tested a covert communications network that
medium
8,334
. ordinary corridor in their labs. The transmissions generally have to be along lines of sight and so aren’t perfect. The transmissions can be blocked by furniture, doors and so on. Hanspach and Goetz say that even people walking around the office can block the line of sight connection and so have an
medium
8,338
. logging app which monitors the keystrokes on the laptop and encodes this information for acoustic transmission. The experiment to test this covert communications system was remarkably successful. They ‘infected’ five laptops which they placed in various rooms adjoining the corridor in their lab and
medium
8,341
. Hanspach and Goetz. This work should allow computer security experts to plug this gap and prevent malicious attackers from ever exploiting this kind of channel. Unless, of course, the attackers are doing it already. Ref: arxiv.org/abs/1406.1213 : On Covert Acoustical Mesh Networks in Air Follow the
medium
8,348
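For readers curious what near-ultrasonic transmission means in practice, here is a purely illustrative Python sketch, not the researchers' implementation: encoding bits as short tones that ordinary speakers can emit but most people cannot hear. The sample rate, bit duration, and carrier frequencies are assumptions.
import numpy as np

SAMPLE_RATE = 48000            # a common sound-card sample rate
BIT_DURATION = 0.05            # seconds of tone per bit (assumed)
FREQ_0, FREQ_1 = 18500, 19500  # assumed near-ultrasonic carrier frequencies

def encode_bits(bits):
    """Binary FSK: one short, barely audible tone per bit."""
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (FREQ_1 if b else FREQ_0) * t) for b in bits]
    return np.concatenate(tones)

waveform = encode_bits([1, 0, 1, 1, 0])
print(waveform.shape)  # 5 bits x 2400 samples per bit = (12000,)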
Monte Carlo, Python, Jupyter Notebook. The first time I came across Monte Carlo simulation was during a Six Sigma training and it was used to predict the outcome of a transfer function in a tolerance analysis. It was in an Excel based tool but the principle was fairly easy to understand: assign a distribution to variables in a function
medium
8,350
Monte Carlo, Python, Jupyter Notebook. and randomly select values from those distributions and record the output of the function. Repeat that 10 000 times and you will get a distribution of the resulting function and can debate the tolerances for that outcome. As I keep on improving my Python skills, I have been wondering how
medium
8,351
Monte Carlo, Python, Jupyter Notebook. to code it. I came across this article and decided to adapt it to my understanding: How to Use Monte Carlo Simulation to Help Decision Making (Using Monte Carlo Simulation to Make Real Life Decisions, towardsdatascience.com) Setting up the Jupyter notebook I have decided to use a Jupyter notebook to
medium
8,352
Monte Carlo, Python, Jupyter Notebook. create my code for Monte Carlo simulation. The first step is to import all the required libraries:
import random
import matplotlib.pyplot as plt
import seaborn as sns
Transfer function definition For my purpose I have selected a very simple function given by the following equation: y = 20A + 5B The
medium
8,353
Monte Carlo, Python, Jupyter Notebook. implementation in Python:
# transfer function definition
def transfer_function(A, B):
    return 20*A + 5*B
Monte Carlo function definition Now it is time for the Monte Carlo function definition. As mentioned in the introduction, the idea is to randomly select a value out of the distributions assigned to
medium
8,354
Monte Carlo, Python, Jupyter Notebook. variables in the transfer function and then calculate the value of the function. For this example a normal distribution will be assigned to both variables A and B. Here is the piece of code:
# Monte Carlo simulation function
def Monte_Carlo(transfer_function, iterations):
    final_results = []
    for n in range(iterations):
        a = random.gauss(200, 33)
        b = random.gauss(50, 5)
        final_results.append(transfer_function(a, b))
    return final_results
medium
8,355
Monte Carlo, Python, Jupyter Notebook. The number of iterations is controlled via the variable iterations. For this example, variable a has been assigned a normal distribution with a mean value of 200 and a
medium
8,356
Monte Carlo, Python, Jupyter Notebook. standard deviation of 33. Variable b has been assigned a normal distribution with a mean value of 50 and a standard deviation of 5. Let’s check how the number of iterations influences the result of a Monte Carlo simulation. Running the simulation Once all is defined, the Monte Carlo simulation is one
medium
8,357
Monte Carlo, Python, Jupyter Notebook. line of code. In this example the simulation with 10 000 iterations has been saved in the variable k, and the next one with just 100 iterations has been saved in the variable l. Please see the code below:
# number of iterations
iterations = 10000
k = Monte_Carlo(transfer_function, iterations=iterations)
l = Monte_Carlo(transfer_function, iterations=100)
medium
8,358
Monte Carlo, Python, Jupyter Notebook. The comparison of the results is shown on the following plot: Result of the Monte Carlo simulation on a simple function. The code block for the plot:
fig = plt.figure(figsize=(10,5))
sns.distplot(k, label="10000 iter")
sns.distplot(l, label="100 iter")
plt.legend()
plt.title('Monte Carlo Simulation');
medium
8,359
Monte Carlo, Python, Jupyter Notebook. Other distributions For a Monte Carlo simulation it is often necessary to use other distributions besides the normal one. Here is a selection of the ones that are frequently used and available in the random
medium
8,360
Monte Carlo, Python, Jupyter Notebook. package:
# uniform distribution between a and b
random.uniform(a, b)
# triangular distribution
random.triangular(low=0.0, high=1.0)
# lognormal distribution
random.lognormvariate(mu, sigma)
# Weibull distribution
random.weibullvariate(alpha, beta)
Notebook code on Github The full notebook is
medium
8,361
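As a small follow-up sketch (not from the article), the simulated distribution can be summarized to support the tolerance discussion mentioned in the introduction, for example with a mean and a 3-sigma band. The snippet below recreates a result list like k above directly from the same distributions, so it is self-contained.
import random
import statistics

# 10 000 evaluations of y = 20A + 5B with A ~ N(200, 33) and B ~ N(50, 5)
results = [20 * random.gauss(200, 33) + 5 * random.gauss(50, 5) for _ in range(10000)]

mean_y = statistics.mean(results)
stdev_y = statistics.stdev(results)
print(f"mean = {mean_y:.1f}, standard deviation = {stdev_y:.1f}")
# A +/- 3 sigma band, the kind of limits discussed in a tolerance analysis:
print(f"3-sigma band: {mean_y - 3 * stdev_y:.1f} .. {mean_y + 3 * stdev_y:.1f}")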
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. In personal mobility, we need a revolution similar to the one that transformed and propelled personal communication. The question is how to combine, to fuse, what is inevitable (reducing a car’s eco-footprint) with what’s pleasurable/practical. Former Apple CEO and co-founder Steve Jobs used to say: “design is not
medium
8,363
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. just what it looks like, it’s how it works”. Can we learn from him? So far, thousands of experts, journalists, bloggers etc. have been writing about tackling our favorite mode of personal transport, the automobile, for the sake of solving the major challenges that lie ahead of humanity. So far, no
medium
8,364
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. one bothered to address what an alternative would be (how it might look and function) that is able to lure, to start with, all those who crave a more sustainable way of getting from A to B. Here is mine. Car becomes a household’s Nr 1 electrical ‘appliance’ There’s a difference between
medium
8,365
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. selling cars as big as possible because they yield more profits, and being a provider of efficient personal mobility, which is a matter of how we utilize public space, energy, and materials. For starters, the bigger the car, the bigger the battery.
medium
8,366
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. https://www.wired.com/story/the-earth-is-begging-you-to-accept-smaller-ev-batteries/ The car needs a reformat. Climate Change and environmental concerns should lead us to look for less wasteful light electric vehicles (micro-cars) — it is a promising market that can be enlarged by introducing a
medium
8,367
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. vehicle that’s a ‘best of both’. Car-like safety and comfort for 3 passengers PLUS the economy-agility-fun of a trike makes for a whole new proposition in the car market. Bonus is driverless! *comfort also a matter of wheelbase | average ride-hail occupancy between 1.2 and 1.4 person. The thing
medium
8,368
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. with downsizing: too small a car and people may opt for a big car next to the tiny one. At the other end of the spectrum, the prediction is that the SUV will become increasingly contested in major cities. For Whom? 1 in 900 suffices… The challenge is to reduce vehicle size and mass, but without bringing a
medium
8,369
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. ‘SMALL CAR’. It may turn out to be a new industry standard. Worldwide car sales annually are around 80 million; a Next-Gen Green Car can start with luring 1 in every 900 prospective car buyers annually, more than enough for a viable business case. What if one out of 900 turns out to be one out of
medium
8,370
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. 90? An interesting option is to produce the new-iSetta for OEMs that wish to lower their corporate emissions profile (zero-emissions credits used to be Tesla’s main income source) or exploit its smart-mobility capabilities. ride-hail providers (TNCs) < > (semi) Public Transport Lean Clean Green — How
medium
8,371
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. might it look and function? A pod-like build combines downsizing, lightweighting, low drag, and structural rigidity — the basis for innovative, simplified, less costly production (important to keep up) and fewer kWh needed. Bear in mind that Tesla’s Model 3 already packs a 500 kg battery. Try taking
medium
8,372
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. apart the concept depicted below, and it loses its cohesive strength. To begin with, have you ever realized that a car driver conventionally seated streetside is at greater risk in a front collision? Notice how the EV depicted below has a motorcycle-plus-sidecar layout with the sidecar seating integrated in
medium
8,373
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. a low-drag body. Explanatory links: driver seated curbside | passengers when broadsided | when rear-ended: rear-wheel(s) assembly provide unique safety measure The Big Bonus: Driverless $100 billion has been spent on Autonomous Vehicle (AV) development so far. Enough with the wishful tinkering.
medium
8,374
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. High time to focus on the physical thing that needs displacing, the car itself. Nuro deploys narrow-track robo pods for delivering goods. Why not extend the Nuro experience to personal mobility? If Driverless remains iffy… REFORMAT the autonomous vehicle. Why? The sleeker the AV: 1. the more nimble
medium
8,375
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. and agile it is
2. the more margin to evade
3. the easier to scan/image the car’s vicinity for the on-board AV tech and the person behind the controls
4. the less serious an accident (vehicle mass determines kinetic energy)
5. L4 autonomous, split-lane use of fwy.
Below: the person behind the
medium
8,376
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. controls is seated on the side of the road where it matters most; more visibility will boost awareness. Whatever the AV tech level, other road users should never fall victim to AVs. Notice the left car’s sloping front end (click).
medium
8,377
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. https://www.iihs.org/news/detail/new-study-suggests-todays-suvs-are-more-lethal-to-pedestrians-than-cars New Mini — Beetle — Fiat 500 — iSetta Only one is more than a successor to an iconic car. If car makers lack the imagination, others should step up to the plate. new-iSetta hinges upward for
medium
8,378
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. easy boarding & exiting. Did you know that former Apple CEO Steve Jobs was already contemplating an Apple car back in 2008? We are now 15 years later… https://www.businessinsider.com/steve-jobs-was-thinking-about-an-apple-car-in-2008-2015-11?international=true&r=US&IR=T We need a similar
medium
8,379
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. revolution… as what transformed personal communication. Personal Communication went from bulky, cumbersome, boxed in to lean, lightweight, mobile and energy-efficient… Personal Mobility went quite in the opposite direction: bulky, boxy, boxed in, energy-INefficient, unsafe with the SUV trend.
medium
8,380
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. The smartphone is a great example of how new tech influences product format, and in turn how a reformat opens up whole new possibilities. Its flat, rectangular shape with rounded edges grew into an industry standard. It made Apple the world’s richest company. Fact is, when Apple brought out the first
medium
8,381
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. iPhone in 2007, people generally weren’t whining about their mobile phones. And there it is: in Personal Mobility — a consumer market that is at least 20–50 times bigger than that of Personal Communication — there IS a lot to be desired, tackled, solved, made better! It’s time we start focusing on
medium
8,382
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. the transport mode itself. Even Urban Air Mobility (UAM) may come into the picture in a way not yet explored: Seamless 2D & 3D Personal Transit. Car and Climate You can’t have it both ways: put SUVs and trucks on batteries AND think that you actually contribute to curbing Climate Change. We already
medium
8,383
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. know the disastrous effects of SUVs on road safety and the environment (incl. more rubber dust because of those wider and bigger tires). Then again, too small a passenger vehicle and people may still opt for a ‘big bruiser’ next to the tiny one, which will only add to having more cars. Comfort and
medium
8,384
Electric Vehicles, Apple, Silicon Valley, Autonomous Vehicles, Climate Change. NCAP safety are also a serious issue with micro-cars, not to be underestimated. Best is to have a ‘best of both’ between bicycle and car. Seen Spielberg’s 2002 SF-film Minority Report? It may help to inspire whoever wants to work with me on realizing my Next-Gen Green Car vision… This outline was
medium
8,385
Eigenvectors, Data Science, Analytics, Principal Component, Graph. An understanding of how Eigenvectors and Eigenvalues underpin PCA Principal component analysis, or PCA, is a common technique used for dimension reduction. This means that when we have too many variables in the data, we need to reduce the number of variables to be used in future statistical models
medium
8,387
Eigenvectors, Data Science, Analytics, Principal Component, Graph. to get more accurate results and avoid the curse of dimensionality. In such scenarios, PCA is used to reduce the number of dimensions and conduct statistical analysis (there are other methods like factor analysis and LDA). The typical steps involved in PCA are as follows:
Standardization of continuous variables
medium
8,388
Eigenvectors, Data Science, Analytics, Principal Component, Graph. Generate the covariance matrix
Decompose the covariance matrix into eigenvectors
Use the eigenvalues to rank the Principal components
Create a feature vector to decide which principal components to keep
Recast the data along the principal component axes
Let’s try to understand the meaning and
medium
8,389
Eigenvectors, Data Science, Analytics, Principal Component, Graph. intuition of these steps with a bit of maths. Standardization of continuous variables Since the variables in our dataset can have different units and ranges, we standardize them so that each one of them contributes equally to the analysis. Additionally, PCA is sensitive to variance of variables.
medium
8,390
Eigenvectors, Data Science, Analytics, Principal Component, Graph. This is because if there are large differences between the ranges of initial variables, those variables with larger ranges will dominate over those with small ranges (For example, a variable that ranges between 0 and 100 will dominate over a variable that ranges between 0 and 1), which will lead to
medium
8,391
Eigenvectors, Data Science, Analytics, Principal Component, Graph. biased results. So we standardize or center the data points. Mathematically, this can be done by subtracting the mean from each value of each variable. Graphically, something like this happens: Plot 1: The original data is centered Generate covariance matrix Matrices are a good way to compactly
medium
8,392
Eigenvectors, Data Science, Analytics, Principal Component, Graph. write and work with dimensional data. Since our data points can be in multiple dimensions (two dimensions in the graph), we need matrix-based calculations. In PCA, a covariance matrix helps us to know if there is any relationship between variables. The covariance matrix is generated using the
medium
8,393
Eigenvectors, Data Science, Analytics, Principal Component, Graph. following formulas: Formula 2: Variance of x Formula 4: Covariance between x and y I have shown the formulas for both variance and covariance because a covariance matrix (as shown below) is a matrix which has different combinations: a) a variable having variance with itself (x with x) and b) a
medium
8,394
Eigenvectors, Data Science, Analytics, Principal Component, Graph. variable having covariance with other variables (x with y). So a 2x2 covariance matrix would look like this: Decompose the covariance matrix into eigenvectors Once the covariance matrix is generated, we will decompose it to find eigenvectors and eigenvalues. Why? Because eigenvalues and eigenvectors
medium
8,395
Eigenvectors, Data Science, Analytics, Principal Component, Graph. help in providing a summary of a large matrix when plotted on a graph and help us to find principal components that hold compressed information from all the variables. To understand eigenvectors and eigenvalues, let’s dive a bit into the concepts of matrices. We can transform a matrix into a new
medium
8,396
Eigenvectors, Data Science, Analytics, Principal Component, Graph. matrix by multiplying it with a vector (if a matrix has only one row or only one column, it is called a vector). The result is a transformed vector. Formula 5 So, when we have the covariance matrix, we multiply it with a vector in such a way that the new variables are uncorrelated and most of the
medium
8,397
Eigenvectors, Data Science, Analytics, Principal Component, Graph. information within the initial variables is squeezed or compressed into the first components. As you can see in the above example (Formula 5), a 2x2 matrix was transformed into a single-column matrix (a transformed vector). So, if you have 10-dimensional data, PCA will try to compress it into a
medium
8,398
Eigenvectors, Data Science, Analytics, Principal Component, Graph. smaller-dimensional matrix, putting the maximum possible information in the first component, then the remaining information in the second component, and so on, until we obtain a scree plot as below: Plot 2: Scree plot These transformed components (or eigenvectors) can then be plotted on the graph of
medium
8,399
Eigenvectors, Data Science, Analytics, Principal Component, Graph. centered data points which we saw in Plot 1. Plot 3: Eigenvectors e1 and e2 shown in dotted lines. Note they are perpendicular to each other Using eigenvalues to find the ranking of Principal components If you refer to Formula 5, you can see the eigenvalue there (i.e. 4). So, an eigenvalue is the scalar
medium
8,400
Eigenvectors, Data Science, Analytics, Principal Component, Graph. that is used to transform (stretch) an Eigenvector, and it also helps in ranking eigenvectors: the higher the Eigenvalue, the more important the principal component. This can be visualized using the scree plot shown above (Plot 2). Note: It can be said that Eigenvalues are used for the ranking of Principal
medium
8,401
Eigenvectors, Data Science, Analytics, Principal Component, Graph. components. The Eigenvalue ranks the Eigenvector, signifying its importance. The first-ranked eigenvector is the first PC. Geometrically speaking, principal components represent the lines that capture most of the information in the data, or the maximal amount of variance. The relationship between variance and
medium
8,402
Eigenvectors, Data Science, Analytics, Principal Component, Graph. information here is that the larger the variance carried by a line, the larger the dispersion of the data points along it, and the larger the dispersion along a line, the more information it has. To put all this simply, just think of principal components as new axes that provide the best angle
medium
8,403
Eigenvectors, Data Science, Analytics, Principal Component, Graph. to see and evaluate the data. The animation below tries to show that. The blue dots are our datapoints. PCA tries to find maximum variance of the projection (red lines) of blue dots. If we look carefully, the maximum variance occurs when the rotating black line meets the magenta lines, and that
medium
8,404
Eigenvectors, Data Science, Analytics, Principal Component, Graph. readers, is our PCA line. Create a feature vector to decide which principal components to keep Now that we have the eigenvectors and eigenvalues, we need to make a Feature vector or, simply put, a matrix of the eigenvectors that are important to us. Formula 6 So, the feature vector is simply a matrix that
medium
8,405
Eigenvectors, Data Science, Analytics, Principal Component, Graph. has as columns the eigenvectors of the components that we decide to keep. Recast the data along the principal component axes Recasting the data along the principal axes means that we reorient the original data solely in terms of the vectors we chose. This is done by multiplying the transposed version of the
medium
8,406
Eigenvectors, Data Science, Analytics, Principal Component, Graph. feature vector by the transposed version of the original dataset. Formula 7 Graphically speaking, our original data was along the x and y axes (see Plot 1). When we do PCA, we get eigenvectors which are new perpendicular axes in the graph (e1 and e2 in Plot 3). We reorient our data in terms of these
medium
8,407
Eigenvectors, Data Science, Analytics, Principal Component, Graph. new eigenvectors (Plot 3) and we get a new dataset. New points formed after reorienting the data points according to eigenvectors e1 and e2 Summary Graphical representation of all PCA steps References and Good reads: *OUCS-2002–12.pdf (otago.ac.nz) A Step-by-Step Explanation of Principal Component
medium
8,408
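To tie the steps above together, here is a minimal, hedged numpy sketch of the whole pipeline: center the data, build the covariance matrix, decompose it into eigenvectors and eigenvalues, rank the components, keep a feature vector, and recast the data. The toy dataset and the choice of two components are illustrative assumptions, not from the article.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[3.0, 1.0, 0.0],
                                          [1.0, 2.0, 0.0],
                                          [0.0, 0.0, 0.1]])   # toy 3-variable dataset

X_centered = X - X.mean(axis=0)                  # step 1: center (standardize) the data
cov = np.cov(X_centered, rowvar=False)           # step 2: covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # step 3: decompose (eigh suits symmetric matrices)

order = np.argsort(eigenvalues)[::-1]            # step 4: rank components by eigenvalue
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

feature_vector = eigenvectors[:, :2]             # step 5: keep the top two principal components
X_recast = X_centered @ feature_vector           # step 6: recast the data along the new axes
# (equivalent, up to a transpose, to multiplying the transposed feature vector
#  by the transposed dataset, as described above)
print(eigenvalues)
print(X_recast.shape)  # (100, 2)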
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. This is an abbreviated and updated version of a presentation from Ontotext’s Knowledge Graph Forum 2023 by Sumit Pal, Strategic Technology Director at Ontotext. The data ecosystem today is crowded with dazzling buzzwords, all fighting for investment dollars. A survey in 2021 found that a data
medium
8,410
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. company was being funded every 45 minutes. Data ecosystems have become jungles and, in spite of all the technology, data teams are struggling to create a modern data experience. This is a “Datastrophe“. Data managed correctly can turn assets into actionable insights; left unmanaged, it can be like
medium
8,411
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. uranium, causing massive damage to enterprises with data and security breaches being rampant. Drowning in Data, Thirsting for Context We’ve heard the saying, “Data, data everywhere. Not a drop of insight.” As more data accumulates, context gets diluted and lost. Most organizations today languish at
medium
8,412
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. the information layer of the DIKW pyramid, unable to make the leap to the knowledge and wisdom layer, where the real value of data is. Bad Data Tax One of the reasons for this is what we call “bad data tax”. Bad data tax is rampant in most organizations. Currently, every organization is blindly
medium
8,413
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. chasing the GenAI race, often forgetting that data quality and semantics are among the fundamentals of achieving AI success. Sadly, data quality is losing to data quantity, resulting in “Infobesity”. Any enterprise CEO really ought to be able to ask a question that involves connecting data across
medium
8,414
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. the organization, be able to run a company effectively, and especially to be able to respond to unexpected events. Most organizations are missing this ability to connect all the data together. Sir Tim Berners-Lee Challenges in the Enterprise The mind map below shows some of the major pain points in
medium
8,415
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. mid- to big-sized organizations. In most enterprises data teams lack a data map and data asset inventory and are often unaware of data that exists across the organization, its associated profile, quality and associated metadata. Teams can’t access data to build their business use cases. Data
medium
8,416
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. duplication and data copying across business silos prevent organizations from linking this data across different systems to get an integrated view. Most do manual reconciliations, which result in downstream challenges and reporting errors. Data Lakes, Data Catalogs, and Findability
medium
8,417
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. Organizations approach data lakes as cheap storage. They move data to data lakes, creating another copy — the mantra being: "Let's move the data to a data lake and then we will figure out what to do with it". This results in a huge findability challenge. A survey from McKinsey a few years back found that
medium
8,418
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. data personas in most organizations spend 30% of their time finding data. The proliferation of data catalogs across vendors creates metadata silos and does little to connect the data semantically. Most organizations take a myopic view of their data ecosystems and think of transactional and
medium
8,419
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. analytical as isolated systems. Additionally, there’s a lack of semantics for the data. In order for data to be useful, it needs semantics and not more tools and data. If organizations want to have a competitive advantage, their data needs to be semantically aware and semantically connected. The
medium
8,420
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. problem is not the data silos, but the disconnect they cause. As data proliferates it loses consistency, morphs its meaning, and leads to downstream challenges. Enough about challenges and problems, let’s see how one can solve these challenges. Knowledge graphs and semantic metadata Knowledge
medium
8,421
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. graphs (KGs) are the key to:
Advanced Data Architectures & Models like Data Fabric and Data Mesh
Unified Data Access
Semantic Data Integration
These fundamental capabilities of KGs enable them to bridge the chasm between information and knowledge in the DIKW pyramid. They serve as a hub for data, metadata,
medium
8,422
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. and content, thus making existing data interlinked and contextualized. Today’s ecosystem doesn’t need plain metadata, it needs semantic metadata. For example, a product data tag is basic metadata. Product tags have barcodes, with numbers and symbols. However, this is undecipherable until connected
medium
8,423
Data Fabric, Data Mesh, Knowledge Graph, Data Management, Knowledge Management. to reference data that makes it machine-readable and contextually-aware. Semantic metadata opens the door to interconnecting data meaningfully, forging new experiences for data exploration and discovery, avoiding ambiguity, and improving findability. New Approaches to Data Management Over the last
medium
8,424
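As a hedged illustration of the product-tag example above, here is a small Python sketch using the rdflib library. The namespace, URIs, and properties are invented for illustration and are not from the presentation; the point is only how a bare barcode becomes contextually aware once linked to reference data.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.com/ns#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# The bare barcode on its own is just a string of digits...
g.add((EX.product123, EX.barcode, Literal("4006381333931")))
# ...but linked to reference data it becomes machine-readable, semantic metadata:
g.add((EX.product123, RDF.type, EX.Product))
g.add((EX.product123, EX.manufacturedBy, EX.acmeCorp))
g.add((EX.acmeCorp, RDF.type, EX.Manufacturer))

# Query the connected context for the product
for s, p, o in g.triples((EX.product123, None, None)):
    print(s, p, o)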