AI, Android, Vertex AI, Image Classification, Google.

https://youtube.com/shorts/i-Hi2LO4zzM?feature=share

Switching The Example To Custom Trained Model

In order to get image classification for PEZ dispensers working, I needed to import the custom model I downloaded from Vertex AI into the Android project. There were just a few steps involved:

1. Import the pezimages2.tflite file into the assets directory (the same directory where the efficientnet-lite0.tflite file is located).
2. Change the value of modelName in ImageClassifierHelper to reference the custom model name (line 84 is now: val modelName = "pezimages2.tflite").
3. Make small visual changes so the app looks different from the default (optional).

Now that I made these simple changes, I could test my custom model.
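
For readers who want to try the same model swap outside of Android Studio, here is a minimal sketch using MediaPipe's image-classification task from Python. It is not the code from the Android sample above, and the model path, the test image name, and the max_results setting are my own assumptions for illustration.

```python
# Minimal sketch: classify one image with a custom .tflite model via MediaPipe Tasks (Python).
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Point MediaPipe at the custom model exported from Vertex AI (path is an assumption).
base_options = python.BaseOptions(model_asset_path="pezimages2.tflite")
options = vision.ImageClassifierOptions(base_options=base_options, max_results=3)
classifier = vision.ImageClassifier.create_from_options(options)

# Load a test photo of a dispenser (file name is a placeholder).
image = mp.Image.create_from_file("dispenser.jpg")
result = classifier.classify(image)

# Print the top categories and their confidence scores.
for category in result.classifications[0].categories:
    print(category.category_name, category.score)
```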
MediaPipe Example Using the Custom Model

Results when using the "pezimages2.tflite" model

After switching to the custom model, the results were really great. The classifier successfully identified the dispensers, and could even tell the difference between "MickeyB" and "MickeyC" (an important use case for this project). I was impressed with how quickly the classification worked, and with the very high level of certainty it reported.

https://youtube.com/shorts/vexNFc5Tj9Y?feature=share

Step 5: Where do I go next?

This proof-of-concept has sparked my curiosity. I want to try this with other edge platforms like iOS or Raspberry Pi and explore alternative libraries beyond MediaPipe. An alternative to MediaPipe is the Anaconda project, which looks very promising. I will need to add more PEZ dispensers to my model, and I would like to use AutoTrain from Hugging Face for this, so I can compare ease of use and cost.

Conclusion

Custom image classification has a wide range of uses, from the whimsical (PEZ!) to far more

Cost Function, Non Convex Optimization, Logistic Regression, Log Loss Function.

📚Chapter 5 - Logistic Regression

Introduction

Logistic regression is a widely used machine learning algorithm for binary classification problems. Whether it's predicting if an email is spam or determining if a student will pass or fail an exam, logistic regression proves to be a valuable tool in the data scientist's toolkit. Central to the success of logistic regression is the concept of a cost function, a crucial element that guides the model in its quest to find the optimal parameters for accurate predictions. In this tutorial we'll talk about how to fit the parameters theta for logistic regression. In particular, I'd like to define the optimization objective, or the cost function, that we'll use to fit the parameters.

Sections

1. The Basic Supervised Learning Problem
2. The Logistic Function
3. Cost Function in Linear Regression
4. Non-Convex Functions
5. Cost Function in Logistic Regression (The Log Loss Function)
6. Properties of the Logistic Regression Cost Function
7. Conclusion

Section 1 - The Basic Supervised Learning Problem

Before delving into the cost function, let's revisit the basics of logistic regression. Unlike linear regression, which predicts continuous values, logistic regression is employed for binary classification tasks. It predicts the probability that an instance belongs to a particular class, usually denoted as 0 or 1. Here is the supervised learning problem of fitting a logistic regression model. We have a training set of m training examples, and as usual each of our
examples is represented by a feature vector that is n+1 dimensional, and as usual we have x_0 = 1: our first feature, or our zeroth feature, is always equal to 1. Because this is a classification problem, our training set has the property that every label y is either 0 or 1. This is our hypothesis, and the parameters of the hypothesis are theta. The question I want to talk about is: given this training set, how do we choose, or how do we fit, the parameters theta?

Section 2 - The Logistic Function

The logistic function, often referred to as the sigmoid function, is at the heart of logistic regression. It transforms any real-valued number into a value between 0 and 1, making it suitable for probability estimation. The sigmoid function is mathematically expressed as shown below.
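
The formula itself is not reproduced in this copy of the article; the standard form, which matches the "1 over 1 plus e to the negative theta transpose x" wording used later in the non-convexity discussion, is:

$$
g(z) = \frac{1}{1 + e^{-z}}, \qquad h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}
$$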
Section 3 - Cost Function in Linear Regression

Back when we were developing the linear regression model, we used the following cost function. I've written it slightly differently: instead of 1/(2m), I've taken the 1/2 and put it inside the summation. Now, I want to use an alternative way of writing out this cost function: instead of writing out the squared error term, let's write cost(h(x), y), and define that term cost(h(x), y) to be one half of the squared error. So now we can see more clearly that the cost function is 1/m times the sum over my training set of this cost term. To simplify the equation a little more, it's convenient to get rid of the superscripts, so we just define cost(h(x), y) to be 1/2 of the squared error. The interpretation of this cost function is that it is the cost I want my learning algorithm to pay if it outputs the prediction h(x) and the actual label was y. And, no surprise, for linear regression the cost we define is 1/2 times the squared difference between the prediction and the actual value we observe for y. Now, this cost function worked fine for linear regression, but here we're interested in logistic regression. If we could minimize this cost function plugged into J here, that would work okay.
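
The slide formulas the text refers to are not reproduced in this copy; a reconstruction consistent with the description above is:

$$
J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2
= \frac{1}{m} \sum_{i=1}^{m} \mathrm{cost}\left(h_\theta(x^{(i)}), y^{(i)}\right),
\qquad
\mathrm{cost}(h_\theta(x), y) = \frac{1}{2}\left(h_\theta(x) - y\right)^2
$$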
But it turns out that if we use this particular cost function, it would be a non-convex function of the parameters theta.

Section 4 - Non-Convex Functions

Here's what I mean by non-convex. We have some cost function J(theta), and for logistic regression the hypothesis h has a nonlinearity: it is 1 over 1 plus e to the negative theta transpose x, which is a pretty complicated nonlinear function. If you take the sigmoid function, plug it into the squared cost, and then plot what J(theta) looks like, you find that J(theta) can have many local optima; the formal term for this is a non-convex function. And you can kind of tell that if you were to run gradient descent on this sort of function, it is not guaranteed to converge to the global minimum. In contrast, what we would like is a cost function J(theta) that is convex, a single bowl-shaped function, so that if we run gradient descent we are guaranteed to converge to the global minimum. The problem with using the squared cost function is that, because of the very nonlinear sigmoid function that appears in the middle, J(theta) ends up being non-convex. So what we would like to do instead is come up with a different cost function that is convex, so that we can apply an algorithm like gradient descent and be guaranteed to find the global minimum.
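
To make the non-convexity concrete, here is a small, self-contained numerical sketch; the tiny dataset and the parameter grid are invented for illustration. It sweeps a single parameter theta and checks, via second differences, that the squared-error cost bends downward somewhere (so it cannot be convex), while the log-loss cost introduced in the next section does not.

```python
# Sketch: compare the squared-error cost and the log-loss cost for a 1-parameter
# logistic model h(x) = sigmoid(theta * x) on a tiny, made-up dataset.
# A convex curve never has a negative second difference on a grid;
# the squared-error curve does, which is the non-convexity discussed above.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny illustrative dataset (made up for this sketch).
x = np.array([-3.0, -1.0, 0.5, 2.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

thetas = np.linspace(-6, 6, 601)

def squared_cost(theta):
    h = sigmoid(theta * x)
    return np.mean(0.5 * (h - y) ** 2)

def log_loss_cost(theta):
    h = np.clip(sigmoid(theta * x), 1e-12, 1 - 1e-12)
    return np.mean(-y * np.log(h) - (1 - y) * np.log(1 - h))

J_sq = np.array([squared_cost(t) for t in thetas])
J_log = np.array([log_loss_cost(t) for t in thetas])

# Second differences: a convex curve never bends downward (no negative values).
print("squared-error cost has concave regions:", np.any(np.diff(J_sq, 2) < -1e-12))
print("log-loss cost has concave regions:    ", np.any(np.diff(J_log, 2) < -1e-12))
```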
Section 5 - Cost Function in Logistic Regression (The Log Loss Function)

The cost function in logistic regression is designed to measure how well the model predicts the target variable compared to the actual values. The goal is to minimize this cost function to optimize the model's parameters. The most commonly used cost function for logistic regression is the log loss (or cross-entropy loss) function. Here's the cost function that we're going to use for logistic regression. We're going to say the cost, or the penalty that the algorithm pays if it outputs a value h(x), some number like 0.7, when the actual label turns out to be y, is minus log h(x) if y equals 1, and minus log(1 - h(x)) if y equals 0.
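
Written as a formula (a reconstruction of the slide the text describes):

$$
\mathrm{cost}(h_\theta(x), y) =
\begin{cases}
-\log\left(h_\theta(x)\right) & \text{if } y = 1 \\
-\log\left(1 - h_\theta(x)\right) & \text{if } y = 0
\end{cases}
$$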
This looks like a pretty complicated function, but let's plot it to gain some intuition about what it's doing. Let's start with the case y = 1. If y equals 1, then the cost function is -log h(x). Say the horizontal axis is h(x); we know a hypothesis outputs a value between 0 and 1, so h(x) varies between 0 and 1. If you plot what this cost function looks like, you find a curve that shoots up as h(x) approaches 0. One way to see why it looks like this is to plot log z with z on the horizontal axis: it approaches minus infinity as z approaches 0, with z here playing the role of h(x). Then minus log z is the same curve with the sign flipped, and since we're interested only in the range where z is between 0 and 1, we're left with just that part of the curve, which is what the curve on the left looks like.
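
Since the plots from the original lecture are not reproduced here, a few quick evaluations can serve the same purpose. This is a minimal sketch; the probe values 0.99, 0.5, and 0.01 are arbitrary.

```python
# Sketch: the per-example log-loss cost, evaluated at a few predictions for y = 1.
import math

def cost(h, y):
    """Log-loss cost for a single example with prediction h in (0, 1) and label y in {0, 1}."""
    return -math.log(h) if y == 1 else -math.log(1.0 - h)

for h in (0.99, 0.5, 0.01):
    print(f"y = 1, h(x) = {h:.2f} -> cost = {cost(h, 1):.3f}")
```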
Section 6 - Properties of the Logistic Regression Cost Function

Now this cost function has a few interesting and desirable properties. First, notice that if y equals 1 and h(x) equals 1, in other words, if the hypothesis predicts exactly h = 1 and y is exactly equal to what it predicted, then the cost is 0. That corresponds to the point at the bottom of the curve, and that is what we would like: if we correctly predict the output y, the cost is 0. But now notice that as h(x) approaches 0, the cost blows up and goes to infinity. This captures the intuition that if the hypothesis outputs 0, it is saying the chance that y equals 1 is 0. It's like going to our medical patient and saying, "The probability that you have a malignant tumor, the probability that y equals 1, is zero," that is, it is absolutely impossible that your tumor is malignant. But if the patient's tumor actually is malignant, so y equals 1 even after we told them it was impossible, then we penalize the learning algorithm with a very, very large cost, and that is captured by having the cost go to infinity when y equals 1 and h(x) approaches 0.

That covers y = 1; let's look at what the cost function looks like for y = 0. If y equals 0, the cost is -log(1 - h(x)). If you plot the function minus log(1 - z) for z between 0 and 1, you get a curve that blows up and goes to plus infinity as h(x) goes to 1. This captures the intuition that if the hypothesis predicted h(x) = 1 with certainty, with probability 1, it is saying y absolutely has to be 1; if y then turns out to be 0, it makes sense to make the learning algorithm pay a very large cost. Conversely, if h(x) equals 0 and y equals 0, then the hypothesis nailed it: it predicted y equals 0 and it turned out that y equals 0, so at this point the cost is 0.
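
A quick numerical check of these properties; the probe predictions are arbitrary, and 1e-9 stands in for "h(x) approaching 0 or 1".

```python
# Sketch: the per-example cost at confident-correct, confident-wrong, and correct predictions.
import math

def cost(h, y):
    # Log-loss for a single example; h is the predicted probability that y = 1.
    return -math.log(h) if y == 1 else -math.log(1.0 - h)

print(cost(1.0 - 1e-9, y=1))  # ~0: predicted y = 1 confidently and y was 1
print(cost(1e-9, y=1))        # ~20.7: said y = 1 was "impossible" but y was 1
print(cost(1.0 - 1e-9, y=0))  # ~20.7: predicted y = 1 with near certainty but y was 0
print(cost(1e-9, y=0))        # ~0: predicted y = 0 confidently and y was 0
```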
Conclusion

In this tutorial, we have defined the cost function for a single training example. The topic of convexity analysis is beyond the scope of this course, but it is possible to show that with this particular choice of cost function we get a convex optimization problem: the overall cost function J(theta) will be convex and free of local optima. In the next tutorial we're going to take these ideas of the cost function for a single training example, develop them further, and define the cost function for the entire training set; we'll also figure out a simpler way to write it than we have been using so far. Based on that, we'll work out gradient descent, and that will give us our logistic regression algorithm.

Please Follow and 👏 Clap for the story. Follow Coursesteach to see the latest updates on this story. If you want to learn more about these topics: Python, Machine Learning, Data Science, Statistics for Machine Learning, Linear Algebra for Machine Learning, Computer Vision, and Research, then log in and enroll in Coursesteach to get fantastic content in the data field. Stay tuned for our upcoming articles, because we research end to end and will explore specific topics related to Machine Learning in more detail! Remember, learning is a continuous process, so keep learning, keep creating, and keep sharing with others! 💻✌️

Note: if you are a Machine Learning expert and have good suggestions to improve this blog, please write comments and contribute. If you want more updates about Machine Learning and want to contribute, then follow and enroll in the following: 👉Course: Machine Learning (ML) 👉📚GitHub Repository 👉📝Notebook

Do you want to get into data science and AI and need help figuring out how? I can offer you research supervision and long-term career mentoring. Skype: themushtaq48, email: [email protected]

Contribution: We would love your help in making the Coursesteach community even better! If you want to contribute to some courses, or if you have any suggestions for improvement in any Coursesteach content, feel free to contact and follow. Together, let's make this the best AI learning community! 🚀 👉WhatsApp 👉Facebook 👉Github 👉LinkedIn 👉Youtube 👉Twitter

Source
1- Machine Learning — Andrew Ng

Programming, Matlab, Schools, Work, Computers.

I've always thought of myself as an engineer or a scientist instead of a programmer. I've been wary of coding interviews because I feel like I missed something by not having taken the first courses in C++ in college. However, my experience with programming has been storied and diverse. I haven't come across a programming problem too hard to tackle, aside from confidence.

MS-DOS

My first encounter with computers was using MS-DOS and learning how to navigate the command line. CD Space was my life. In fourth grade, my older brother started writing simple Basic scripts, usually with a lot of colors and a countdown to your hard drive being erased as a joke (nothing was erased). It's crazy to think people would actually erase the hard drive. I had a computer class in middle school, but it wasn't programming. Then in high school, I got a TI-86, the most powerful graphics calculator that they would allow in an exam. Most people had a TI-83, but for some reason I had an 86. I'm not sure why these graphics calculators were allowed for exams, because they were quite powerful. You could also make notes, so I'm not sure how many people cheated outright with them on every test. I was more interested in being able to play video games. The video games available seemed to be mostly low-quality ports of Game Boy games: Mario, Tetris, Zelda, MegaMan, and Drug Wars. Drug Wars was an intriguing game of risk, but once you figured out the trick, it was easy to beat. If I remember correctly, the trick was to travel to new boroughs as frequently as possible. Now, the content of the game was something else… I had a slight fear that a teacher could ask me to delete everything on my calculator, so I decided to write a simple program to make a fake settings menu where it could appear as if you reset your calculator. I started simple, but I ended up with a long program simulating all the menus that one can go through in the settings. I never had to use it, and I accidentally deleted it at some point, but it was my first major jaunt into programming.
My next programming adventure was short-lived, in Maple during freshman year of college. Maple tried to compete with Matlab, but ultimately, Matlab was just a better language.

Matlab

During my sophomore year, I took my one and only official programming class. It was Introduction to C for engineers, taught by a civil engineer. I'm not sure why we didn't take the first programming course for CS majors, but I regret that I didn't. I didn't like the class, and I didn't think I enjoyed coding at all. I didn't care for programming as a profession; it was simply a tool.

My ode to Matlab and Jet Color Scheme

Junior year was the year that changed everything. It's when I fell in love with image processing. I also fell in love with Matlab. I had to use it for the class, and I poured more hours into that class than any other class I had taken before. I taught myself a solid basis, and over the years I added to that. I pushed forth with Matlab because I believed the speed-up in development time outweighed its potentially slow runtime. I also minimized runtime by writing efficient code, and within a few years, the Java engine for Matlab also sped up.

Diversifying Languages

In grad school, I learned quite a bit about Linux, and I programmed in C to allow my rig to capture camera data. My main processing was in Matlab, and then I would run experiments using C and C++ on a distributed network. I learned a large chunk of C++ at my first job, where I had to write the production code for a few research projects I had done. When I left that job, I felt very confident in my C++ programming ability, but within a few months of not using C++, it disappeared from my brain. I also used a lot of Matlab for data analysis at my first job. At Apple, I worked primarily in Matlab and then a few Python scripts. I did a little firmware coding, which was eye-opening. The challenges of firmware were quite new to me, particularly the high quality standards, because firmware failures are catastrophic. After all of these years, I still love Matlab. To me, it is like a warm blanket, and I feel confident writing code. The biggest lesson I've learned is that all coding languages are just languages rooted in math and functionality. By understanding one coding language, you can more easily learn another. This is also why I've stayed close to Matlab: it is the foundation of my coding adventure, and it has allowed me to program in other languages with relative ease.

If you like, follow me on Twitter and YouTube, where I post videos of espresso shots on different machines and espresso-related stuff. You can also find me on LinkedIn. Further readings of mine: My coffee setup, Staccato Espresso: Leveling Up Espresso

Quantum Mechanics, Electronics, Physics, Technology, Engineer.

This feels like I'm writing a sci-fi movie and have to slip the word "quantum" in somehow, like "Oh, you wanna see the T-Rex in action? Lemme get the quantum dot accelerator ready." Seriously though, what are they? Quantum dots are basically metal crystals. Metal crystals so small, you can fit thousands on this full stop. They lie on the cusp of the quantum and the macro worlds, which leads to interesting properties. Quantum dots are also called artificial atoms, as they can absorb and release packets of energy just like actual atoms. Let's look at some of their properties/advantages.

Conductivity boost

All matter with a definite mass has wave characteristics, and the wavelength arising from this wave character increases as the mass decreases. Quantum materials are sized in that sweet spot where their size, in at least one dimension, is comparable to their de Broglie wavelength.
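
For reference, the de Broglie wavelength the paragraph refers to is given by the standard relation (not stated explicitly in the original):

$$
\lambda = \frac{h}{p} = \frac{h}{mv}
$$

where h is Planck's constant, m the mass, and v the velocity. A smaller mass means a longer wavelength, which is why nanoscale crystals can become comparable in size to their own wavelength.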
When this happens, it is called quantum confinement. In the case of a quantum dot, it is said to be dimensionless, or quantumly confined, in all three dimensions. Quantum confinement leads to unique benefits in conductivity: the electrons in the quantum dots are so mobile that they can move much faster and without resistance.

Frequency-current emission

Quantum confinement can also enhance the photoelectric emission of the quantum dot. Photoelectric emission refers to generating electrons by absorbing the energies of incoming photons (solar cells). Do note that this process can happen in reverse as well (LEDs). Quantum confinement amps this process up in the following ways: It helps define the quantized energy levels to a greater definition, which results in better control over the current or the frequency of the light emitted. The band gap energy is greatly reduced as the electrons are more mobile due to the confinement. This results in less energy wasted to cross the energy threshold, leading to much better efficiency.

Electromagnetic shielding

Now we venture into the world of espionage and secrecy. Ever heard of Faraday cages? Faraday cages are the brainchild of the brilliant British physicist Michael Faraday. They are basically a cube, or any shape with definite volume, made out of a conducting material. The physics suggests that whatever electromagnetic field is emitted inside the Faraday cage (due to mobile phones, electronic bugs, etc.) cannot travel outside the cage, as the fields are constrained to move only along the walls of the cage. As the Pope once said, "anything can be made better with quantum mechanics." Quantum dots are conductors and, as the Pope said, they can indeed make Faraday cages much better. We discussed earlier that we can control their operating frequency with their size. This means that we can choose a frequency range to block and fabricate the QD at the size which ensures that the desired frequency range does not get emitted. This results in a Faraday cage, in theory. I am actually conducting a research experiment on using quantum dots as a Faraday cage as part of my college course. I will start a series on this to keep you posted! I will also be starting a series on semiconductor physics, where I hope to explain quantum dots in much more detail. Hope to see you there! As

Cryptography, Cybersecurity.

Spotify [here] Apple [here]

When I finally look back on my career, one of the highlights will certainly be the opportunity to meet one of my cryptography heroes: Whitfield (Whit) Diffie. Overall, he is one of the greatest Computer Scientists ever, and — along with Marty Hellman — was one of the first to propose the usage of public key encryption and co-created the Diffie-Hellman (DH) key exchange method. The Diffie-Hellman method is still used in virtually every Web connection on the Internet, and has changed from using discrete log methods to elliptic curve methods. In 2015, Whitfield was also awarded the ACM Turing Award, which is the Nobel Prize equivalent in Computer Science.

The Father of Cryptography

Whitfield (Whit) was first exposed to cryptography at the age of 10 (5th Grade), when a teacher gave a talk for a day and a half. He got serious about cryptography through the development of DES (Data Encryption Standard), and Whit thought that the standard should have more bits to make it more secure. In the early 1970s, Larry Roberts — the creator of the Internet — started an investment in security for ARPANET. This started a major drive into finding methods that could protect the data that travelled over the public network. Larry was a great believer in investing in academic work, and this kick-started a drive toward network security, mainly focused on cryptography at the time. Through his interest in the DES method, Whit took a trip in 1974 to the IBM Yorktown Research Lab, hoping to meet the creator of the DES method: Horst Feistel. Unfortunately, Horst was not around at the time of the visit, but Whit was told that Marty Hellman at Stanford would be an interesting person for him to chat with. Whit then set up a short meeting with Marty at Stanford (in fact, just 30 minutes), where they discovered that they had shared interests. In fact, they got on so well that Marty invited Whit and Mary (Whit's wife) to dinner that evening. And so, Whit arrived at Stanford and started to investigate the encryption key distribution problem.
In four years, Whit and Marty discovered public key encryption. Whit was initially motivated by the IFF (Identification, Friend or Foe) radar system [here], where a plane could challenge another plane to identify itself by re-encrypting an encrypted message. The problem with this is that an enemy plane could simply play back the message and produce a valid encrypted message. The work further led to the IFF Mark XII method. From this, he understood that a weakness of digital systems would be the opportunity to copy digital signals (as with the IFF system). He thus spotted that you could perhaps recognize the solution to a problem without actually being able to solve it yourself. This could then be applied to negotiate keys with someone that you have never met before. And so, the discrete log method of exchanging keys was born. Around 1978, it is thought that a chat with David Chaum motivated him toward the creation of cryptocurrency. A great shining light in his world was his wife, Mary (Fisher), whose charm helped support Whit throughout his career.
The Diffie-Hellman method

The Diffie-Hellman (DH) method is perhaps one of the greatest inventions in Cybersecurity, and was created by Whitfield Diffie and Marty Hellman. With the DH method, Bob creates a random value (b) and Alice also creates a random value (a). Next, Bob computes B = g^b (mod p) and sends it to Alice. Alice computes A = g^a (mod p) and sends this to Bob. Bob raises the value of A to the power of b and takes (mod p), and Alice raises B to the power of a and takes (mod p). In the end, they will have the same shared value: g^{ab} (mod p). This can then be used to derive an encryption key that they can use for a secure tunnel (Figure 1). Overall, p is the large prime number, also known as the modulus.
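
A toy sketch of the exchange just described; the tiny prime, the generator, and the random exponents are illustrative only, since real DH uses very large primes (or elliptic curves).

```python
# Toy Diffie-Hellman exchange with deliberately small numbers (illustration only).
import secrets

p = 2087  # small prime standing in for the large prime modulus p
g = 5     # generator g

a = secrets.randbelow(p - 2) + 1  # Alice's private value a
b = secrets.randbelow(p - 2) + 1  # Bob's private value b

A = pow(g, a, p)  # Alice sends A = g^a mod p
B = pow(g, b, p)  # Bob sends   B = g^b mod p

# Each side raises the other's public value to its own private exponent.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob  # both equal g^(ab) mod p
print("shared secret:", shared_alice)
```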

Programming, Ble, Software Development.

The world of wireless connectivity is about to witness a major transformation. A Seattle-based company called Hubble Network has made history by successfully connecting a standard Bluetooth device on Earth to a satellite orbiting 600 km above the planet. As a company focused on IoT and BLE solutions, we at Sparkleo Technologies recognize the seismic shift this represents. This groundbreaking achievement shatters the boundaries of Bluetooth Low Energy (BLE), ushering in a new era where truly global connectivity for the Internet of Things (IoT) becomes a reality.

Why This Matters

Until now, most IoT devices relied on Wi-Fi or cellular networks for communication. These networks, while powerful, have inherent limitations. Wi-Fi suffers from restricted range, and cellular networks can be expensive, power-hungry, and unavailable in remote areas. Hubble Network's breakthrough offers a compelling alternative, allowing billions of BLE-enabled devices to communicate seamlessly across the globe, regardless of terrestrial infrastructure. Hubble Network's focus on BLE is strategically brilliant. Here's why it's a game-changer:

Affordability Unlocks Access: BLE chips are exceptionally cost-effective to manufacture and integrate into devices. Satellite-based connectivity suddenly becomes much more accessible to businesses of all sizes.

Champion of Energy Efficiency: BLE is known for its remarkably low power consumption. This is a tremendous advantage for sensors, trackers, or wearables operating in remote locations — long battery life is essential when recharging isn't simple.

A Ready-Made Market: The sheer number of existing smartphones, wearables, and sensors equipped with BLE is staggering. This eliminates a major adoption hurdle — vast numbers of devices are already compatible with this satellite-based solution.

How Does It Work?

Hubble Network's genius lies in a clever combination of software ingenuity and specialized hardware:

The Firmware

The truly remarkable feat is that standard BLE devices require only a firmware update to start 'talking' to satellites. This adaptability speaks volumes about the versatility of BLE technology and reduces the need for entirely new hardware.

Space-Age Antennas

Hubble Network's satellites boast specialized phased array antennas. These antennas act as signal amplifiers, capable of 'hearing' faint BLE transmissions from Earth and beaming data back down.

The Doppler Effect

Satellite communication comes with unique challenges. The Doppler effect, where radio frequencies are distorted due to the high speeds of orbiting satellites, could have been a showstopper. Hubble's engineers have developed sophisticated systems