Culture, Mental Health, Productivity, Relationships, Self Improvement.

1. Unleashing the Power of Reflection: Holding Up the Mirror to Our Progress

Reflection is much more than a mere act of self-contemplation; it serves as the catalyst for fostering a heightened sense of self-awareness. Reflection involves pausing, looking back, and evaluating your journey thus far. When was the last time you put a halt to your daily rush and spent a few quiet moments analyzing the path you’ve trodden, the landmarks you’ve passed, and the challenges you’ve surmounted? Let’s turn our eyes towards one of the world’s most revered personalities, Oprah Winfrey. Throughout her highly acclaimed career, Oprah has been a strong proponent of the power of self-reflection. This practice has been an integral part of her routine, allowing her to consistently monitor her progress, comprehend her experiences, and learn from her missteps. There is no doubt that her consistent self-inquiry has been a vital component in the grand scheme of her monumental success.

2. Embracing the Ability to Adapt: Shaping Your Path in Response to Life’s Twists and Turns

Adaptation can be viewed as the art of reshaping our path, informed by the valuable insights obtained through our reflective practices. It is about having the courage to realign our trajectory when we realize that our present course is not in sync with our desired destination. Do you possess the audacity to veer off your current path when you discern it’s leading you astray from your envisioned future? Take a moment to delve into the life of Arianna Huffington, the formidable co-founder of the Huffington Post. When she found herself teetering on the precipice of burnout, she mustered the courage to drastically change her outlook towards her work and her personal definition of success. Her willingness to alter her course in response to her life’s circumstances led her to conceive Thrive Global, a venture focused on mitigating the modern-day scourge of stress and burnout. This conscious shift serves as a powerful testament to the transformative power of adaptation.

3. The Harmony of Reflection and Adaptation: The Pulse of Continual Evolution

Reflection and adaptation are not isolated incidents or occasional practices; they are, in fact, two integral facets of the personal growth equation. Through their regular application, you ensure that your life’s voyage stays attuned to your ever-shifting dreams and the undulating terrain of your circumstances. Are you ready to commit to this unending cycle of growth and evolution, a journey marked not by static stages, but by fluid, progressive development? The philosophy of Ray Dalio, the visionary founder of Bridgewater Associates, offers a striking example of this dynamic. Dalio champions the concept of ‘radical truth and transparency’, fostering an environment that promotes continuous self-evaluation and adaptation amongst his team members. This unique business philosophy has not only been instrumental to Bridgewater’s remarkable success, but it has also been an integral part of Dalio’s own journey of personal and professional evolution.

4. Surmounting the Challenges: Transcending Resistance to Change

Change is a natural instigator of resistance, a reflexive reaction to the discomfort of the unknown. To adapt effectively, one must reframe one’s perspective to see change not as a daunting adversary but as an ally guiding us toward growth. Are you prepared to challenge your fears, break through your inertia, and clear the pathway for transformational shifts? The story of Reed Hastings, the co-founder of Netflix, serves as an illuminating example of this principle. When he decided to separate Netflix’s DVD rental and streaming services, he faced a whirlwind of criticism and an immediate loss of subscribers. This backlash was a product of his daring to upset the status quo. However, Hastings remained steadfast in the face of resistance, and his decision proved to be a visionary move that ultimately catapulted Netflix into its current position as a global leader in the streaming industry. This narrative underscores the importance of seeing beyond immediate obstacles and embracing change as a catalyst for long-term success.

Conclusion: Sail Your Destiny — Reflection and Adaptation as Your Trusty Compass and Rudder

Navigating your journey towards your aspirations is akin to sailing a ship on the high seas, where reflection and adaptation serve as your compass and rudder. Are you ready to embark on your voyage? Will you delve into self-reflection as Oprah Winfrey does, dare to adapt like Arianna Huffington, commit to continuous evolution like Ray Dalio, and brave change as Reed Hastings did? As the ancient Chinese proverb says, “The wise adapt themselves to circumstances, as water molds itself to the pitcher.” So, take control of your ship, set your gaze on the horizon, adjust your sails as the winds of life shift, and chart your unique course to success. Your ocean of opportunities awaits!
Sierpinski, Triangle, Similarity, Data Science.

The Sierpinski Triangle is a beautiful and intricate fractal pattern that has captured the imagination of mathematicians and artists alike. The triangle is named after the Polish mathematician Wacław Sierpiński, who discovered it in 1916 while investigating sets that are not measurable in the usual sense. The Sierpinski Triangle is a self-repeating pattern that is generated by removing smaller and smaller triangles from a larger triangle. In this article, we will explore the properties and applications of the Sierpinski Triangle, as well as its historical significance. This story was written with the assistance of an AI writing program.

The Sierpinski Triangle is constructed by starting with a single equilateral triangle. The triangle is then divided into four smaller triangles by connecting the midpoints of its three sides. The middle triangle is then removed, leaving three smaller triangles. This process is repeated for each of the remaining triangles, ad infinitum. The result is a fractal pattern that is self-similar, meaning that it looks the same at any scale.
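To make the construction concrete, here is a minimal Python sketch of the procedure described above (my own illustration, not code from the article): it recursively keeps the three corner triangles and drops the middle one, then draws the result with matplotlib. The recursion depth and the starting coordinates are arbitrary choices.

# Recursive midpoint-removal construction of the Sierpinski Triangle.
import matplotlib.pyplot as plt

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def sierpinski(a, b, c, depth, triangles):
    """Collect the triangles kept after `depth` rounds of subdivision."""
    if depth == 0:
        triangles.append((a, b, c))
        return
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    sierpinski(a, ab, ca, depth - 1, triangles)   # corner triangle at a
    sierpinski(ab, b, bc, depth - 1, triangles)   # corner triangle at b
    sierpinski(ca, bc, c, depth - 1, triangles)   # corner triangle at c
    # the middle triangle (ab, bc, ca) is removed

triangles = []
sierpinski((0, 0), (1, 0), (0.5, 3 ** 0.5 / 2), depth=6, triangles=triangles)
for tri in triangles:
    plt.fill(*zip(*tri), color="black")
plt.gca().set_aspect("equal")
plt.axis("off")
plt.show()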
One of the most striking features of the Sierpinski Triangle is its dimensionality. Unlike familiar geometric shapes, which have whole-number dimensions (a line has one dimension, a square has two dimensions, and a cube has three dimensions), the Sierpinski Triangle has a fractional dimension. It is more than a one-dimensional curve, yet it does not fill the plane the way a two-dimensional region does; in fact, each subdivision removes a quarter of the remaining area, so the limiting figure has zero area. The fractal dimension of the Sierpinski Triangle is approximately 1.585, which is greater than one but less than two.
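That 1.585 is the similarity dimension: each subdivision step produces 3 copies of the figure, each scaled down by a factor of 2, so the dimension d satisfies 3 = 2^d, i.e. d = log 3 / log 2. A one-line check (my own addition, not from the article):

# Similarity dimension of the Sierpinski Triangle: 3 copies at half scale.
import math
d = math.log(3) / math.log(2)
print(round(d, 3))   # 1.585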
The Sierpinski Triangle has many interesting properties that make it a fascinating object of study. One of these properties is its self-similarity, which means that it looks the same at any scale. This property has many applications in fields such as computer graphics, where fractal patterns can be used to generate realistic-looking textures and landscapes. Another interesting property of the Sierpinski Triangle is its fractal dimensionality, which makes it useful in the study of chaos theory and complex systems. Fractals like the Sierpinski Triangle are often used to model complex phenomena such as the growth of plants, the spread of diseases, and the behavior of financial markets. If you are interested in the prediction of financial markets, check out some of my previous Medium articles on stock price prediction: a review for beginners, methods worth using, and features that are handy when predicting stock prices with machine learning.

The Sierpinski Triangle also has historical significance, as it was one of the first fractals to be discovered and studied. Wacław Sierpiński was a prominent mathematician of the early 20th century who made significant contributions to number theory, set theory, and topology. His work on the Sierpinski Triangle helped to pave the way for the study of fractal geometry, which has become an important area of mathematics and science in the years since.

One interesting application of the Sierpinski Triangle is in the field of cryptography. Cryptography is the study of encoding and decoding information to keep it secure from unauthorized access. One way to do this is through the use of fractal patterns such as the Sierpinski Triangle. By encrypting information using a pattern like the Sierpinski Triangle, it becomes much more difficult for an attacker to decipher the information, since the pattern is self-similar and has a complex structure. The Sierpinski Triangle has also been used in art and design, where its intricate and beautiful patterns have inspired artists and designers to create works that incorporate fractal geometry. One often-cited example is the architecture of the Sagrada Familia in Barcelona, whose columns have been likened to fractal forms such as the Sierpinski Triangle. The Sierpinski Triangle has also been used in music, where its fractal properties have been applied to generate complex and interesting musical compositions.

Despite its simple construction, the Sierpinski Triangle has a rich and complex structure that has captured the imagination of mathematicians and artists alike. Its self-similarity, fractal dimensionality, and applications in fields such as cryptography, computer graphics, and music have made it a subject of intense study and admiration.

One area where the Sierpinski Triangle has been applied in recent years is in the study of network topology. Network topology is the study of how computers and other devices are connected to one another in a network. The Sierpinski Triangle has been used to model the structure of complex networks, such as the Internet or social networks. By analyzing the properties of the Sierpinski Triangle, researchers can gain insights into the behavior of these networks and develop strategies for improving their performance or security. Another area where the Sierpinski Triangle has been applied is in the study of quantum mechanics. Quantum mechanics is the branch of physics that deals with the behavior of matter and energy at the smallest scales. The Sierpinski Triangle has been used to model the fractal nature of quantum mechanical systems, which can have complex and non-intuitive properties. By using the Sierpinski Triangle to model these systems, researchers can gain insights into their behavior and develop new methods for controlling or manipulating them.

In conclusion, the Sierpinski Triangle is a beautiful and fascinating object that has captured the imagination of mathematicians, scientists, and artists for over a century. Its self-similarity, fractal dimensionality, and applications in fields such as cryptography, computer graphics, network topology, and quantum mechanics have made it a subject of intense study and admiration. The Sierpinski Triangle reminds us of the beauty and complexity of the natural world, and the endless possibilities that exist for exploration and discovery in
Coding, Learning To Code, Hacking.

Fill up the slots in your utility belt

“How do you learn all of these frameworks and languages?”, asked a friend during a hackathon recently. “So basically, you just find a project worth doing that fits what you’re trying to learn and go for it”, I responded. “Really? Just start building anything?”, he asked curiously. “Yup, and ask the right people for advice. Smart people are usually very cool about teaching and helping others,” I told my friend, while thinking: I really should write a blog post about this. The single most important thing someone who wants to work in tech should learn is how to learn. I’ve been trying to figure this out for a long time. I’ve talked to a bunch of very smart people about learning and acquiring skills, and I spend a decent chunk of time mentoring and advising my friends and colleagues on how to learn. I plan to break down some tips in this blog post. Let’s assume you want to learn this language/framework/API/tool/thing, you just don’t really know where to start. Here’s how I would go about it; maybe it will also help you.

1. RTFD — (Read The F***ing Documentation)

The most important part of using any language or framework is being familiar with its docs. Go to the homepage of the thing you want to learn and go through their getting started / quickstart tutorial. If they don’t have one (which is rare), or their docs suck (which is not as rare), try to find what experts suggest. Usually there’s a third-party tutorial that will show you what you need to know. Don’t be afraid to dive head-on into something new. You’d be surprised how easy it is to pick it up.

2. Find a Project Worth Building

You need to build something that you’re passionate about to be really motivated to learn. If you already have something in mind then go for it! If not, then here are a couple of tips on finding a project. I’ll be honest, I’m not very good at finding something to build when I want to learn new things. To compensate for that I ask other people for ideas until I come across one that excites me. My friend Yamil Asusta always has interesting ideas for me to tinker with, and recently he’s given some talks on this topic. I want to share two of his thoughts here: If you’re trying to learn a programming language, learn how HTTP requests work in that language (see the short sketch below). The way you interact with other tools is mostly through HTTP, and this will give you a grasp on how the language itself works. Have a pet project that you can build using different tools: a pet web app of moderate complexity that you can build with other frameworks to learn how they work. Building the same thing using different tools will allow you to see how the tools are different from each other.
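For instance, in Python the whole "learn HTTP first" tip boils down to a few lines with the requests library (my own sketch, not from the original post; the URL is just a public echo service):

# Make a GET request, check for errors, and inspect the JSON response.
import requests

response = requests.get("https://httpbin.org/get", params={"q": "hello"})
response.raise_for_status()          # fail loudly on HTTP errors
print(response.status_code)          # 200
print(response.json()["args"])       # {'q': 'hello'}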
Finally, if you can’t find a project, try to think of something that can only be built with the framework or tool that you are trying to learn, and start building it.

3. Start Building Your Project

The hardest part is actually getting started. Don’t put it off until later. Start with as much time as you can dedicate. Start from wherever the tutorial left you and build your own thing. Fleshing out your project will require searching Stack Overflow, asking friends, and more RTFD. Don’t just write random code thinking “that’s how the framework is supposed to work” based on past experience. Read the docs, and invest time into figuring out common design and architecture patterns.

4. Ask Questions to People Who Know

Don’t be afraid to ask. The worst that can happen is that you’ll have to wait longer to get an answer. Even the best experts were beginners at some point, and most of them feel the need to give back to the community by mentoring and helping out. http://xkcd.com/293/ It is imperative, though, that you don’t waste anyone’s time. Invest in asking the right questions. Nobody wants to be asked something that’s just a Google search away. I was recently talking with Hector Ramos, who spends a lot of time helping people with Parse, and he mentioned the two types of questions he finds easiest to answer: “I’m trying to implement this with Parse, using this part of the documentation, and it’s not working (I’m getting this error). Can you help me out? Here’s a code snippet of what I’m trying to do.” and “Can I do x with Parse?” Asking questions is about making it easier for the other person to give you an answer. Learn to identify the right people to ask your questions. Usually this happens by learning what type of problem you’re having, and more often than not the people you can ask fall into one of two categories: those who know A LOT about one specific thing, and those who know a decent amount about a lot of things. Learn to identify these people within, or outside, your network. Don’t hesitate to send them a tweet or a message.

5. Deploy Your Project

Get your project to a state that you consider “done”. It doesn’t have to be perfect, 100% finished, or pretty. Your project should be at a point where you feel proud of what you made and are willing to show other people how awesome you are (if you’re reading this it means you’re awesome). After you finish your project, be sure to put it up on GitHub and deploy it somewhere for people to see. Having it up somewhere will allow others to critique your work, and will make you accountable for making good stuff. It’s also a go-to place when you want to show future employers your past experience. Even if you think your code sucks, the fact that you’re putting it out there says a lot about you as a developer. And you’re one step ahead of others who have no code to show. If you asked someone questions, let them know how your learning process is going and show them your work so they know the time they invested in you was not in vain. Finally, after finishing your first project, move on to the next one. The only way to learn how to code is by writing code. The only way to learn a new framework is to write code in that framework. So keep working and keep hustling.

These are the steps I follow when I want to learn new tools. I leverage my time as a student and build my technical portfolio by following these steps. If you’re a CS student looking to learn new things, like me, I’m confident following these steps will give you a solid start. I’m a
NLP, Interpretability, Machine Learning, Pytorch, Captum.

A step-by-step tutorial to better understand what your text classification model focuses on when making a prediction

Introduction:

In this project, we will be exploring the use of a PyTorch library called Captum to build an interpretability tool applied to a sentiment analysis classification model. This will allow us to visualize which parts of the text most influenced the output. In the first part of this post, we will talk about the text classification model and the data that was used to build it. In the second part, we will look more into the interpretability side using Captum. The code to try this out on your own, in a Colab notebook or on GitHub, is available at the end of the post. First, we will build a classic sentiment analysis model using product reviews.

The data:

We will base our approach on a dataset of reviews, written in English for the most part. Available here: Data Link and Data License. The data has around six million reviews, where each review is associated with a text and a rating between 1 and 5 stars. Here is an example of a review:

Rating: ⭐️⭐️⭐️⭐️⭐️
Text: That was a great book !!!

We will split the reviews into three groups:

Positive: 4 or 5 ⭐️
Neutral: 3 ⭐️
Negative: 1 or 2 ⭐️

The sentiment prediction pipeline:

[Figure: the sentiment prediction pipeline, image by author]

We first split the text into tokens and then into token indexes. For example, "I loved that movie !!" gets transformed to:

['[CLS]', 'i', 'loved', 'that', 'movie', '!', '!', '[SEP]']

and then to:

[1, 76, 8459, 7923, 8332, 30, 30, 2]
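The post does not say which tokenizer produces this output; the [CLS]/[SEP] tokens and the .ids attribute used later suggest a BERT-style WordPiece tokenizer such as the one in the Hugging Face tokenizers library. A hedged sketch of that step (my assumption, not the article's code; "vocab.txt" is a placeholder vocabulary file, and the resulting ids will differ from the article's):

# Tokenize a sentence into WordPiece tokens and integer ids.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer("vocab.txt", lowercase=True)
encoding = tokenizer.encode("I loved that movie !!")
print(encoding.tokens)  # e.g. ['[CLS]', 'i', 'loved', 'that', 'movie', '!', '!', '[SEP]']
print(encoding.ids)     # the integer indexes fed to the model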
Then, those indexes are mapped to vectors in the embedding layer, and the positional embeddings are added on top of that. This sequence of vectors is then fed to the transformer encoder. There is a really good tutorial about the transformer part of the sentiment classification pipeline here; I re-used some of that code as-is for the word embedding and positional embedding. The pseudo-code of the model looks like this:

class Transformer(pl.LightningModule):
    def __init__(self, ...):
        super().__init__()
        ...
        self.embeddings = TokenEmbedding(...)
        self.pos_encoder = PositionalEncoding(...)
        encoder_layer = torch.nn.TransformerEncoderLayer(...)
        self.encoder = torch.nn.TransformerEncoder(...)
        self.linear = torch.nn.Linear(channels, self.n_outputs)
        self.do = nn.Dropout(p=self.dropout)

    def encode(self, x):
        x = self.embeddings(x)      # token embeddings
        x = self.pos_encoder(x)     # add positional embeddings
        x = self.encoder(x)         # transformer encoder
        x = x[:, 0, :]              # keep the [CLS] position
        return x

    def forward(self, x):
        x = self.do(self.encode(x))
        x = self.linear(x)
        return x

That is basically it for the model part, pretty classic for a text classification pipeline.

Interpretability:

We will be using a library called Captum, which is an interpretability library for PyTorch. We will use the integrated gradients approach, which I already implemented from scratch in one of my previous projects for computer vision: “How Much of Your Neural Network’s Prediction Can Be Attributed to Each Input Feature? Peeking Inside Deep Neural Networks with Integrated Gradients, Implemented in PyTorch” (towardsdatascience.com). The idea behind this approach is that we will assign an “attribution” to each of the input features. The trick in the NLP case is that, instead of using the tokens themselves, we will use their word vectors. We can only do this operation for one class at a time. Here, we will pick the positive class, so that we can see which parts of the sentence increase or decrease the “positivity” score. Here is how to do it using Captum. First, you need to get the embedding layer of your model, model.embeddings.embedding in this case. Second, we need to define a baseline. Here, it will be an all-padding-tokens sentence: [CLS], [PAD], [PAD], …, [SEP]. Then we can apply the integrated gradients approach:

tokenized = tokenize(text)
tokens_idx = tokenized.ids[:MAX_LEN]
x = torch.tensor([tokens_idx], dtype=torch.long)
ref = torch.tensor(...)
x = x.to(device)
ref = ref.to(device)
base_class = 2
lig = LayerIntegratedGradients(
    model,
    model.embeddings.embedding,
)
attributions_ig, delta = lig.attribute(
    x,
    ref,
    n_steps=500,
    return_convergence_delta=True,
    target=base_class,
)
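Once attributions_ig is computed, a common way to turn it into one score per token is to sum over the embedding dimension and normalize. This summarization step is my addition, following the usual Captum recipe rather than code shown in the post, and it assumes the tokenized object exposes a .tokens list alongside .ids:

# Collapse (1, seq_len, embedding_dim) attributions into per-token scores.
scores = attributions_ig.sum(dim=-1).squeeze(0)
scores = scores / torch.norm(scores)
for token, score in zip(tokenized.tokens[:MAX_LEN], scores.tolist()):
    print(f"{token:>12s}  {score:+.3f}")   # larger scores push toward the attributed (positive) class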
Let's see some examples of how that looks:

[Figure: example sentences with per-word attributions highlighted, images by author]

We can see which words or groups of words increased the “positivity” score in green and the ones that decreased it in red.

Conclusion:

In this post, we saw how we can add some interpretability to our text classification pipeline. This can really be useful to better understand how our model makes predictions, either to help improve the approach or to add transparency to a black-box NLP model. You can try it out for yourself using the Colab notebook or on GitHub, where you will find the trained model and
AI, Android, Vertex AI, Image Classification, Google.

I’ve always had a soft spot for PEZ dispensers and have been collecting them for over 30 years. These quirky little collectibles come in an incredible variety of shapes and characters. But there’s more to PEZ than just fun; some dispensers can be quite valuable depending on their age and variation. To address the challenge of identification, I harnessed the power of AI to create an image classification model that could help identify these subtle differences.

Use Case: Identifying PEZ Dispensers

Identifying the precise type of PEZ dispenser isn’t always easy. Take Mickey Mouse dispensers, for example. While even a novice collector can broadly identify one as Mickey, subtle differences in the face or eyes can separate a common version from a prized rarity. A $1 Mickey and a $150 Mickey can look awfully similar. To illustrate the problem, these Mickey Mouse dispensers look similar, but are worth drastically different amounts (from left to right). It is fairly easy for even an untrained human to tell the difference between these dispensers; there are obvious differences between the shape of the face and the eyes.

Mickey Die-Cut (1961) — $125
Mickey B (1989) — $15
Mickey C (1997) — $1

Could a Computer Tell the Difference?

I decided to see if I could train a custom image classification model to distinguish between PEZ dispensers. That model, in theory, could be embedded into a mobile app or a website, giving PEZ enthusiasts a tool to aid in identification. For this project, I embraced technologies from Google: Vertex AI to manage my image dataset and train the model, and MediaPipe for easy integration onto edge devices like smartphones.

Step 1: Picture This — Building the Dataset

Like any good AI project, I needed data. My original plan was to scrape images from the web, but there simply weren’t enough consistent, high-quality pictures for the level of detail I wanted. The best solution? Become a PEZ photographer! I gathered a dozen dispensers from my collection and meticulously photographed each dispenser from multiple angles, against different backgrounds, and in varied lighting. This variability helps the AI model generalize better and makes it more robust in real-world use. I took at least 20 different images of each item (the data) and separated them into folders with accurate names (the labels).

[Figure: example MickeyB training images]

Step 2: Teaching the Machine — Training with Vertex AI

With my photo dataset ready, I turned to Vertex AI. The AutoML Vision API made it surprisingly simple. I uploaded my images (carefully organized into folders by dispenser name), selected the option for “edge” deployment (since my goal was a mobile app), and let it do its thing. Note: Be aware that Vertex AI can be costly, especially as your dataset size grows or you retrain frequently. I used the AutoML Vision API to inspect these images and automatically extract meaningful features from them to build a high-performing image classification model. To start using this feature, I opened the Google Cloud console, then from the main menu selected: Vertex AI -> Training (under Model Development) -> Train New Model, and I was presented with the “Train new model” dialog. I pointed this tool to my Cloud bucket, and I made sure to select the “edge” option, which is necessary since this model will be deployed to an Android device. I didn’t modify any of the existing defaults, which are used to enhance the accuracy of the model by using AutoML Vision’s hyperparameter tuning capabilities; I was already happy with the performance of the results I saw. It was fairly expensive to generate the model. I had a very small dataset, with the minimum number of images in each folder, and it cost over $20 to generate the model each time. This might seem like a minimal amount, but I think this could get very expensive if you had a larger dataset, or if you needed to frequently retrain a model.

Step 3: From Cloud to Phone — Deployment and Integration

Once the model was trained, Vertex AI provided a deployment interface. This allowed me to test the model on the fly before moving it towards my Android app. I used the Google Cloud console and selected: Vertex AI -> Model Registry (under Deploy and Use), which showed me a list of all the models I had available to use. Selecting one of the models leads to another screen with options to evaluate your model, deploy and download it, do batch predictions, or see other details. Select the second tab, “Deploy & Test”, and then select “Deploy to Endpoint” to test your model. Once this is complete, you will have an endpoint you can send images to as a smoke test for your model (via cURL or any other command you choose). This was a great way to test my model quickly and see if the image classification worked as expected. Note: Watch out for unexpected recurring deployment costs in Vertex AI. I learned this the hard way! Check the pricing before doing anything.
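As an illustration of that smoke test, here is a hedged sketch using the google-cloud-aiplatform Python client rather than cURL (my addition, not from the article; the project, region, endpoint ID, and image file name are placeholders, and the request shape follows the documented AutoML image-classification prediction format):

# Send one base64-encoded image to the deployed endpoint and print the result.
import base64
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders
endpoint = aiplatform.Endpoint("1234567890")  # endpoint ID from the console

with open("mickey_test.jpg", "rb") as f:      # placeholder test photo
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = endpoint.predict(
    instances=[{"content": image_b64}],
    parameters={"confidenceThreshold": 0.5, "maxPredictions": 3},
)
print(response.predictions)  # predicted display names and confidence scores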
Step 4: Integrating the Model into an Android Application

Downloading the model for Android required a slightly cryptic export process, but it ultimately landed me with a .tflite file – the format my app needed. On the same screen used to deploy the model to an endpoint, click the “EXPORT” button for “Edge TPU TF Lite”. This will export the file to a cloud storage location, where you can then download the file directly using the gsutil command they provide. It will look similar to this: model-3158426315623759872_tflite_2023-06-09T20_50_40.968622Z_model.tflite. Feel free to rename that file to something easier to use, but make sure to keep the .tflite file extension (I changed it to pezimages2.tflite).

The MediaPipe Library

Next stop was MediaPipe, Google’s framework for cross-platform ML applications. Their examples, particularly one focused on image classification, became a lifesaver and were all I needed to test my model (MediaPipe Examples on GitHub). MediaPipe examples FTW! MediaPipe has a great GitHub repo with a ton of great examples for running AI use cases on a variety of different platforms. They include examples that can be run on Android, iOS, JavaScript, Python, and even Raspberry Pi. I was able to find an example for “Image Classification”, which I downloaded and used as the base project for my application. There are a variety of AI use cases addressed in these examples, and you can run many different operations, including face and hand detection, image classification/segmentation, audio classification/detection, and even LLM inference. These examples are a great way to get started learning about AI operations on the edge, and to get hands-on experience with many different use cases on a variety of different platforms.
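Before wiring the model into the Android example, the exported .tflite can also be exercised with MediaPipe's Python Tasks API. This is my own hedged sketch, not from the article; the photo path is a placeholder, and it assumes the exported model carries the label metadata the Tasks API expects:

# Classify one photo with the exported model using MediaPipe Tasks (Python).
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

base_options = python.BaseOptions(model_asset_path="pezimages2.tflite")
options = vision.ImageClassifierOptions(base_options=base_options, max_results=3)

with vision.ImageClassifier.create_from_options(options) as classifier:
    image = mp.Image.create_from_file("mickey_test.jpg")   # placeholder photo
    result = classifier.classify(image)
    for category in result.classifications[0].categories:
        print(category.category_name, round(category.score, 3))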
To run this example locally, I opened Android Studio (I was using the latest stable version — Iguana), selected “File… Open”, and then selected the “Android” folder.

[Figures: the MediaPipe example without any changes; results when using the default “efficientnet-lite0.tflite” model]

After cloning the example repo, I could run it without any changes to see how accurately it could classify dispensers. Using the default model, the classification was really bad. It had a hard time identifying the dispenser at all, and when it did work, the results were wrong, including: “lighter” (actually not bad), “punching bag”, and “parking meter”.