Record your happy, sad and motivational moments with our web app. We hope to help you with your mental health during this time. When things get hard, remind yourself of a happy moment that has happened in your life, and motivate yourself with the goals you have set for yourself! We plan to further develop this site and increase customizability in the future.
***
## 💡 Inspiration
We got our inspiration from our back-end developer Minh. He mentioned that he was interested in the idea of an app that helped people record their positive progress and showcase their accomplishments. This led our product/UX designer Jenny to think about what problem this app would target and what kind of solution it would offer. From our research, we came to the conclusion that quantity-over-quality social media use resulted in people feeling less accomplished and more anxious. As a solution, we wanted to focus on an app that helps people stay focused on their own goals and accomplishments.
## ⚙ What it does
Our app is a journalling app that has the user enter two journal entries a day: one in the morning and one in the evening. During the morning entry, it asks the user about their mood at the moment, generates an appropriate response based on that mood, and then asks questions that get the user thinking about topics such as gratitude, their plans for the day, and what advice they would give themselves. Our questions follow many common journalling practices. The second journal entry follows a similar format of mood and questions, with a different set of questions to finish off the user's day. These help them reflect and look forward to the upcoming future. Our most powerful feature is the AI that takes data such as emotions and keywords from answers and helps users generate journal summaries across weeks, months, and years. These summaries then provide actionable steps the user could take to make self-improvements.
## 🔧 How we built it
### Product & UX
* Online research, user interviews, stakeholder analysis, competitor analysis, affinity mapping, and user flows.
* Doing the research allowed our group to have a unified understanding of the app.
### 👩💻 Frontend
* Used React.js to build the website
* Used Figma for prototyping the website
### 🔚 Backend
* Flask, CockroachDB, and Cohere for the chat AI feature.
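As a sketch of how these backend pieces might fit together: the journal answers collected over a period could be folded into a single summarization prompt before being sent to Cohere. Everything below (the function name, the entry format) is our own illustration rather than the project's actual code; the Cohere call itself is only described in a comment.

```python
# Hypothetical helper: fold a batch of journal entries into one summarization
# prompt. In the real app a Flask route would send this prompt to Cohere's
# generation API; the entry format and function name are ours.

def build_summary_prompt(entries):
    """entries: list of dicts with 'mood' and 'answers' keys."""
    lines = [
        "Summarize the following journal entries and suggest "
        "one actionable self-improvement step:"
    ]
    for i, entry in enumerate(entries, start=1):
        answers = " ".join(entry["answers"])
        lines.append(f"Entry {i} (mood: {entry['mood']}): {answers}")
    return "\n".join(lines)
```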
## 💪 Challenges we ran into
The main challenge we ran into was the time limit. For this project, we invested most of our time in understanding the pain points of a very sensitive topic such as mental health and psychology. Because we truly wanted to identify and solve a meaningful challenge, we had to sacrifice some portions of the project, such as the front-end code implementation. Some team members were also working with developers for the first time, and it was a good learning experience for everyone to see how different roles come together and how we could improve for next time.
## 🙌 Accomplishments that we're proud of
Jenny, our team designer, did tons of research on the problem space, such as competitive analysis, research on similar products, and user interviews. We produced a high-fidelity prototype and were able to show the feasibility of the technology we built for this project. (Jenny: I am also very proud of everyone else, who had the patience to listen to my views as a designer and be open-minded about what a final solution may look like. I'm very proud that we were able to build a good team together, although the experience was relatively short over the weekend. I had personally never met the other two team members, and the way we were able to form a shared vision is something I think we should be proud of.)
## 📚 What we learned
We learned that preparing some plans ahead of time would make it easier for developers and designers to get started next time. However, the experience of starting from nothing and making a full project over two and a half days was great for learning. We learned a lot about how we think and approach work, not only as developers and designers, but as team members.
## 💭 What's next for budEjournal
Next, we would like to test out budEjournal on some real users and make adjustments based on our findings. We would also like to spend more time building out the front-end.
***
## Inspiration
Some things can only be understood through experience, and Virtual Reality is the perfect medium for providing new experiences. VR allows for complete control over vision, hearing, and perception in a virtual world, allowing our team to effectively alter the senses of immersed users. We wanted to manipulate vision and hearing in order to allow players to view life from the perspective of those with various disorders such as colorblindness, prosopagnosia, deafness, and other conditions that are difficult to accurately simulate in the real world. Our goal is to educate and expose users to the various names, effects, and natures of conditions that are difficult to fully comprehend without first-hand experience. Doing so can allow individuals to empathize with and learn from various different disorders.
## What it does
Sensory is an HTC Vive Virtual Reality experience that allows users to experiment with different disorders from Visual, Cognitive, or Auditory disorder categories. Upon selecting a specific impairment, the user is subjected to what someone with that disorder may experience, and can view more information on the disorder. Some examples include achromatopsia, a rare form of complete colorblindness, and prosopagnosia, the inability to recognize faces. Users can combine these effects, view their surroundings from new perspectives, and educate themselves on how various disorders work.
## How we built it
We built Sensory using the Unity Game Engine, the C# programming language, and the HTC Vive. We imported a few models from the Unity Asset Store (all free!).
## Challenges we ran into
We chose this project because we hadn't experimented much with visual and audio effects in Unity and in VR before. Our team has done tons of VR, but never really dealt with any camera effects or postprocessing. As a result, there are many paths we attempted that ultimately led to failure (and lots of wasted time). For example, we wanted to make it so that users could only hear out of one ear - but after enough searching, we discovered it's very difficult to do this in Unity, and would've been much easier in a custom engine. As a result, we explored many aspects of Unity we'd never previously encountered in an attempt to change lots of effects.
## What's next for Sensory
There are still many more disorders we want to implement, and many categories we could potentially add. We envision this becoming a central hub for users, doctors, professionals, or patients to experience different disorders. Right now, it's primarily a tool for experimentation, but in the future it could be used for empathy, awareness, education and health.
***
[project video demo](https://github.com/R1tzG/SignSensei/assets/86858242/40b4d428-f614-4800-8151-0d3d9c74f5af)
## Inspiration
In an increasingly interconnected world, one of the most important skills we can acquire is the ability to communicate effectively with people from diverse backgrounds and abilities. American Sign Language (ASL) is a language used by millions of deaf and hard-of-hearing individuals around the world. However, there are still significant barriers preventing many from learning and using ASL. Our project, SignSensei, aims to break down those barriers, making it easier and faster for anyone to learn ASL, as well as other sign languages. We hope to promote inclusivity through communication for all.
## What it does
SignSensei is a web application that gamifies the process of learning sign language. Using the webcam on your laptop (or front-facing camera on your phone), our app can detect the sign you are putting up with your hand, and tell you whether it is correct. You will be able to see yourself on the screen, as well as a lattice representation of your hand. This makes it easy to monitor your hands to make sure you are getting the signs right. The demo lesson (see video) teaches you the ASL alphabet.
## How we built it
Our sign language detection system is built in two parts. First we collect hand landmark coordinates using the Mediapipe machine learning library. We then pass the extracted coordinates through a custom fully connected neural network that we trained on a dataset of ASL signs. This approach allows us to detect signs from the webcam feed with high precision and accuracy (97% test accuracy on the custom model).
The sign detection system outlined above forms the backbone of our app. We also developed an interactive front-end with Streamlit, which serves lessons to users.
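A minimal sketch of the kind of landmark pre-processing such a pipeline relies on, assuming the 21 Mediapipe hand landmarks arrive as (x, y) pairs with the wrist first; the function name and details below are ours, not the project's:

```python
import math

def normalize_landmarks(landmarks):
    """landmarks: list of (x, y) tuples, landmark 0 being the wrist.

    Re-centre on the wrist and divide by the largest wrist distance so
    hand size and camera distance don't leak into the feature vector
    fed to the fully connected network.
    """
    wx, wy = landmarks[0]
    centred = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(math.hypot(x, y) for x, y in centred) or 1.0
    return [(x / scale, y / scale) for x, y in centred]
```

With this normalization, the same sign made closer to or farther from the webcam produces the same features.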
## Challenges we ran into
We were significantly challenged with developing an accurate detection model. Our first few attempts fell short in accuracy. We were eventually able to train a fast and accurate model for the task. Our final model is very simple but performant, made up primarily of Dense layers.
Another challenge we ran into was developing the user interface. At first, we looked at using React, but found it difficult to integrate Tensorflow and OpenCV seamlessly. We decided to switch gears and develop our front-end with Streamlit, leveraging the power of the Python programming language.
## Accomplishments that we're proud of
We are very proud of the powerful sign detection algorithm that we developed. Along with the use case that we found for ASL, the algorithm can easily be expanded to other sign languages, as well as applications in gesture recognition and VR gaming.
## What we learned
Through this project, we learned how to use Tensorflow to train machine learning models, as well as how they can be implemented in Javascript (even if this part didn't make it into the final application). We also learned about different ways to make a front-end, from vanilla JS and React to solutions such as Flask.
## What's next for SignSensei
We're not done yet! We plan to add more interactive lessons to the app as well as add support for more sign languages.
View our slideshow [here](https://www.canva.com/design/DAFuAQrskMQ/y0TeL7Q-odr6c6klXBmfXA/view?utm_content=DAFuAQrskMQ&utm_campaign=designshare&utm_medium=link&utm_source=publishsharelink)
***
## Inspiration
We wanted to promote an easy learning system to introduce verbal individuals to the basics of American Sign Language. Often people in the non-verbal community are restricted by the lack of understanding outside of the community. Our team wants to break down these barriers and create a fun, interactive, and visual environment for users. In addition, our team wanted to replicate a 3D model of how to position the hand as videos often do not convey sufficient information.
## What it does
**Step 1** Create a Machine Learning Model To Interpret the Hand Gestures
This step provides the foundation for the project. Using OpenCV, our team was able to create datasets for each of the ASL alphabet hand positions. A video datastream is then started and interpreted using the model trained with Tensorflow and Google Cloud Storage, and the letter is identified.
**Step 2** 3D Model of the Hand
The Arduino UNO drives a series of servo motors to actuate the 3D hand model. The user can input a desired letter, and the 3D-printed robotic hand then interprets it (using the model from step 1) to display the corresponding hand position. Data is transferred over the SPI bus, and the system is powered by a 9V battery for ease of transportation.
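The letter-to-pose lookup in step 2 can be sketched as a small table on the Python side, with the angles later shipped to the Arduino. The letters, angles, and helper below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical letter-to-servo lookup: four servos, each taking 0-180 degrees.
# In the real build these angles would be sent to the Arduino UNO, whose
# firmware drives the servos; the specific values here are made up.

LETTER_POSES = {
    # letter: (servo1, servo2, servo3, servo4) angles in degrees
    "A": (170, 170, 170, 10),   # fingers curled, thumb out
    "B": (10, 10, 10, 170),     # fingers straight, thumb folded
    "L": (10, 170, 170, 10),    # index and thumb extended
}

def pose_for_letter(letter):
    angles = LETTER_POSES.get(letter.upper())
    if angles is None:
        raise ValueError(f"no pose defined for {letter!r}")
    return angles
```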
## How I built it
Languages: Python, C++
Platforms: TensorFlow, Fusion 360, OpenCV, UiPath
Hardware: 4 servo motors, Arduino UNO
Parts: 3D-printed
## Challenges I ran into
1. The Raspberry Pi Camera would overheat and fail to connect, leading us to remove the Telus IoT connectivity from our final project
2. Incompatibilities between Mac, OpenCV, and UiPath
3. Issues with lighting and a lack of variety in training data, leading to less accurate results
## Accomplishments that I'm proud of
* Designed and integrated the hardware with software and applied it to a mechanical application
* Created data, then trained and deployed a working machine learning model
## What I learned
How to integrate simple low resource hardware systems with complex Machine Learning Algorithms.
## What's next for ASL Hand Bot
* expand beyond letters into words
* create a more dynamic user interface
* expand the dataset and models to incorporate more signs
***
### 🌟 Inspiration
We're inspired by the idea that emotions run deeper than a simple 'sad' or 'uplifting.' Our project was born from the realization that personalization is the key to managing emotional states effectively.
### 🤯🔍 What it does
Our solution is an innovative platform that harnesses the power of AI and emotion recognition to create personalized Spotify playlists. It begins by analyzing a user's emotions, both from facial expressions and text input, to understand their current state of mind. We then use this emotional data, along with the user's music preferences, to curate a Spotify playlist that's tailored to their unique emotional needs.
What sets our solution apart is its ability to go beyond simplistic mood categorizations like 'happy' or 'sad.' We understand that emotions are nuanced, and our deep-thought algorithms ensure that the playlist doesn't worsen the user's emotional state but, rather, optimizes it. This means the music is not just a random collection; it's a therapeutic selection that can help users manage their emotions more effectively.
It's music therapy reimagined for the digital age, offering a new and more profound dimension in emotional support.
### 💡🛠💎 How we built it
We crafted our project by combining advanced technologies and teamwork. We used Flask, Python, React, and TypeScript for the backend and frontend, alongside the Spotify and OpenAI APIs.
Our biggest challenge was integrating the Spotify API. When we faced issues with an existing wrapper, we created a custom solution to overcome the hurdle.
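For the curious, the core of a client-credentials handshake against the Spotify Accounts service looks roughly like this (stdlib only; our actual wrapper handles more, and the credential values below are placeholders):

```python
import base64
import json
import urllib.parse
import urllib.request

def basic_auth_header(client_id, client_secret):
    # Spotify's token endpoint expects "Basic base64(client_id:client_secret)".
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def fetch_app_token(client_id, client_secret):
    # Shown for completeness; performs a live network call.
    req = urllib.request.Request(
        "https://accounts.spotify.com/api/token",
        data=urllib.parse.urlencode({"grant_type": "client_credentials"}).encode(),
        headers=basic_auth_header(client_id, client_secret),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```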
Throughout the process, our close collaboration allowed us to seamlessly blend emotion recognition, music curation, and user-friendly design, resulting in a platform that enhances emotional well-being through personalized music.
### 🧩🤔💡 Challenges we ran into
🔌 API Integration Complexities: We grappled with integrating and harmonizing multiple APIs.
🎭 Emotion Recognition Precision: Achieving high accuracy in emotion recognition was demanding.
📚 Algorithm Development: Crafting deep-thought algorithms required continuous refinement.
🌐 Cross-Platform Compatibility: Ensuring seamless functionality across devices was a technical challenge.
🔑 Custom Authorization Wrapper: Building a custom solution for Spotify API's authorization proved to be a major hurdle.
### 🏆🥇🎉 Accomplishments that we're proud of
#### Competition Win: 🥇
Our victory validates the effectiveness of our innovative project.
#### Functional Success: ✔️
The platform works seamlessly, delivering on its promise.
#### Overcoming Challenges: 🚀
Resilience in tackling API complexities and refining algorithms.
#### Cross-Platform Success: 🌐
Ensured a consistent experience across diverse devices.
#### Innovative Solutions: 🚧
Developed custom solutions, showcasing adaptability.
#### Positive User Impact: 🌟
Affirmed our platform's genuine enhancement of emotional well-being.
### 🧐📈🔎 What we learned
🛠 Tech Skills: We deepened our technical proficiency.
🤝 Teamwork: Collaboration and communication were key.
🚧 Problem Solving: Challenges pushed us to find innovative solutions.
🌟 User Focus: User feedback guided our development.
🚀 Innovation: We embraced creative thinking.
🌐 Global Impact: Technology can positively impact lives worldwide.
### 🌟👥🚀 What's next for Look 'n Listen
🚀 Scaling Up: Making our platform accessible to more users.
🔄 User Feedback: Continuous improvement based on user input.
🧠 Advanced AI: Integrating more advanced AI for better emotion understanding.
🎵 Enhanced Personalization: Tailoring the music therapy experience even more.
🤝 Partnerships: Collaborating with mental health professionals.
💻 Accessibility: Extending our platform to various devices and platforms.
***
## Inspiration
At this stage of our lives, a lot of students haven't yet developed the financial discipline to save money and tend to be wasteful with their spending. With this app, we hope to design an interface that focuses on minimalism. The app is easy to use and provides users with a visual breakdown of where their money is coming from and going to. This gives users a better idea of what their day-to-day spending habits look like and helps them develop the necessary money-saving skills that will be beneficial in the future.
## What it does
BreadBook enables users to input their expenses and income and categorize them chronologically from daily, weekly, and monthly to yearly perspectives. BreadBook also helps you visualize these finances across different time periods and assists you in budgeting properly throughout them.
## How we built it
This project was built using a simple web stack of Angular, Node.js, and various Node libraries and packages. The back-end is a simple REST API running on a Node.js Express server that handles requests and transmits data to the front-end. Our front-end was built using Angular and a few visualization packages such as Chart.js.
## Accomplishments that we're proud of
Being able to implement various libraries of Angular and Node greatly helped us better understand our weaknesses and strengths as team members, and expanded our knowledge greatly regarding these technologies. Implementing chart.js to graphically show our data was a huge achievement given our limited experience with Angular modules.
## What we learned
Throughout the two-day development process of our application, we all gained experience in using Angular and what it allowed us to do in the creation of our web application. As a result, we all definitely became more comfortable with this framework, along with web development overall.
Our team decided to focus on the app functionalities right off the bat, as we all saw the potential and usefulness in our project idea and believed it should be our primary focus in the app’s development. As things progressed, we began to implement a cleaner UI and presentation aspect of the app as well, which was an entirely different realm of development. As a result, we all developed a better understanding of what to prioritize in the process of development as time is limited, as well as the importance in deciding whether or not to implement certain ideas based on their effort, required work and value to the project.
Finally one of the greatest parts about our participation in this event and being part of this project is the collaboration aspect. We can definitely all say we had an amazing experience from simply getting together, being creative and working in a group. This is especially different to us, as during this event, we created this project not as a school requirement, but through our own interests. It is when we work on projects like this that we are reminded of why we enjoy programming and the process of developing our ideas into something we can all use.
## What's next for BreadBook
The current state of BreadBook tracks all the day-to-day and recurring purchases that the user has made throughout daily, monthly or annual time periods. In the future, we would like to implement ways to identify or cut out unneeded spending. We would give estimates on how much money could be saved daily/monthly/annually if this spending was reduced. We would also like to add a monthly spending plan that would allow you to allocate different amounts of money for different spending categories. When the spending limit of one or more of these categories is approached, a warning would be given to the user to ensure they realize they are near their limit.
***
Welcome to our demo video for our hack, "Retro Readers". This is a game created by our two-man team: myself, Shakir Alam, and my friend Jacob Cardoso. We are both heading into our senior year at Dr. Frank J. Hayden Secondary School and tremendously enjoyed participating in our first hackathon ever, Hack The 6ix.
We spent over a week brainstorming ideas for our first hackathon project and because we are both very comfortable with the idea of making, programming and designing with pygame, we decided to take it to the next level using modules that work with APIs and complex arrays.
Retro Readers was inspired by a social media post pertaining to another text font that was proven to help mitigate reading errors made by dyslexic readers. Jacob found OpenDyslexic which is an open-source text font that does exactly that.
The game consists of two overall gamemodes. These gamemodes are aimed at an age group of mainly children, including young children with dyslexia, who are working to become better readers. We know that reading books is becoming less popular among the younger generation, so we decided to incentivize readers by providing them with a satisfying retro-style arcade reading game.
The first gamemode is a read-and-research style gamemode where the reader or player can press a key on their keyboard, which leads to a Python module calling a database of semi-sorted words from the Wordnik API. The game then displays the word back to the reader and reads it aloud using a TTS module.
As for the second gamemode, we decided to incorporate a point system. Using the points the players can purchase unique customizables and visual modifications such as characters and backgrounds. This provides a little dopamine rush for the players for participating in a tougher gamemode.
The gamemode itself is a spelling type game where a random word is selected using the same python modules and API. Then a TTS module reads the selected word out loud for readers. The reader then must correctly spell the word to attain 5 points without seeing the word.
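The scoring rule for this second gamemode is simple enough to sketch directly. In the real game the secret word comes from the Wordnik API and is spoken by a TTS module; here it is passed in directly so the logic stands alone, and the names are ours:

```python
POINTS_PER_WORD = 5  # matches the 5 points per word described above

def score_attempt(secret_word, attempt, points):
    """Return (correct, new_points) for a single spelling attempt.

    The player never sees the word, so comparison ignores case and
    surrounding whitespace in what they typed.
    """
    correct = attempt.strip().lower() == secret_word.lower()
    if correct:
        points += POINTS_PER_WORD
    return correct, points
```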
The task we found the most challenging was working with APIs, as a lot of them were not fit for our game. We had to scratch a few APIs off the list for incompatibility reasons; these included the Oxford Dictionary API, WordsAPI, and more.
Overall, we found the project to be challenging in all the right places, and we are highly satisfied with our final product. For the future, we'd like to implement more reliable APIs, and for future hackathons (this being our first) we'd like to spend more time researching viable APIs for our project. As far as business practicality goes, we see it as feasible to sell our game at a low price, including ads and/or paid cosmetics. We'd like to give a special shoutout to our friend Simon Orr for allowing us to use two original music pieces for our game. Thank you for your time and thank you for this amazing opportunity.
***
# BananaExpress
A self-writing journal of your life, with superpowers!
We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling!
Features:
* User photo --> unique question about that photo based on 3 creative techniques
  + Real-time question generation based on (real-time) user journaling (and the rest of their writing)!
  + Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question!
  + Question-corpus matching - we search for good questions about the user's current topics
* NLP on previous journal entries for sentiment analysis
I love our front end - we've re-imagined how easy and futuristic journaling can be :)
And, honestly, SO much more! Please come see!
♥️ from the Lotus team,
Theint, Henry, Jason, Kastan
***
## Inspiration
We found that the smart doors currently on the market are incredibly expensive. We wanted to improve on the current technology of smart doors at a fraction of the price. In addition, smart locks are not usually hands-free, requiring either the press of a button or the use of the user's phone. We wanted to make it as easy and fast as possible for users to securely unlock their door while keeping intruders out.
## What it does
Our product acts as a smart door with two-factor authentication to allow entry. A camera cross-matches your face with an internal database, and voice recognition confirms your identity. Furthermore, the smart door provides useful information for your departure, such as the weather and temperature, and even control of the lights in your home. This way, you can decide how much to put on at the door even if you forgot to check the forecast, and you won't forget to turn off the lights when you leave the house.
## How we built it
For the facial recognition portion, we used a Python script & OpenCV through the Qualcomm Dragonboard 410c, where we trained the algorithm to recognize correct and wrong individuals. For the user interaction, we used the Google Home to talk to the User and allow for the vocal confirmation as well as control over all other actions. We then used an Arduino to control a motor that would open and close the door.
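The final unlock decision reduces to a small two-factor gate, which can be sketched as follows; the threshold value and function name are assumptions for illustration, not taken from our code:

```python
FACE_CONFIDENCE_THRESHOLD = 0.8  # assumed value, for illustration only

def should_unlock(face_confidence, voice_confirmed):
    """Both factors must pass: a confident camera match AND a vocal
    confirmation via the Google Home. False keeps the door motor closed."""
    return face_confidence >= FACE_CONFIDENCE_THRESHOLD and voice_confirmed
```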
## Challenges we ran into
OpenCV was incredibly difficult to work with. We found that the setup on the Qualcomm board was not well documented and we ran into several errors.
## Accomplishments that we're proud of
We are proud of getting OpenCV to work flawlessly and providing a seamless integration between the Google Home, the Qualcomm board and the Arduino. Each part was well designed to work on its own, and allowed for relatively easy integration together.
## What we learned
We learned a lot about working with the Google Home and the Qualcomm board. More specifically, we learned about all the steps required to set up a Google Home, the processes needed to communicate with hardware, and many challenges when developing computer vision algorithms.
## What's next for Eye Lock
We plan to market this product extensively and see it in stores in the future!
***
## Inspiration
We wanted to learn about machine learning. There are thousands of sliding doors made by Black & Decker and they're all capable of sending data about the door. With this much data, the natural thing to consider is a machine learning algorithm that can figure out ahead of time when a door is broken, and how it can be fixed. This way, we can use an app to send a technician a notification when a door is predicted to be broken. Since technicians are very expensive for large corporations, something like this can save a lot of time, and money that would otherwise be spent with the technician figuring out if a door is broken, and what's wrong with it.
## What it does
DoorHero takes attributes (eg. motor speed) from sliding doors and determines if there is a problem with the door. If it detects a problem, DoorHero will suggest a fix for the problem.
## How we built it
DoorHero uses a Tensorflow Classification Neural Network to determine fixes for doors. Since we didn't have actual sliding doors at the hackathon, we simulated data and fixes. For example, we'd assign high motor speed to one row of data, and label it as a door with a problem with the motor, or we'd assign normal attributes for a row of data and label it as a working door.
The server is built using Flask and runs on [Floydhub](https://floydhub.com). It has a Tensorflow Neural Network that was trained with the simulated data. The data is simulated in an Android app. The app generates the mock data, then sends it to the server. The server evaluates the data based on what it was trained with, adds the new data to its logs and training data, then responds with the fix it has predicted.
The android app takes the response, and displays it, along with the mock data it sent.
In short, an Android app simulates the opening and closing of a door and generates mock data about the door, which it sends every time the door "opens" to a server using a Flask REST API. The server has a trained Tensorflow neural network, which evaluates the data and responds with either "No Problems" if it finds the data to be normal, or a fix suggestion if it finds that the door has an issue.
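The mock-data side of this can be sketched in a few lines; the attribute names, ranges, and labels below are illustrative stand-ins for the values our app generates, not the real schema:

```python
import random

def simulate_door(broken_motor=False):
    """One row of mock door attributes, like the Android app generates."""
    motor_speed = random.uniform(80, 120)
    if broken_motor:
        motor_speed = random.uniform(150, 200)  # abnormally high motor speed
    return {
        "motor_speed": motor_speed,
        "open_time_s": random.uniform(1.5, 3.0),
        "label": "replace motor" if broken_motor else "no problems",
    }

def make_training_data(n_working, n_broken, seed=0):
    """Labeled rows a classifier could be trained on."""
    random.seed(seed)
    rows = [simulate_door(False) for _ in range(n_working)]
    rows += [simulate_door(True) for _ in range(n_broken)]
    return rows
```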
## Challenges we ran into
The hardest parts were:
* Simulating data (with no background in sliding doors, the concept of sliding doors sending data was pretty abstract).
* Learning how to use machine learning (turns out this isn't so easy) and implementing Tensorflow
* Running Tensorflow on a live server
## Accomplishments that we're proud of
## What we learned
* A lot about modern day sliding doors
* The basics of machine learning with Tensorflow
* Discovered Floydhub
## What we could have improved on
There are several things we could've done (and wanted to do) but either didn't have time or didn't have enough data for, e.g.:
* Instead of predicting a fix and returning it, the server could predict a set of potential fixes in order of likelihood, then send them to the technician, who could look into each suggestion and select the one that worked. This way, the neural network could learn a lot faster over time. (Currently, it adds the predicted fix to its training data, which would make for bad results.)
* Instead of having a fixed set of door "problems", we could have built the app so that in the beginning, when the neural network hasn't learned yet, it asks the technician for input every time they fix the door (so it can learn without the data we simulated, as this is what would have to happen in a real environment)
* We could have made a much better interface for the app
* We could have added support for a wider variety of doors (eg. different models of sliding doors)
* We could have had a more secure (encrypted) data transfer method
* We could have had a larger set of attributes for the door
* We could have factored more into decisions (for example, detecting a problem if a door opens, but never closes).
***
## Inspiration
Tinder but Volunteering
## What it does
Connects people to volunteering organizations. Makes volunteering fun, easy and social
## How we built it
React for web and React Native for mobile
## Challenges we ran into
So MANY
## Accomplishments that we're proud of
Getting a really solid idea and a decent UI
## What we learned
SO MUCH
## What's next for hackMIT
***
## 📖✈️Inspiration
For the past year and a half, our team has been quarantined in our homes, dreaming about the day when we would get to travel the world and explore. We made the most of our new lives by studying, working, starting new workout routines, cooking and, of course, watching a LOT of TikTok.
Believe it or not, though, when all those things got too boring, we actually picked up a book and read. In fact, according to Global English Editing, 35% of individuals worldwide have been reading more during the pandemic.
So our team wanted to find a way to capitalize on both the incoming travel industry boom and the recent upward trend in reading, by helping our users pick the perfect next travel destination based on their recent reading list.
Not only will this push individuals to travel to new places, but it can help remove the stress and anxiety around choosing a travel destination. With the world's most popular destinations likely to rise in price and congestion, new suggestions will be ever so important.
So join the movement, and **livre abroad**!
## 🏗️What it does
LivreAbroad is a web platform that allows users to record a list of books and receive a personalized list of the top travel destinations for them based on their reading. Not only will users receive amazing travel recommendations, but our application will also auto-populate a Pinterest board with pictures of the locations we recommend for them to visit.
## 🔨 How we built it
* **Frontend:** built in React using Material UI and deployed on Microsoft Cloud Services (Azure)
* **Backend:** Python backend
* **Database:** CockroachDB
* **API:** Hootsuite API
* **Machine Learning:** TensorFlow
* **Branding and Design:** Figma
LivreAbroad is integrated with the Hootsuite API to take advantage of its unique and easy-to-use features to make our application seamless and fully integrated into our user's life by connecting to their social media accounts. After our application makes a recommendation it adds a picture of our location to a unique Pinterest board automatically. This way the user can see all the places they are recommended. We can further expand our integrations with Hootsuite by sending Facebook messages to friends of the user to invite them on these trips. As you can see on the Hootsuite dashboard our application has scheduled to post these travel locations to my Pinterest Board.
LivreAbroad uses NLTK’s natural language processing library as well as Tensorflow Machine Learning to generate travel destinations based on your recent reads. We used CockroachDB to store each user’s books, as well as our list of cities mapped to adjectives and genres that match that city. Storing the genres and cities with CockroachDB made it really easy to get a list of city recommendations based on the genres of the books inputted. And we stored the cities in Cockroach along with their adjectives so that we could use our Machine Learning Algorithm to determine genres for cities without them.
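The genre-to-city lookup described above can be sketched with the matching step in plain Python; the city table, genre sets, and function names here are invented for illustration (in the real app these rows live in, and are queried from, CockroachDB):

```python
# Hypothetical sketch of the genre-to-city matching step; in the real
# app the city/genre rows are stored in and queried from CockroachDB.
CITY_GENRES = {
    "Edinburgh": {"fantasy", "mystery"},
    "Kyoto": {"literary", "historical"},
    "Reykjavik": {"thriller", "fantasy"},
}

def recommend_cities(book_genres, top_n=2):
    """Rank cities by how many of the reader's genres they match."""
    scores = {city: len(genres & set(book_genres))
              for city, genres in CITY_GENRES.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [city for city in ranked if scores[city] > 0][:top_n]
```

Under this toy table, a reader of fantasy mysteries would surface Edinburgh first, with Reykjavik as a partial match.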
## 🚫 Challenges we ran into
* working with the Hootsuite API to post images on Pinterest
* building out the logic for location recommendation modelling
* scraping book information
## 🏆Accomplishments that we're proud of
* the way we work together, and the fun we had as a team
* creating a working product that we might even use ourselves
## 🧠 What we learned
* Sleep and balance is important
* To check university class deadlines prior to the weekend
## 🔮What's next for LivreAbroad
It's in your hands! Next time you're looking for a new travel location, try LivreAbroad and see what amazing adventures will sprout from our recommendations.
With that in mind, we'd love to build in the ability for users to input the places they have travelled and then producing recommendations of books they can read. Pictures of these books would also be automatically posted to a Pinterest Board as well for the user. We would also provide them with links to the GoodReads summary and can also be integrated with Pinterest's API and Goodreads API. We also can further use Hootsuite to message friends about their travel plans and invite them to facilitate groups trips.
|
## Inspiration
Traveling can often be stressful, with countless variables to consider such as budget, destinations, activities, and personal interests. We aimed to create an app that simplifies travel planning, making it more enjoyable and personalized. Marco.ai was inspired by the need for a comprehensive travel companion that not only provides an engaging way to 'match' with various trip components but also offers real-time, personalized recommendations based on your location, ensuring users can make the most out of their trips.
## What it does
Marco.AI uses your geolocation on a trip to find food and activities that might be of interest to you. In comparison to competitors like Expedia and Google Search, Marco.AI is fed personalized data and your present location, and provides live recommendations based on your past preferences and adventures! After each experience, we ask for a 1-10 rating and use an initial survey to store a profile with your preferences.
## How we built it
To build Marco.ai, we integrated the You.com, Groq Llama3 8b, and GPT-4o APIs. For the backend, we utilized Python to handle user data, travel plans, and interactions with third-party APIs in a JSON format. For each experience, our model generates keywords relating to a 1-10 rating of the experience. Ex: a 10/10 for a beach would have keywords like "ocean, calm, relaxing." For the mobile app frontend, we chose React Native to connect to our model output and present an easy-to-use interface. Python's standard libraries for handling JSON data were employed to parse and save AI-generated recommendations. Additionally, we implemented functionalities to dynamically update user ratings and preferences, ensuring the app remains relevant and personalized.
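A rough sketch of the rating-and-keyword flow is below; the JSON schema and function names are our own invention for illustration, not the exact model output format:

```python
import json

def update_profile(profile, model_output_json):
    """Merge AI-generated keywords for a rated experience into the user
    profile, keeping each keyword's ratings so later recommendations can
    weight keywords from well-rated experiences more heavily."""
    # Assumed shape: {"experience": "beach", "rating": 10, "keywords": ["ocean", "calm"]}
    data = json.loads(model_output_json)
    for kw in data["keywords"]:
        profile.setdefault(kw, []).append(data["rating"])
    return profile

def keyword_score(profile, kw):
    """Average rating associated with a keyword, 0.0 if unseen."""
    ratings = profile.get(kw, [])
    return sum(ratings) / len(ratings) if ratings else 0.0
```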
## Challenges we ran into
As first-time hackers, we definitely ran into a few obstacles. Combining multiple APIs and learning how they work took a while to figure out and implement into our model. We started off by using MindsDB and trying to utilize a RAG model with You.com. However, we realized that for our purpose, we didn't need a large-scale model management platform and decided to move towards prompt engineering. Engineering our prompts for GPT-4o was a back-and-forth process of learning how to properly utilize the AI to give us output in a formatted way, making it easier to parse.
The most challenging aspect for our team was frontend design. This was our first experience with app development, and the learning curve was steep. We are happy that we are able to provide a functional prototype that can already be used by people while they plan a trip!
## What's next for marco.ai
In the future we hope to bring this model to life with payment integration and a feature to be able to swipe through and save different elements of your trip: hotel, flights, food based on your interests and budgets then pay at the end. Additionally, we aspire to transform Marco.ai into a social platform where users can share their past vacation experiences, likes, dislikes, and recommendations, creating a vibrant community of travel enthusiasts! Marco aims to pioneer a social app focused on encompassing travel experiences, filling a gap that has yet to be explored in the social media landscape.
|
## Inspiration
As a startup founder, it is often difficult to raise money, but the amount of equity that is given up can be alarming for people who are unsure if they want the gasoline of traditional venture capital. With VentureBits, startup founders take a royalty deal and dictate exactly the amount of money they are comfortable raising. Also, everyone can take risks on startups as there are virtually no starting minimums to invest.
## What it does
VentureBits allows consumers to browse a plethora of early stage startups that are looking for funding. In exchange for giving them money anonymously, the investors will gain access to a royalty deal proportional to the amount of money they've put into a company's fund. Investors can support their favorite founders every month with a subscription, or they can stop giving money to less promising companies at any time. VentureBits also allows startup founders who feel competent to raise just enough money to sustain them and work full-time as well as their teams without losing a lot of long term value via an equity deal.
## How we built it
We drew out the schematics on the whiteboards after coming up with the idea at YHack. We thought about our own experiences as founders and used that to guide the UX design.
## Challenges we ran into
We ran into challenges with finance APIs as we were not familiar with them. A lot of finance APIs require approval to use in any official capacity outside of pure testing.
## Accomplishments that we're proud of
We're proud that we were able to create flows for our app and even get a lot of our app implemented in react native. We also began to work on structuring the data for all of the companies on the network in firebase.
## What we learned
We learned that finance backends and logic to manage small payments and crypto payments can take a lot of time and a lot of fees. It is a hot space to be in, but ultimately one that requires a lot of research and careful study.
## What's next for VentureBits
We plan to see where the project takes us if we run it by some people in the community who may be our target demographic.
|

## Project Purpose:
Everyone has been in a situation where they need to talk to someone. Whether to unload the burden or ask for trusted advice, but does everyone have someone they trust enough to always talk to? Unfortunately, it doesn’t seem like it.
### Let the Data Speak:
* **Fig 1:** Current generation reports the highest levels of loneliness (79%)

* **Fig 2:** Number of close friendships significantly declined from 1990 to 2021

* **Fig 3:** The most alarming trend: the number of suicides has been rising since the 2000s

## Project Motivation:
I am an international student. Making friends was always hard, and I often felt isolated. I had no one to talk to about it, except my ex-girlfriend, but the long-distance relationship didn’t survive due to the distance. This added to my plate. I didn’t have close friends, and therapy was either expensive or had a long waitlist when provided by school. I found myself talking to ChatGPT about my feelings. I noticed it really helped me get things off my chest and shift from a dramatic point of view to a more realistic one. ChatGPT helped me overcome moments of isolation and emotional overloads.
However, there were a couple of issues. Every time I started talking to it, it had no idea who I was. It gave me generic responses until I shared more details deeper into the conversation. But looking back on these crisis moments, I believe ChatGPT helped me avoid having more worries and anxiety today.
## Conceptual Key Features:
I identified four main issues from my experience interacting with it in crisis moments, which I aim to solve in this project:
1. Lack of long-term memory.
2. Lack of short-term relevance.
3. Its communication style being more "chat-botish" instead of "humanish."
4. It never checked in on me.
These four factors are crucial to make it feel less like a generic chatbot and more like a caring friend.
## Technical Key Features:
1. **Long-Term Memory:**
* **Dynamic Categorical Memory:** After each interaction, tailored GPT agents update a set of summaries related to aspects of life:
```text
core_values_and_beliefs, mental_and_emotional_well_being, family_relationships, health_issues, personal_background,
aspirations_and_fears, profile_summary, strengths_and_weaknesses, romantic_relationships, social_circle_dynamics,
daily_routines, work_environment_and_dynamics, most_recent_challenges, most_recent_accomplishments,
emotional_triggers, communication_style, hobbies_and_interests, personal_development_and_skills,
financial_situation_and_goals, academic_performance_and_experiences, social_life_and_friendships,
physical_health_and_lifestyle, personal_interests_and_hobbies, past_traumas
```
* The existing summary in each category is updated from the current interaction. Once updated, a master summary is generated, which guides the next interaction.
2. **Short-Term Relevance:**
* **Targeted Retrieval:** During conversation, a specialized GPT agent decides whether pulling up any categorical summaries is relevant. It can return an empty list or a list of three most relevant summaries. If not empty, a summarizing agent integrates them into the conversation seamlessly, providing GPT with relevant information.
3. **Chatbot Style Improvement:**
* **Three Layers of Humanization:**
1. **Master Prompt:** Directs GPT to respond in a human-like, “SMS-chat-with-friend” style.
2. **Personal User Communication Preference:** Pulled from the "communication\_style" category and dynamically updated after each interaction.
3. **Humanize Filter:** This top-layer filter ensures the response style is relevant to the last ten messages, splits long answers into multiple messages, and removes unnecessary periods to emulate human SMS texting.
4. **Check-In Feature:**
* **Check-In Scheduler:** After each interaction, this agent evaluates if a check-in message would be beneficial. It generates a check-in message and schedules a follow-up (within 1-24 hours) based on the context. The check-in message is sent to the user at the scheduled time.
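To make the humanize filter from feature 3 concrete, here is a minimal sketch (the function name and length limit are invented) of splitting a long reply into SMS-sized messages and dropping trailing periods:

```python
import re

def humanize(reply, max_len=80):
    """Split a long model reply into short SMS-style messages and drop
    trailing periods, emulating casual texting (sketch of the top-layer
    filter; the real system also matches style to recent messages)."""
    sentences = re.split(r"(?<=[.!?])\s+", reply.strip())
    messages, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_len:
            messages.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        messages.append(current)
    return [m.rstrip(".") for m in messages]
```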
## Goals:
The COVID vaccine does not guarantee that you will never get COVID. Similarly, this bot does not promise to completely eliminate the issues of loneliness and isolation.
Just as the vaccine lowers the risk of severe complications from the virus, the goal of this project is to lower the levels of loneliness and isolation, potentially preventing the tragedy of loneliness-related suicides.
## Challenges:
* Prompt engineering was a huge one. The project contains 35 prompts for 10 tailored agents. Writing them was not easy to ensure the desired outcome from the agents.
* The biggest one was to make GPT talk in a human-like SMS conversation style. It took me multiple layers (3) and many failed prompts to craft a style that feels human.
* It was hard to ensure that the LLM agents return the output in the expected format, which could further be used in standard algorithms.
## Accomplishments That I'm Proud Of:
* I think it talks like a caring friend. Many prompts were tested to ensure that. As of right now, the style is nice and mellow.
* Targeted retrieval is analyzing what aspects of memory will be beneficial for the current convo, pulling, summarizing, and injecting them. Pretty sick.
* Check-in scheduling is a nice and needed feature. Analyze the convo and schedule a check-in, almost like a doctor.
## What I Learned:
* Prompt engineering 101.
* How to make AI LLM agents and ensure that the output from them is in a required specific format.
* How to make 10 agents work together cohesively: generate summaries, and based on generated summaries, schedule and insert check-ins into the database table.
## Future Vectors:
* Send memes to the user.
* Improve targeted retrieval call logic.
* Add scheduled user suggestions for deeper communication style personalization, such as: "How is my conversation style so far? Is it too optimal/dull? I can adjust."
* Check-in scheduling: Improve the time scheduling mechanism.
|
## Inspiration🎓
To make the context clear, we are three students who believe that meaningful work facilitates self-contentment, inspiration and projects that either help others or ourselves. Since we joined university (we are all first-years), we did not find what we did so meaningful anymore. We had great expectations since we were all enrolled in a top 100 university. Still, we were surprised to see how little they cared about our creativity, ideas, and overall potential. Things were too theoretical; some answers in different disciplines, which turned out to be correct, **were dismissed because they were not based on ”the old ways”**.
**Also, there was no easy way to cooperate with other disciplines.** Unless you go out partying, and by some luck, you are not an introvert, there is no easy way to meet someone. For example, maybe as a business student, you cannot really meet someone from computer science or design so easily. Unless you go out of your way, there is no simple way to meet like-minded people that could help you create something meaningful.
Obviously, there are apps such as Instagram, Facebook, LinkedIn, Reddit and Fiverr. Though, the way they are created and marketed, there is not much chance you find what you need. Almost no one answers messages from strangers on Instagram or Facebook. On LinkedIn, people are there for internships and money, pretending to be some fancy intellectual. Reddit is not really used by that many students, not as a way to meet students anyways, and if you need a specialist on Fiverr for anything, unless you are rich, good luck paying them.
We want to create a safe place for us, young individuals, who want something more than base our happiness on exam grades, work for a corporation for our entire life after 3-5 years of grinding for good marks, for subjects that don’t necessarily help you at anything or where we can meet students from all the disciplines, not only our own.
**“Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid” (actually not by Albert Einstein).**
We strongly advise people to pursue a university education; we are not against it, though we advise not to base your life upon it; it’s great as a safety net and learning place. We don’t like the old premise that you will do nothing with your life unless you finish your degree; we want to truly bring the youth to their true potential, not inhibit it. Each year, the crisis of meaning expands, especially in young individuals. If we cannot give rebirth to that meaning, we would love to at least contribute by providing students with another chance to create, develop and meet other great colleagues.
## What it does🔧
**StudHub** is an app with different features that facilitate flow in terms of building projects, meeting open-minded individuals and discussing with and helping fellow students.
**The main features** will be the main page, creating a post(with a specific template) and a search bar. The main page will consist of projects posted based on a template that doesn’t reveal too much, though enough to make people get the idea. For example, you are a computer science student with a great app idea. You are a great back-end developer though you know nothing in UI/UX design; respectively, you are pretty introverted and don’t know how to actually create a brand out of it as well. By posting your idea succinctly and selecting the skills you need for your projects, people with those specific skills, if they like your idea, can send you a request to get in touch with you. On the other hand, you can also search for these people in the search bar.
As stated above, if you are a creative individual, you can post your ideas and find the people you need for projects. On the other hand, if you want to do something meaningful with your free time but you are sick of volunteering work that just gives you a diploma and a line in your cv, this is a place for you too!
**Other Features** consist of:
1. “Reddit-style” thread (we are still developing/designing our own style for this), where you can ask questions, discuss different topics, and search for answers to whatever questions you have. It can be mostly based around student problems, passions, anything really! We want to create a safe space for meeting other students too!
2. Messages
3. Notifications
4. Profile (where you can select your skills and write about yourself; we want the real you! We’ll try to make sure no prospective employer can judge you based on that!)
5. Learning environment; in the future, here, we will create courses on practical topics that are important but not so often taught in university. (public speaking, leadership, etc.)
We initially viewed it as a website. We have started development on Firebase. Though during the hackathon we realized that our main target demographic(Students and young individuals) uses mostly their phones, which meant an app would be significantly better and preferred. Then we started working on Flutter.
## How we built it👨🎨
Using the Flutter framework, we built a cross-platform app (Android/IOS/Web). For the backend, we used Firebase as a provider, its Authentication module, and also Firebase Firestore for our NoSQL database.
## Challenges we ran into⚖️
The post feature is the greatest challenge, which is still in discussion. We want to build a template and guideline as straightforward as possible that does not disclose too many of the ideas that someone will just take it. We are thinking about how to implement NDAs when you get it to discuss with interested people, but right now, the focus is to create a working MVP.
The overall concept is pretty hard to explain without looking like we are against universities. We genuinely find them very important to one's individual development. We are actively working to find a way to tell our story and create a brand that supports creativity and the individual while not sounding too much against the traditional model.
Also, due to the many features we want to add, it is a constant struggle to make everything as efficient, aesthetic and easy as possible. We are pretty much in an age of attention deficit, so we want to build something neat and user-friendly.
## Accomplishments that we're proud of🧐
We are genuinely surprised by the fact that our app constantly worked almost without any problems throughout the whole 36h.
We have never fought on any subject; we constantly built on each other's ideas and realized that we work lovely as a team. Any problems we had were discussed calmly and found a middle way.
## What we learned 👩🏫
A new programming language: we had never worked with Dart before the start of the hackathon!
A new framework: it was our first time using Flutter as well!
How to listen to each other more without interrupting. And most importantly, working without sleep for 36 hours and a lot of caffeine! :D
## What's next for StudHub🚀
We have discussed this thoroughly, and after the Hackathon, we need to find an affordable UX/UI designer (ironically enough, our app would have been so helpful for this) to either join our project or at least build a starting page where we can have a subscription list, where we could make the "proof of concept" for our idea.
Eventually, we will send emails with updates on the project. During this Hackathon, we realized how in love we are with this idea, and we want to continue building on it and eventually launching it.
For now, our focus will be to design it as efficient, aesthetic and user-friendly and make it functional. We want to go through what we built during the Hackathon to analyze what we did out of speed and what we want to keep. There are many things that need more thought.
We also want to add "Meet your Mentor"! In an even further future, we want to implement a feature where our dear users can talk with experienced people from a diverse array of subjects, ask for advice on a making project, open a business, start research, etc.
We hope that our concept will catch on and become a big trend, eventually leading to many young individuals with great potential to meet each other and build beautiful things! I personally feel lucky to have met the people I have worked within this Hackathon because they became some of my best friends that I spend much time with. We hope to take the luck element out and make it easier for people to meet such beautiful people, that not only they will create cool projects and businesses, but they will become maybe friends for life.
Do not forget, creating stuff is cool but meeting great people is sometimes even cooler!
|
## Inspiration
There is a need for an electronic health record (EHR) system that is secure, accessible, and user-friendly. Currently, hundreds of EHRs exist, and different clinical practices may use different systems. If a patient requires an emergency visit to a certain physician, the physician may be unable to access important records and patient information efficiently, requiring extra time and resources that strain the healthcare system. This is especially true for patients traveling abroad, where doctors in one country may be unable to access a centralized healthcare database in another.
In addition, there is a strong potential to utilize the data available for improved analytics. In a clinical consultation, patient description of symptoms may be ambiguous and doctors often want to monitor the patient's symptoms for an extended period. With limited resources, this is impossible outside of an acute care unit in a hospital. As access to the internet is becoming increasingly widespread, patients may be able to self-report certain symptoms through a web portal if such an EHR exists. With a large amount of patient data, artificial intelligence techniques can be used to analyze the similarity of patients to predict certain outcomes before adverse events happen such that intervention can occur timely.
## What it does
myHealthTech is a block-chain EHR system that has a user-friendly interface for patients and health care providers to record patient information such as clinical visitation history, lab test results, and self-reporting records from the patient. The system is a web application that is accessible from any end user that is approved by the patient. Thus, doctors in different clinics can access essential information in an efficient manner. With the block-chain architecture compared to traditional databases, patient data is stored securely and anonymously in a decentralized manner such that third parties cannot access the encrypted information.
Artificial intelligence methods are used to analyze patient data for prognostication of adverse events. For instance, a patient's reported mood scores are compared to a database of similar patients that have resulted in self-harm, and myHealthTech will compute a probability that the patient will trend towards a self-harm event. This allows healthcare providers to monitor and intervene if an adverse event is predicted.
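A toy illustration of this patient-similarity idea follows; the cohort data and the inverse-distance weighting are invented here for the sketch, not the production engine:

```python
import math

def risk_probability(patient_moods, cohort):
    """Estimate adverse-event probability by comparing a patient's recent
    mood scores to labelled cohort trajectories (label 1 = self-harm event
    occurred, 0 = none). Toy similarity-weighted average, not the real model."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Weight each cohort member's label by inverse distance to the patient.
    weights = [(1.0 / (1.0 + dist(patient_moods, m)), label) for m, label in cohort]
    total = sum(w for w, _ in weights)
    return sum(w * label for w, label in weights) / total
```

A patient whose mood trajectory sits near the self-harm cohort scores close to 1, prompting the provider dashboard to flag them for follow-up.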
## How we built it
The block-chain EHR architecture was written in Solidity using Truffle, TestRPC, and Remix. The web interface was written in HTML5, CSS3, and JavaScript. The artificial intelligence predictive behavior engine was written in Python.
## Challenges we ran into
The greatest challenge was integrating the back-end and front-end components. We had challenges linking smart contracts to the web UI and executing the artificial intelligence engine from a web interface. Several of these challenges require compatibility troubleshooting and running a centralized python server, which will be implemented in a consistent environment when this project is developed further.
## Accomplishments that we're proud of
We are proud of working with novel architecture and technology, providing a solution to solve common EHR problems in design, functionality, and implementation of data.
## What we learned
We learned the value of leveraging the strengths of different team members from design to programming and math in order to advance the technology of EHRs.
## What's next for myHealthTech?
Next is the addition of further self-reporting fields to increase the robustness of the artificial intelligence engine. In the case of depression, there are clinical standards from the Diagnostic and Statistical Manual that identify markers of depression such as mood level, confidence, energy, and feelings of guilt. By monitoring these values for individuals who have recovered, are depressed, or inflict self-harm, the AI engine can predict the behavior of new individuals much more accurately by applying logistic regression to the data and adopting a deep learning approach.
There is an issue with the inconvenience of reporting symptoms. Hence, a logical next step would be to implement smart home technology, such as an Amazon Echo, for the patient to interact with for self reporting. For instance, when the patient is at home, the Amazon Echo will prompt the patient and ask "What would you rate your mood today? What would you rate your energy today?" and record the data in the patient's self reporting records on myHealthTech.
These improvements would further myHealthTech's capability as a highly dynamic EHR with strong analytical capabilities to understand and predict patient outcomes and improve treatment options.
|
## Inspiration
dwarf fortress and stardew valley
## What it does
simulates farming
## How we built it
quickly
## Challenges we ran into
learning how to farm
## Accomplishments that we're proud of
making a frickin gaem
## What we learned
games are hard
farming is harder
## What's next for soilio
make it better
|
# CourseAI: AI-Powered Personalized Learning Paths
## Inspiration
CourseAI was born from the challenges of self-directed learning in our information-rich world. We recognized that the issue isn't a lack of resources, but rather how to effectively navigate and utilize them. This inspired us to leverage AI to create personalized learning experiences, making quality education accessible to everyone.
## What it does
CourseAI is an innovative platform that creates personalized course schedules on any topic, tailored to the user's time frame and desired depth of study. Users input what they want to learn, their available time, and preferred level of complexity. Our AI then curates the best online resources into a structured, adaptable learning path. Key features include:
* AI-driven content curation from across the web
* Personalized scheduling based on user preferences
* Interactive course customization through an intuitive button-based interface
* Multi-format content integration (articles, videos, interactive exercises)
* Progress tracking with checkboxes for completed topics
* Adaptive learning paths that evolve based on user progress
## How we built it
We developed CourseAI using a modern, scalable tech stack:
* Frontend: React.js for a responsive and interactive user interface
* Backend Server: Node.js to handle API requests and serve the frontend
* AI Model Backend: Python for its robust machine learning libraries and natural language processing capabilities
* Database: MongoDB for flexible, document-based storage of user data and course structures
* APIs: Integration with various educational content providers and web scraping for resource curation
The AI model uses advanced NLP techniques to curate relevant content and generate optimized learning schedules. We implemented machine learning algorithms for content quality assessment and personalized recommendations.
## Challenges we ran into
1. API Cost Management: Optimizing API usage for content curation while maintaining cost-effectiveness.
2. Complex Scheduling Logic: Creating nested schedules that accommodate various learning styles and content types.
3. Integration Complexity: Seamlessly integrating diverse content types into a cohesive learning experience.
4. Resource Scoring: Developing an effective system to evaluate and rank educational resources.
5. User Interface Design: Creating an intuitive, button-based interface for course customization that balances simplicity with functionality.
## Accomplishments that we're proud of
1. High Accuracy: Achieving a 95+% accuracy rate in content relevance and schedule optimization.
2. Elegant User Experience: Designing a clean, intuitive interface with easy-to-use buttons for course customization.
3. Premium Content Curation: Consistently sourcing high-quality learning materials through our AI.
4. Scalable Architecture: Building a robust system capable of handling a growing user base and expanding content library.
5. Adaptive Learning: Implementing a flexible system that allows users to easily modify their learning path as they progress.
## What we learned
This project provided valuable insights into:
* The intricacies of AI-driven content curation and scheduling
* Balancing user preferences with optimal learning strategies
* The importance of UX design in educational technology
* Challenges in integrating diverse content types into a cohesive learning experience
* The complexities of building adaptive learning systems
* The value of user-friendly interfaces in promoting engagement and learning efficiency
## What's next for CourseAI
Our future plans include:
1. NFT Certification: Implementing blockchain-based certificates for completed courses.
2. Adaptive Scheduling: Developing a system for managing backlogs and automatically adjusting schedules when users miss sessions.
3. Enterprise Solutions: Creating a customizable version of CourseAI for company-specific training.
4. Advanced Personalization: Implementing more sophisticated AI models for further personalization of learning paths.
5. Mobile App Development: Creating native mobile apps for iOS and Android.
6. Gamification: Introducing game-like elements to increase motivation and engagement.
7. Peer Learning Features: Developing functionality for users to connect with others studying similar topics.
With these enhancements, we aim to make CourseAI the go-to platform for personalized, AI-driven learning experiences, revolutionizing education and personal growth.
|
## What it does
"ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles, and summarizing your words into bullet points.
## How We built it
Our project comprises many interconnected components, which we detail below:
#### Formatting Engine
To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom JavaScript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required.
#### Speech-to-Text
We use Google's speech recognition (via the Web Speech API) to transcribe audio from the microphone of the presenter's laptop. Mobile phones currently do not support the spec's continuous-audio mode, so we process audio on the presenter's laptop instead. Transcription is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over WebSockets to be processed.
#### Topic Analysis
Fundamentally, we needed a way to determine whether a given sentence includes a request for an image. We gathered a repository of sample sentences from BBC news articles as "no" examples, and manually curated a list of "yes" examples. We then used fastText, Facebook's deep-learning text classification library, to train a custom neural network that could perform this text classification.
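fastText's supervised mode expects one training example per line, with each label prefixed by `__label__`. A minimal sketch of preparing data in that format (the actual BBC "no" corpus and curated "yes" list are not reproduced here, and the helper name is our own):

```python
def to_fasttext_line(label: str, sentence: str) -> str:
    # fastText reads "__label__<label> <text>" lines for supervised training.
    return f"__label__{label} {sentence.strip()}"

examples = [
    ("no", "The markets closed lower on Friday."),
    ("yes", "Here you can see a picture of a golden retriever."),
]
lines = [to_fasttext_line(label, text) for label, text in examples]
print(lines[1])  # __label__yes Here you can see a picture of a golden retriever.

# Training would then be a single call with the fasttext package, e.g.:
# model = fasttext.train_supervised(input="train.txt")
```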
#### Image Scraping
Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part of speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image urls are then sent over websockets to be rendered on screen.
#### Graph Generation
Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real-time.
#### Sentence Segmentation
When we receive text back from the speech recognition API, it doesn't naturally include periods where we pause in our speech. This can give more conventional NLP analysis (like part-of-speech tagging) some trouble, because the text is grammatically incorrect. We used a sequence-to-sequence transformer architecture, *seq2seq*, and transfer-learned a new head capable of classifying the borders between sentences. This let us add punctuation back into the text before the rest of the processing pipeline ran.
#### Text Title-ification
Using Part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title to a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs. If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title.
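A greatly simplified stand-in for that sub-phrase search (the real system walks dependency trees; here we assume tokens arrive already POS-tagged and prefer the shortest 1-5 word span containing both a verb and a noun; all names are ours):

```python
def pick_title(tagged_tokens):
    # tagged_tokens: list of (word, Penn-Treebank-style tag) pairs.
    best = None
    for start in range(len(tagged_tokens)):
        for end in range(start + 1, min(start + 6, len(tagged_tokens) + 1)):
            span = tagged_tokens[start:end]
            has_verb = any(tag.startswith("VB") for _, tag in span)
            has_noun = any(tag.startswith("NN") for _, tag in span)
            if has_verb and has_noun and (best is None or len(span) < len(best)):
                best = span
    return " ".join(word for word, _ in best) if best else None

tokens = [("neural", "JJ"), ("networks", "NNS"), ("learn", "VBP"),
          ("features", "NNS"), ("automatically", "RB")]
print(pick_title(tokens))  # networks learn
```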
#### Text Summarization
When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous.
#### Mobile Clicker
Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets.
#### Internal Socket Communication
In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides.
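A stripped-down sketch of that internal socket hop using only the standard library (the stand-in "model" is ours; the real process runs the trained classifier and NLP pipeline):

```python
import socket
import threading

def handle_model_request(text: str) -> str:
    # Stand-in for the ML worker's prediction logic.
    return "IMAGE_REQUEST" if "picture of" in text else "NORMAL"

def model_server(sock: socket.socket) -> None:
    # Accept one connection, classify the received text, reply, and exit.
    conn, _ = sock.accept()
    with conn:
        text = conn.recv(4096).decode("utf-8")
        conn.sendall(handle_model_request(text).encode("utf-8"))

def query_model(host: str, port: int, text: str) -> str:
    # What the web server does when a relevant WebSockets message arrives.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(text.encode("utf-8"))
        return conn.recv(4096).decode("utf-8")

# Start a one-shot model server on an ephemeral port and query it.
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
worker = threading.Thread(target=model_server, args=(server_sock,))
worker.start()
result = query_model("127.0.0.1", port, "here you can see a picture of a dog")
worker.join()
server_sock.close()
print(result)  # IMAGE_REQUEST
```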
## Challenges We ran into
* Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis.
* The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop.
## Accomplishments that we're proud of
* Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
* Working on an unsolved machine learning problem (sentence simplification)
* Connecting a mobile device to the laptop browser’s mic using WebSockets
* Real-time text analysis to determine new elements
## What's next for ImpromPPTX
* Predict what the user intends to say next
* Scraping Primary sources to automatically add citations and definitions.
* Improving text summarization with word reordering and synonym analysis.
|
## Inspiration
As we began to look at the TreeHacks 10 tracks, all team members were immediately drawn to the sustainability track. In a world with increasing temperatures, excessive greenhouse gas emissions, biodiversity loss, and pollution, among numerous other ecological challenges, we know we all have an individual responsibility to help preserve and revitalize our environment. As a result, we began brainstorming how we could individually help contribute to a more sustainable future. Our first thoughts centered around how we could encourage contributions to environmental nonprofits. Still, we struggled to name localized organizations that could impact on an individual scale.
With three of us originally from Iowa, we did a quick Google search to find potential organizations whose mission aligned with our goal and found over 20 (including 3 within 20 minutes of our hometown) around the state that could utilize resources from people in various ways. The contributions they were seeking primarily consisted of people volunteering and monetary donations. If this was the case in Iowa, we knew most other states would likely have even more available opportunities. But how could we make people aware of them? Looking at the communities of people we know, it’s clear there is no shortage of people interested in environmental sustainability. But just being passionate about an issue doesn’t lead to improvement. A streamlined way to identify tangible ways to catalyze change, though? That is what’s needed to bridge the gap between someone’s desire to make change and their ability to follow through. We realized our platform’s goal: to allow organizations to make themselves known to those people who already have a planted **spark** and want to help preserve their environment for future generations.
## What it does
**Your spark can create change.**
Spark is a platform that allows environmental organizations to create a campaign outlining their mission, vision, and goals to encourage people with an existing **spark** who don’t know what to do with their desire to make a difference to join their projects. Our platform works in two parts. First, organizations post their campaign, which is then added to a database holding all posted campaigns. Next, contributors can browse available campaigns to find one(s) that resonate with their goals. Once they identify organizations that do so, they can identify which of the organization’s goals they are inspired to contribute to and gain spark points. These spark points work to (1) allow contributors to see the tangible impact they are having as a continuous endeavor and (2) motivate these individuals to continue their contributions with more organizations.
## How we built it
Iterations:
1. We started with a basic outline of listing an organization and its needs and allowing an individual to sign up to help.
2. To explore the broader stakeholders beyond just contributors, we spoke to an Executive Director at a nonprofit local to us (someone who may make a campaign page). We learned what features would make this platform more useful for them:
* “Because it is so hard for nonprofits to receive funding [as the application process is often long and rarely fruitful because of the number of competitors], individual contributions go a long way,” so we made the monetary donation aspect the first built-out type of contribution with future plans to build out a page showing all volunteering opportunities
* Within the organization’s dashboard view, they should be able to view and manage all of their own campaigns, so we added this functionality
* “Nonprofits benefit greatly from being able to receive feedback from participants,” so in a future iteration, we hope to allow some form of communication between participants and the organization (if valuable and often not used, this may be required for someone to earn their spark points)
3. We then showed our product to a hackathon mentor to gain more feedback on how to address the pain points of a potential user
* She suggested the usefulness of being able to “visually” observe opportunities “physically nearby.” We used this feedback and incorporated the Google Maps API to display the physical locations of the opportunities. She also noted this would remind users “how accessible” it is to make change.
4. After speaking to friends (people who would hold a future contributor role), we added in a few more features that could make our platform better encourage people:
* A link to the non-profit website, if applicable, to allow for deeper learning. To encourage an easy-to-use UI (especially for those more tech-averse), we wanted to avoid a cluttered card and instead redirect contributors to the organization's website.
* The ability to favorite a non-profit for future engagements, which we hold as a goal for a future iteration
* Rotating information on our home page to serve as motivation for contributors (also not yet implemented but planned for next iteration)
Technology Used:
(1) We used Next.js to build and host the front-end portion of our application. This decision allows us to scale easily with a growing user base. Next.js is a very popular framework with a lot of open-source support, which let us build the website quickly.
(2) We use Convex for our API and backend. We enjoyed their presentation during the opening ceremony, which convinced us to use its extensive functionality. Its lightweight nature helped us develop much more quickly than what would’ve been required with other software.
(3) Ant Design is a very popular UI framework and made it easy to translate our Figma designs into our final product with a clean, modern interface.
(4) Visual Studio Code + extensions to streamline our development environment
## What makes us different
The main features of Spark that set it apart from existing services can be grouped into three main parts:
1. The focus on individual contributions to environmental challenges
* We couldn't find any existing websites focused specifically on sustainability. Many environmental organizations keep their donation and/or funding pages in hard-to-find places, with little tangible impact associated with them.
2. The motivation through point earning
* This feature is a unique motivator we didn’t find in other platforms. People like to do things when they feel like it is worthwhile. Providing an ability to track the “significance” of their cumulative – not just one-time – contributions does precisely that.
3. Allowing individuals to see that their monetary or physical donations are tied to specific goals, not just a cause
* By having organizations outline why they’re asking for contributions in a certain manner, people are more aware of their individual part in these large-scale problems. A general donation fund or volunteering list is less valuable for contributing individuals.
## Challenges we ran into
* Configuration and integration challenges like getting things to talk to each other, installing libraries
* Agreeing with design and idea choices like UI, brand, and features
* Working through exhaustion, stress, and frustration at times
* Navigating a new environment of learning and networking
## Accomplishments that we're proud of
* Our app successfully makes roundtrips! Data is rendered from the database to our front end, and we can successfully demonstrate our MVP
* 3/4 of our team's first hackathon project!
* We met new people with great ideas and enjoyed sharing them throughout the weekend
* We got the chance to learn and experiment with technologies new to us (Next and Convex) and were successful in making them work
* We had fun!!!
* We were a successful team and enjoyed collaborating together :D
## What we learned
About Sustainability:
(1) What do nonprofits and organizations need when looking for support?
* Access to a large user base: This can be especially key for smaller organizations and the funding of their projects
* Passionate contributors: they are the key to spreading ideas through word of mouth, and this is our target demographic for our platform
(2) How beneficial individual change can be
* Ecological organizations have already done the research: they know what needs to be done to improve our environments. Once they’ve identified useful ways to use people, getting people to them is the new important goal.
* The more involved people are individually, the better equipped they are to elect representatives that can further change on a more national and even global scale. Individuals spark greater contributions.
About Technology:
(3) The web development space is constantly evolving
* Many tools out there are robust for scaling applications with growing users. Frameworks for the backend like Convex make spinning up a cloud server a breeze, with frameworks like Next.js ensuring that front-end applications are production-ready.
* It’s crucial that before starting a project, the needs of the project are evaluated to find the right tech stack to handle them. Additionally, when encountering bugs or issues, evaluate the simplest potential culprits first, as the technologies being used are well-tested and are unlikely to be the issue.
## What's next for Spark
We want to see Spark develop into a general platform for all kinds of organizations and allow them to receive support in ways not currently built into the website. While our motivation started with achieving improvements in ecology, we learned that many nonprofits and small organizations also struggle with day-to-day costs for small things, even things as simple as plates or cups. Schools have underprivileged students who struggle to get the school supplies they need. Both of these cases could be addressed by an additional contribution mode: purchasing individual items.
Spark has the potential to become *the* platform for social good: whichever cause you want to help, your ability to contribute is *literally* at your fingertips.
Once this generalization is implemented, we hope to add a recommendation feature that will allow contributors to be matched with projects that they will likely find fulfilling based on their previous interests and engagements.
## Broader Stakeholders, Context, and Ethicality
### Accessibility:
While the platform is hosted online, all it asks of people is their time or money. Money can be a barrier to contributing to projects one resonates with, so our platform encourages people to give their time if that is more accessible to them. And while not everyone has internet access at home, people can easily reach the platform from a public source (e.g., a library), allowing them to make individual contributions in whichever way best suits their desire and ability.
Our “add a campaign” process is extremely simple for an organization that may not have tech-savvy employees. Organizations have to provide as little information as they want to, and in a few simple clicks, they will be listed with minimal technology required. Additionally, existing organizations may see Spark as a competition. Still, Spark works to elevate the organization’s existing issues to be helped by a broader range of people looking to better their community and environment.
### Contributor Motivations:
A potential unintended consequence is that some contributors may be motivated mainly by the gamification of the process through spark points and hours, but regardless of motivation, contributions are impactful. If people begin to look for ways to maximize their points or hours, they will ultimately create more change for the better.
### Other Considerations and Research
Our largest stakeholders outside contributors are the organizations that post to Spark. By speaking with someone so involved with a non-profit, its funding, and difficulties finding volunteers and donors, we better understood the pain points of potential users on both ends of the platform.
Addressing environmental issues is the responsibility of all people. For one, poor environmental conditions disproportionately harm marginalized individuals. These communities are more likely to be exposed to lead, air pollution, hazardous waste, and extreme temperatures. We have individual responsibilities to improve the state of our environment to minimize this disproportionate impact, and it starts with awareness and individual contributions. Secondly, there is a moral obligation to leave the world livable for future generations. Without intervention and prioritizing of these projects, we don’t follow this ethical duty.
Projects identified by local organizations often occur where the need for better conditions is very visible: beach or park cleanups, invasive species removal, recycling, or food waste minimization, to name a few. They can also work to improve these places through projects such as tree planting or beautification in neglected neighborhoods. Improving living conditions is a social issue that can help decrease the disproportionate impacts faced by those living in areas identified by environmentally focused organizations.
There are potential ethical concerns that we also must consider with the creation of our platform:
1. A bias in which organizations are displayed for prospective contributors. To combat this, we want to incorporate technology that cycles organizations' recommendations to individuals. Especially in larger cities, we wouldn’t want small organizations to lose their ability to gain contributors at the expense of larger organizations on name alone.
2. The risk of greenwashing. Often, organizations looking to gain participation may falsely indicate an interest in ecological progress and then take away attention from organizations focused on improving sustainable practices. This would need to be handled delicately because if a vetting process to allow organizations to add a campaign is added, there may be bias in which types of organizations are filtered out or find the additional steps technologically challenging.
3. Community displacement. Projects that take place in a community may displace the residents of that area. To prevent this, there may be terms and conditions requiring organizations to ensure that their projects meet specific standards that don’t cause issues in the areas they are working with. Largely, though, ecological organizations are very aware of the footprint they leave in places where they work and are careful to be considerate of these communities.
Sources:
<https://www.apha.org/Topics-and-Issues/Environmental-Health/Environmental-Justice>
<https://iep.utm.edu/envi-eth/>
|
# 💡 Inspiration
Meeting new people is an excellent way to broaden your horizons and discover different cuisines. Dining with others is a wonderful opportunity to build connections and form new friendships. In fact, eating alone is one of the primary causes of unhappiness, second only to mental illness and financial problems. Therefore, it is essential to make an effort to find someone to share meals with. By trying new cuisines with new people and exploring new neighbourhoods, you can make new connections while enjoying delicious food.
# ❓ What it does
PlateMate is a unique networking platform that connects individuals in close proximity and sets up impromptu meetings over great food! It enables people to explore new cuisines and meet new individuals by using Cohere to process human-written text and discern a person's preferences, interests, and other attributes. This data is then aggregated to drive a matching algorithm that pairs users. Alongside matchmaking, PlateMate uses Google APIs to highlight nearby restaurant options that fit users' budgets. The app's recommendations take a user's budget into account to help regulate spending habits and make managing finances easier. PlateMate considers many factors to ensure that users have an enjoyable and reliable experience on the platform.
# 🚀 Exploration
PlateMate provides opportunities for exploration by expanding social circles with interesting individuals with different life experiences and backgrounds. You are matched to other nearby users with similar cuisine preferences but differing interests. Restaurant suggestions are also provided based on your characteristics and your match’s characteristics. This provides invaluable opportunities to explore new cultures and identities. As the world emerges from years of lockdown and the COVID-19 pandemic, it is more important than ever to find ways to reconnect with others and explore different perspectives.
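The write-up doesn't give the actual matching formula, but the idea above (shared cuisine preferences, differing interests) can be sketched as a simple score; the weights and names here are illustrative assumptions only:

```python
def match_score(user_a: dict, user_b: dict) -> float:
    # Reward cuisines in common, and (per the description) reward
    # interests that differ, to encourage exploration.
    shared_cuisines = len(user_a["cuisines"] & user_b["cuisines"])
    differing_interests = len(user_a["interests"] ^ user_b["interests"])
    return shared_cuisines + 0.5 * differing_interests  # assumed weighting

a = {"cuisines": {"ramen", "thai"}, "interests": {"hiking"}}
b = {"cuisines": {"ramen"}, "interests": {"board games"}}
print(match_score(a, b))  # 2.0
```

The best match for a user would then simply be the nearby candidate maximizing this score.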
# 🧰 How we built it
**React, Tailwind CSS, Figma**: The client side of our web app was built using React and styled with Tailwind CSS based on a high-fidelity mockup created on Figma.
**Express.js**: The backend server was made using Express.js and managed routes that allowed our frontend to call third-party APIs and obtain results from Cohere’s generative models.
**Cohere**: User-specific keywords were extracted from brief user bios using Cohere’s generative LLMs. Additionally, after two users were matched, Cohere was used to generate a brief justification of why the two users would be a good match and provide opportunities for exploration.
**Google Maps Platform APIs**: The Google Maps API was used to display a live and dynamic map on the homepage and provide autocomplete search suggestions. The Google Places API obtained lists of nearby restaurants, as well as specific information about restaurants that users were matched to.
**Firebase**: User data for both authentication and matching purposes, such as preferred cuisines and interests, were stored in a Cloud Firestore database.
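The Cohere keyword-extraction step above is essentially prompt-driven; a minimal sketch of building such a prompt (the wording and helper name are our own, not PlateMate's actual prompts):

```python
def build_keyword_prompt(bio: str) -> str:
    # Ask the model to pull out cuisines and interests as keywords.
    return (
        "Extract the cuisines and interests mentioned in this bio "
        "as a comma-separated list.\n\n"
        f"Bio: {bio}\n"
        "Keywords:"
    )

prompt = build_keyword_prompt("I love ramen, hiking, and board games.")
print(prompt.splitlines()[0])

# With the cohere SDK (requires an API key), the generation call would
# look roughly like:
# import cohere
# co = cohere.Client("YOUR_API_KEY")
# response = co.generate(prompt=prompt, max_tokens=50)
```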
# 🤔 Challenges we ran into
* Obtaining desired output and formatting from Cohere with longer and more complicated prompts
* Lack of current and updated libraries for the Google Maps API
* Creating functioning Express.js routes that connected to our React client
* Maintaining a cohesive and productive team environment when sleep deprived
# 🏆 Accomplishments that we're proud of
* This was the first hackathon for two of our team members
* Creating a fully-functioning full-stack web app with several new technologies we had never touched before, including Cohere and Google Maps Platform APIs
* Extracting keywords and generating JSON objects with a high degree of accuracy using Cohere
# 🧠 What we learned
* Prompt engineering, keyword extraction, and text generation in Cohere
* Server and route management in Express.js
* Design and UI development with Tailwind CSS
* Dynamic map display and search autocomplete with Google Maps Platform APIs
* UI/UX design in Figma
* REST API calls
# 👉 What's next for PlateMate
* Provide restaurant suggestions that are better tailored to users’ budgets by using Plaid’s financial APIs to accurately determine their average spending
* Connect users directly through an in-app chat function
* Friends and network system
* Improved matching algorithm
|
## Inspiration
**The Tales of Detective Toasty** draws deep inspiration from visual novels like **Persona** and **Ghost Trick**, and we wanted to pay homage to our childhood games through the fusion of art, music, narrative, and technology. Our goal was to explore the possibilities of AI within game development. We used AI to create detailed character sprites, immersive backgrounds, and engaging slide art. This approach allows players to engage deeply with the game's characters, navigating dialogues and piecing together clues in a captivating murder mystery that feels personal and expansive. By enriching the narrative in this way, we invite players into Detective Toasty's charming yet suspense-filled world.
## What It Does
In **The Tales of Detective Toasty**, players step into the shoes of the famous detective Toasty, trapped on a boat with four suspects in a gripping AI-powered murder mystery. The game challenges you to investigate suspects, explore various rooms, and piece together the story through your interactions. Your AI-powered assistant enhances these interactions by providing dynamic dialogue, ensuring that each playthrough is unique. We aim to expand the game with more chapters and further develop inventory systems and crime scene investigations.
## How We Built It
Our project was crafted using **Ren'Py**, a Python-based visual novel engine, together with Python itself. We wrote our scripts from scratch, given Ren'Py's niche adoption. Integrating the ChatGPT API allowed us to develop a custom AI assistant that adapts dialogue based on the player's questions, enhancing the storytelling as it is primed on the world of Detective Toasty. Visual elements were created using DALL-E and refined with Procreate, while Superimpose helped in adding transparent backgrounds to sprites. The auditory landscape was enriched with music and effects sourced from YouTube, and the UI was designed with Canva.
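As a rough sketch of how such an assistant can be wired up from Ren'Py's Python layer, the game could assemble chat messages like the following (names, prompt text, and structure are our assumptions; the real game also carries conversation history and story state):

```python
def build_assistant_messages(world_summary: str, player_question: str) -> list:
    # A system message primes the model on the game's world; the
    # player's question becomes the user message.
    return [
        {"role": "system",
         "content": "You are the AI assistant in Detective Toasty's world. "
                    + world_summary},
        {"role": "user", "content": player_question},
    ]

messages = build_assistant_messages(
    "Detective Toasty is trapped on a boat with four murder suspects.",
    "Who had access to the engine room?",
)
# These messages would then be sent to the ChatGPT API via the openai
# package's chat completion endpoint, and the reply shown as dialogue.
```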
## Challenges We Ran Into
Our main challenge was adjusting the ChatGPT prompts to ensure the AI-generated dialogue fit seamlessly within our narrative, maintaining consistency and advancing the plot effectively. As this was our first hackathon, we also faced a steep learning curve with tools like ChatGPT and other OpenAI utilities, and with learning and debugging Ren'Py's functionality. We struggled with character design transitions and refining our artwork, which taught us valuable lessons through trial and error. We also had difficulty with character placement, sizing, and the overall UI, which meant working through each component of an entirely new framework from scratch to solve the issue.
## Accomplishments That We’re Proud Of
Participating in our first hackathon and pushing the boundaries of interactive storytelling has been rewarding. We are proud of our teamwork and the gameplay experience we've managed to create, and we're excited about the future of our game development journey.
## What We Learned
This project sharpened our skills in game development under tight deadlines and understanding of the balance required between storytelling and coding in game design. It also improved our collaborative abilities within a team setting.
## What’s Next for The Tales of Detective Toasty
Looking ahead, we plan to make the gameplay experience better by introducing more complex story arcs, deeper AI interactions, and advanced game mechanics to enhance the unpredictability and engagement of the mystery. Planned features include:
* **Dynamic Inventory System**: An inventory that updates with both scripted and AI-generated evidence.
* **Interactive GPT for Character Dialogues**: Enhancing character interactions with AI support to foster a unique and dynamic player experience.
* **Expanded Storyline**: Introducing additional layers and mysteries to the game to deepen the narrative and player involvement.
* *and more...* :D
|
## Inspiration
Every few days, a new video of a belligerent customer refusing to wear a mask goes viral across the internet. On neighborhood platforms such as NextDoor and local Facebook groups, neighbors often recount their sightings of the mask-less minority. When visiting stores today, we must always remain vigilant if we wish to avoid finding ourselves embroiled in a firsthand encounter. With the mask-less on the loose, it’s no wonder that the rest of us have chosen to minimize our time spent outside the sanctuary of our own homes.
For anti-maskers, words on a sign are merely suggestions—for they are special and deserve special treatment. But what can’t even the most special of special folks blow past? Locks.
Locks are cold and indiscriminate, providing access to only those who pass a test. Normally, this test is a password or a key, but what if instead we tested for respect for the rule of law and order? Maskif.ai does this by requiring masks as the token for entry.
## What it does
Maskif.ai allows users to transform old phones into intelligent security cameras. Our app continuously monitors approaching patrons and uses computer vision to detect whether they are wearing masks. When a mask-less person approaches, our system automatically triggers a compatible smart lock.
This system requires no human intervention to function, saving employees and business owners the tedious and at times hopeless task of arguing with an anti-masker.
Maskif.ai provides reassurance to staff and customers alike with the promise that everyone let inside is willing to abide by safety rules. In doing so, we hope to rebuild community trust and encourage consumer activity among those respectful of the rules.
## How we built it
We use Swift to write this iOS application, leveraging AVFoundation to provide recording functionality and Socket.io to deliver data to our backend. Our backend was built using Flask and leveraged Keras to train a mask classifier.
## What's next for Maskif.ai
While members of the public are typically discouraged from calling the police about mask-wearing, businesses are typically able to take action against someone causing a disturbance. As an additional deterrent to these people, Maskif.ai can be improved by providing the ability for staff to call the police.
|
## Inspiration
Recent mass shooting events are indicative of a rising, unfortunate trend in the United States. During a shooting, someone may be killed every 3 seconds on average, while it takes authorities an average of 10 minutes to arrive on a crime scene after a distress call. In addition, cameras and live closed circuit video monitoring are almost ubiquitous now, but are almost always used for post-crime analysis. Why not use them immediately? With the power of Google Cloud and other tools, we can use camera feed to immediately detect weapons real-time, identify a threat, send authorities a pinpointed location, and track the suspect - all in one fell swoop.
## What it does
At its core, our intelligent surveillance system takes in a live video feed and constantly watches for any sign of a gun or weapon. Once detected, the system immediately bounds the weapon, identifies the potential suspect with the weapon, and sends the authorities a snapshot of the scene and precise location information. In parallel, the suspect is matched against a database for any additional information that could be provided to the authorities.
## How we built it
The core of our project is distributed across the Google Cloud framework and AWS Rekognition. A camera (most commonly a CCTV) presents a live feed to a model, which is constantly looking for anything that looks like a gun using GCP's Vision API. Once detected, we bound the gun and nearby people and identify the shooter through a distance calculation. The backend captures all of this information and sends this to check against a cloud-hosted database of people. Then, our frontend pulls from the identified suspect in the database and presents all necessary information to authorities in a concise dashboard which employs the Maps API. As soon as a gun is drawn, the authorities see the location on a map, the gun holder's current scene, and if available, his background and physical characteristics. Then, AWS Rekognition uses face matching to run the threat against a database to present more detail.
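The "distance calculation" that picks the likely shooter can be sketched as a nearest-box-center heuristic. This is an illustrative reconstruction (the box format and function names are assumptions; the real system works on Vision API bounding polygons):

```python
# Hedged sketch of the shooter-identification heuristic: the person whose
# bounding-box center is closest to the gun's is flagged as the holder.
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def likely_holder(gun_box, person_boxes):
    """Return the index of the person whose box center is closest to the gun."""
    gx, gy = center(gun_box)
    return min(
        range(len(person_boxes)),
        key=lambda i: math.dist((gx, gy), center(person_boxes[i])),
    )
```

As noted in the challenges below, proximity is only a proxy; the snapshot sent to authorities is what lets a human confirm the call.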
## Challenges we ran into
There are some careful nuances to the idea that we had to account for in our project. For one, few models are pre-trained on weapons, so we experimented with training our own model in addition to using the Vision API. Additionally, identifying the weapon holder is a difficult task - sometimes the gun is not necessarily closest to the person holding it. This is offset by the fact that we send a scene snapshot to the authorities, and most gun attacks happen from a distance. Testing is also difficult, considering we do not have access to guns to hold in front of a camera.
## Accomplishments that we're proud of
A clever geometry-based algorithm to predict the person holding the gun. Minimized latency when running several processes at once. Clean integration with a database integrating in real-time.
## What we learned
It's easy to say we're shooting for MVP, but we need to be careful about managing expectations for what features should be part of the MVP and what features are extraneous.
## What's next for HawkCC
As with all machine learning based products, we would train a fresh model on our specific use case. Given the raw amount of CCTV footage out there, this is not a difficult task, but simply a time-consuming one. This would improve accuracy in 2 main respects - cleaner identification of weapons from a slightly top-down view, and better tracking of individuals within the frame. SMS alert integration is another feature that we could easily plug into the surveillance system as well, and further compound the reaction improvement time.
|
## Inspiration
We wanted something like amazon where you could get personalized suggestions for classes. Think the "items you may be interested in bar."
## What it does
You can sign up for the messenger bot on facebook. Once it's live, it will ask you about your classes and how you rate them. Then you can ask for a recommendation and get some classes our machine learning thought you'd be interested in.
## How I built it
We used the Python Flask framework to build the frontend Facebook Messenger bot. We connected a database to the program to store the classes, their features, and users' ratings of each class. We used Heroku to host both the database and the webserver. On the backend, we used a regression technique to make our class predictions.
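One way the recommendation step could work (a sketch under our own assumptions, not necessarily the regression ClassRate uses): build a "taste" vector from the classes a user has rated, then rank unseen classes by similarity.

```python
# Hedged sketch: feature vectors and class names here are made up.
# Each class has a numeric feature vector; unseen classes are scored by
# cosine similarity to a rating-weighted average of the user's classes.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(features, ratings, top_n=3):
    """features: {class: vector}; ratings: {class: 1-5 stars}."""
    taste = [0.0] * len(next(iter(features.values())))
    for cls, stars in ratings.items():
        taste = [t + stars * f for t, f in zip(taste, features[cls])]
    unseen = [c for c in features if c not in ratings]
    return sorted(unseen, key=lambda c: cosine(taste, features[c]), reverse=True)[:top_n]
```

With a small dataset like the one described in the challenges, a similarity ranking like this degrades more gracefully than a fitted regression.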
## Challenges I ran into
Figuring out how to link a database to our program and how to host that database on heroku was extremely challenging. We also struggled to have the chatbot hold a longer conversation, as the framework means the chatbot by default forgets everything but the current message. Once we designed the framework to work around that, we found our chatbot was sending repetitive messages, though we never figured out why. On the machine learning side, we struggled to determine how accurate our model was from our small dataset of classes.
## Accomplishments that I'm proud of
Our chatbot can actually reply and sometimes is pretty consistent. Also the databases are updated consistently.
## What I learned
We learned a ton about sql and web hosting. The details of web hosting and how we could deploy our code was surprisingly challenging, so it was gratifying to see it work.
## What's next for ClassRate
The machine learning side still needs to be linked to the front end. In addition, we need more data about other classes for more accurate predictions. After that, we'd like to find a way to have users be able to add classes to our system and be able to track enjoyment over the course of a semester for better ratings.
|
winning
|
## Inspiration
Many low-income families and people without permanent homes in Canada are suffering from food shortages. Since 2017, it has been reported that on average, Canada throws away over 2 million tonnes of edible food per year.
Food waste is costly and damaging to the environment, while serving no purpose for those in need. Our goal is to minimize waste by bringing unsold food and supplies from charitable retailers to those in need.
For this hackathon, we wanted to use SMS to communicate with users, since Statistics Canada reports that almost all individuals with a phone have SMS service, while not all of them have access to a stable internet connection.
## What it does
Using a webapp, the goal is for suppliers, farmers, restaurant owners, etc. to input their information, location, the item they are willing to give, and the timeframe that the item will be available. The app, which uses a database holding users' information, will send an SMS to those within the vicinity letting them know the items they can claim. The users then have a chance to respond if they will pick up the item; if not, it will go to someone else, ordered by distance and how recently they received aid.
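The contact-priority rule above can be sketched as a two-key sort (field names are our assumptions about the Firestore records): nearest users first, with ties going to whoever received aid least recently.

```python
# Hedged sketch of the recipient-ordering rule: sort by distance
# ascending, then by days-since-last-aid descending.
def contact_order(users):
    """users: list of dicts with 'distance_km' and 'days_since_last_aid'."""
    return sorted(users, key=lambda u: (u["distance_km"], -u["days_since_last_aid"]))
```

Each user in the resulting list would be texted in turn until someone claims the item.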
## How we built it
This webapp was made with React on the frontend and Python on the backend. Twilio was the service used to send SMS from the backend to users. We used a Firestore database to store user information with phone numbers as keys. The webapp takes user- and supplier-entered information into the database, which then handles all events and replies.
## Challenges we ran into
Creating a database was a challenge since most of us were new to this. Working with a tight timeframe as well as working remotely provided a great challenge of communication. Formatting the webapp with react css was also a challenge. Due to these constraints, we weren't able to complete a full description video, but we were still able to create a short demo on how it works. We also weren't able to deploy the project to Heroku, but all github repositories are listed.
## Accomplishments that we're proud of
We are excited to be able to connect users in need with charitable suppliers to not only help eliminate food waste, but also improve lives. This webapp offers seamless user experience with everything being handled in the backend. We like the simplicity yet necessity of our design and hope someday it can be put to use all over the world.
## What we learned
This was the first hackathon for some members, and they were exposed to react framework, connecting frontend to backend, using firebase, flask, and other cool technologies not used in a classroom setting.
## What's next for surplus
Since this was a hackathon, we downsized the scope of the project in order to meet the basic goals of this webapp. Firstly, we would deploy this project using a service such as Heroku. Next, to fully implement our vision, we would like to add postal code distance detection to improve on accuracy of neighbourhoods, as well as improve the selection algorithm for deciding which users has priority contact.
|
## Inspiration
One of our first ideas was an Instagram-esque social media site for recipe blogs. We also were interested in working with location data - somewhere along the line there was an idea to make an app that allowed you to track down your friends.
Somehow, we managed to combine both of these wildly different ideas into a real-world applicable site. After researching shelters and food banks (aka googling and clicking on the first result), we realized that while these establishments do have a working relationship, oftentimes the shelters and food banks are required to buy key missing ingredients. Thus, our application was created to further personalize the relationship and interaction between these establishments to aid in decreasing food waste and ensuring people are getting culturally-significant, healthy, and delicious food.
## What it does
Markets and Shelters/food banks log in to their respective homepage. From there, they can see the other establishments near them, as well as an interactive sidebar.
For shelters, they can see nearby participating markets and look at their supply of food.
For markets, they will be able to see nearby shelters and their food requests. They will also be able to change their inventory of available foods for those shelters.
## How we built it
We used next.js and a variety of different style options (css, bootstrap, tailwind.css) to make a "dynamic" website.
## Challenges we ran into
We realized the crux of our application, which relies on a Google Maps API to get nearby markets and their distances, is behind a paywall of $0. We didn't want to give Google our credit card info. Sorry :/
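One free fallback we could have used instead of the paid API (a sketch, not what we shipped): straight-line haversine distance between two latitude/longitude points, good enough for "which shelters are nearby".

```python
# Hedged sketch: great-circle (haversine) distance between two lat/long
# points, as a free stand-in for a paid distance API. Coordinates would
# come from geocoding each establishment's address.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

It ignores roads, but for sorting nearby markets by rough proximity that's usually fine.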
As well, we were using React Native for a good four hours or so in the beginning, but it wasn't displaying on our localhost (it was a blank page). We spent a long time trying to debug. So that was fun.
Our team members also used many different stylesheets. The majority of it was in normal style.css, but we have one component that's entirely in bootstrap (installing it for next.js was a pain). Also, there was an attempt to use tailwind.css for some components.
## Accomplishments that we're proud of
Our UI/UX design, including all our styling, was AMAZING. Shoutout to Lindsay for their major contributions.
As well, this was the first time the majority of our team touched React in their lives, so I think our progress was pretty good. Given that we actually chose to sleep on Friday night, I'd say we accomplished a lot.
## What we learned
Auth is a pain. Never again. It didn't even work :(
## What's next for crumbz
There's a lot to be implemented. From changing our logo to making sure the authentication actually works there is so much more room for crumbz to grow. If we had more time and commitment this application will become so much more.
|
## Inspiration
Today Instagram has become a huge platform for social activism and encouraging people to contribute to different causes. I've donated to several of these causes but have always been met with a clunky UI that takes several minutes to fully fill out. With the donation inertia already so high, it makes sense to simplify the process and that's exactly what Activst does.
## What it does
It provides social media users to create a personalized profile of what causes they support and different donation goals. Each cause has a description of the rationale behind the movement and details of where the donation money will be spent. Then the user can specify how much they want to donate and finish the process in one click.
## How we built it
ReactJS, Firebase Hosting, Google Pay, Checkbook API, Google Cloud Functions (Python)
## Challenges we ran into
It's very difficult to facilitate payments directly to donation providers and create a one click process to do so as many of the donation providers require specific information from the donor. Using checkbook's API simplified this process as we could simply send a check to the organization's email. CORS.
## What's next for Activst
Add in full payment integration and find a better way to complete the donation process without needing any user engagement. Launch, beta test, iterate, repeat. The goal is to have instagram users have an activst url in their instagram bio.
|
losing
|
## Inspiration
It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy-store-collect-dust inspired us to develop LendIt, a product that aims to curb the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with RaspberryPi3 (64bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses Facial Recognition powered by OpenCV and object detection with Google's Cloud Vision API.
For our App, we've used Flutter/ Dart and interfaced with Firebase. To ensure *trust* - which is core to borrowing and lending, we've experimented with Ripple's API to create an Escrow system.
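The locker's unlock decision ties these pieces together; a minimal sketch (function and field names are our assumptions, not the actual Firebase schema) is: open only when the face match clears a confidence threshold and the database shows an active booking for that specific locker.

```python
# Hedged sketch of the unlock decision combining the OpenCV face-match
# confidence with the booking record fetched from Firebase.
def should_unlock(match_confidence, booking, locker_id, threshold=0.8):
    """booking: dict from the database, e.g. {'locker': 'A3', 'active': True}."""
    if match_confidence < threshold:
        return False  # face didn't match the registered borrower well enough
    return bool(booking) and booking.get("locker") == locker_id and booking.get("active", False)
```

Keeping this check server-side (rather than on the Pi) means a stolen locker can't be tricked by tampering with the device alone.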
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/ Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our App Developer previously worked with only Web Apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system. Therefore writing Queries for our Read Operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resolve to unconventional methods to utilize its capabilities with an internet that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like raspberry pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users, is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy.
|
## Inspiration
The idea was to help people who are blind, to be able to discretely gather context during social interactions and general day-to-day activities
## What it does
The glasses take a picture and analyze them using Microsoft, Google, and IBM Watson's Vision Recognition APIs and try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that discerns between the two dens and can tell who is in the frame
## How I built it
We took a RPi Camera and increased the length of the cable. We then made a hole in our lens of the glasses and fit it in there. We added a touch sensor to discretely control the camera as well.
## Challenges I ran into
The biggest challenge we ran into was Natural Language Processing, as in trying to parse together a human-sounding sentence that describes the scene.
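A minimal template-based sketch of that sentence-building step (the label format and thresholds are assumptions; the real system merges three vision APIs):

```python
# Hedged sketch: turn vision-API labels into one spoken sentence.
# labels: list of (label, confidence) pairs; low-confidence labels are
# dropped and at most three objects are mentioned.
def describe(labels):
    names = [l for l, c in sorted(labels, key=lambda x: -x[1]) if c > 0.6][:3]
    if not names:
        return "I'm not sure what's in front of you."
    if len(names) == 1:
        return f"I can see a {names[0]}."
    return f"I can see a {', a '.join(names[:-1])} and a {names[-1]}."
```

Templates like this sound stilted compared with real NLP, which is exactly the difficulty described above.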
## What I learned
I learnt a lot about the different vision APIs out there and about creating/training your own neural network.
## What's next for Let Me See
We want to further improve our analysis and reduce our analyzing time.
|
## Inspiration
Inspired by the recent success of apps like HQ, we wanted to capitalize upon the mass demand for instant gratification. After browsing reddit for a few hours, we realized that Snapchat uses selfies, Instagram uses pictures, Facebook uses text, and giphyr uses GIFs.
## What it does
Enabling two types of users, giphyr lets people do what they want. A typical user opens the app, and starts swiping right on GIFs they like, and left on ones they don't. Content creators would upload their creations, and wait as others swipe right and left on their creations. Through daily and weekly quests (such as logging in, swiping on fifteen GIFs, uploading, etc.), users can earn points to spend in the giphyr store. Points can be redeemed for weekly raffles, or products themselves.
## How we built it
Using primarily Android Studio and Java, the app was built using AWS as its backend.
## Challenges I ran into
As usual, Android documentation was lacking. Especially with regard to AWS. Sometimes documentation would reference deprecated methods, or would often contradict itself.
## Accomplishments that I'm proud of
We created neural nets to analyze user preferences and detect potential spam.
## What I learned
Reading AWS Android documentation is like reading a disaster prep guide after the disaster has happened.
## What's next for giphyr
We plan to rewrite the application in React Native to allow for a seamless cross-platform experience. In addition, plans to explore aggressive marketing through referrals and seeking out avenues for funding will be explored in depth.
|
winning
|
## Inspiration
This project was inspired by the healthcare community throughout North America. We wanted to help make the lives of caretakers and others who have underlying health conditions easier by creating an app that provides reminders and tracks data about one's medical conditions.
## What it does
Wellness Check lets users record the date of each medical entry and the medication being administered to patients. Caretakers, doctors, and patients are also able to write further information about their day and perform medication dosage calculations.
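A dosage helper of the kind the app supports can be sketched with the generic weight-based formula (dose = mg/kg × body weight); the cap and rounding here are our illustrative assumptions, and real dosing always follows a clinician's instructions.

```python
# Hedged sketch of a weight-based dose calculation with an optional
# maximum single-dose cap. Not medical advice; illustrative only.
def weight_based_dose(mg_per_kg, weight_kg, max_mg=None):
    dose = mg_per_kg * weight_kg
    if max_mg is not None:
        dose = min(dose, max_mg)
    return round(dose, 1)
```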
## How we built it
This project was built with Python, Python GUI, as well as Figma.
## Challenges we ran into
We wanted to build on this project by adding timers and creating more opportunities for users to interact with the app through Python GUI, however we faced obstacles such as our code having errors and being on a time constraint which held us back.
## Accomplishments that we're proud of
We are proud of being able to complete a functioning and visual application which can help users with their day-to-day life.
## What we learned
We learned that our ideas and projects can always be thought further and expanded on with more complex functions and different perspectives from broader groups of people.
## What's next for Wellness Check
We plan on adding new functions, as well as considering other users such as high school teenagers who can also use this app to manage stress and track their health during long school hours.
|
## Inspiration
As second-year engineering students, our own battles with mental, physical and social health in the competitive academic environment led us to delve into the concerning statistics surrounding rising levels of stress and anxiety among individuals.
Fueled by a personal understanding of these struggles, we embarked on creating a health and wellness website.
The goal was to provide a space for users, offering resources and reminders about mental well-being.
Beyond engineering solutions, the project aimed to build a tool which empathetically addressed unique challenges that individuals in our day and age face today.
This website is a proactive response to statistics, in the hopes of transforming them into examples of resilience and fostering hope within the user community.
## What it does
Drawing on studies, we emphasized the importance of mindfulness practices, incorporating evidence-backed techniques such as journaling and deep-breathing exercises.
Research on the positive impact of social support on mental health guided the community-building aspect, highlighting the correlation between strong social connections and resilience. Additionally, the website incorporates information on the benefits of regular physical activity for mental well-being, referring to studies that underscored the positive effects of exercise on mood regulation and stress reduction. By grounding the platform in scientific findings, we aimed to provide users with insights and evidence-based tools to navigate and improve their overall health and well-being.
## How we built it
Developing a mental health website required a delicate balance between scientific accuracy and user-friendly accessibility.
Using languages and technologies including React, Python (using Flask) and SQL, we were able to develop a website to be used by a multitude of individuals with varying technological experience.
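The registered-users-only entry check described below rests on password verification; a minimal stdlib sketch (our own illustration, not necessarily the exact scheme MyMindfulM8 uses) stores a salted PBKDF2 hash rather than the password itself:

```python
# Hedged sketch of salted password hashing for the auth backend, using
# only the standard library. Iteration count is illustrative.
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest  # store both in the SQL users table

def verify_password(password, salt, digest, rounds=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

`compare_digest` avoids timing side channels that a plain `==` comparison would leak.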
## Challenges Faced
We faced the challenge of translating complex research findings into digestible and actionable content that resonated with our broad target audience.
Moreover, maintaining a user-centric approach posed its own set of challenges. Ensuring that the website was not just informative but also genuinely supportive required ongoing refinement and adaptation in such a short amount of time. Navigating the technical aspects of creating an interactive and engaging online platform was also a hurdle as we had to bridge the gap between our technical skills and the human-centered design necessary for an effective mental health resource.
## Accomplishments
As a team we are proud of the backend of our project and its ability to only allow entry from registered users. We are also overall proud of the visual aesthetic of the project including the use of animations.
## Lessons learned
This hackathon and project helped each member on the team progress their knowledge in some shape or form depending on their role. For example, two of our front-end programmers had limited prior experience in React. Due to the nature of the project the front-end programmers had to quickly adapt and use knowledge of JavaScript, CSS, and HTML to guide them through the use of React. The developer working on the backend portion of the project also had to endure a steep learning curve and developed his skills in Python backend development using Flask and SQL, and incorporating authentication.
## What's next for MyMindfulM8
The next step for MyMindfulM8 is the finalization of its web application including the incorporation of infobip in order to send users daily reminders of their weekly goals, thus motivating the user and hindering the risks of procrastination. The finalized web application will also provide goals to the user through a database which will be updated weekly with new goals. This will allow the user to take small steps towards success all year round! Once the web application is finalized we aim to implement MyMindfulM8 as a phone application in order to increase the accessibility of its services.
|
## Inspiration
We wanted to do something fun and exciting, nothing too serious. Slang is a vital component to thrive in today's society. Ever heard Travis Scott go, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform, Urban Dictionary, to educate people about today's ways, showing how today's music is changing with the slang thrown in.
## What it does
You choose your desired song, and it will print out the lyrics for you and even sing them in a robotic voice. It will then look up the Urban Dictionary meaning of the slang, replace the slang with its definition, and attempt to sing the translated version.
## How I built it
We utilized Python's Flask framework as well as numerous Python natural language processing libraries. We created the frontend with the Bootstrap framework, utilizing Kaggle datasets and the Zdict API.
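The substitution step can be sketched as a word-level dictionary swap (the stub dictionary below stands in for the Zdict/Urban Dictionary lookups, and punctuation handling is simplified):

```python
# Hedged sketch of the slang-translation step: each word is looked up in
# a slang dictionary and replaced by its plain-English definition.
# Note: punctuation attached to a replaced word is dropped here.
def translate_lyrics(lyric, slang_defs):
    """slang_defs: {slang_word: plain-English definition}."""
    words = lyric.split()
    out = [slang_defs.get(w.lower().strip(",.!?"), w) for w in words]
    return " ".join(out)
```

Caching these lookups per word would also address the slow API calls mentioned in the challenges.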
## Challenges I ran into
Redirect issues with Flask were frequent, and the excessive API calls made the program very slow.
## Accomplishments that I'm proud of
The excellent UI design along with the amazing outcomes that can be produced from the translation of slang
## What I learned
We learned a lot of things.
## What's next for SlangSlack
We are going to transform the way today's millennials keep up with growing trends in slang.
|
losing
|
Know more about KnowFood!
While familiarizing myself with Princeton's Roma Dining Hall, I noticed two things: a lot of food waste and food consumption. So why not aim to solve both?
KnowFood is an application that combines an analysis of Princeton's Dining Hall foods with Amazon Echo's Alexa to provide users information about nutrients and what to eat. The software part of our project involves reading in information about foods in dining halls around Princeton from dining.princeton.edu and determining which foods will satisfy daily nutrient requirements. These nutrient requirements are estimated from the user's age, height, weight, gender, and diet restrictions. To spice up our project, we combined this analysis with Amazon Echo's Alexa. When Alexa is prompted with a question about a meal at a certain dining hall, she responds with the ideal meal to fulfill all your bodily needs. Thus, our project utilizes technology to promote health while decreasing food waste for the environment.
|
## Inspiration
Our inspiration behind creating this app stems from our own experiences as university students. We understand the hectic schedules, academic pressures, and limited time that come with the territory. As students ourselves, we often found it challenging to make healthy and mindful food choices while juggling our studies and other commitments. This inspired us to develop a solution that not only simplifies the process of finding suitable meals in the campus dining hall but also caters to individual dietary restrictions and preferences. We want to empower fellow students to effortlessly make informed food choices that align with their goals, whether it's staying within a certain calorie range, accommodating dietary restrictions, or simply enjoying a satisfying meal. Our aim is to provide a convenient, time-saving, and health-conscious solution for university students, making their dining hall experience more enjoyable and stress-free.
## What it does
Our app is designed to streamline the dining experience for university students by providing a tailored and efficient solution. It serves as a comprehensive food companion, allowing users to search through their university dining hall menus with precision. What sets our app apart is its ability to cater to individual dietary needs and preferences. Users can input specific criteria, such as a calorie limit, and our app will generate a selection of three dishes that meet these requirements. This not only saves valuable time but also helps students make healthier and more mindful food choices. Additionally, the app provides nutritional information, allergen alerts, and even offers recommendations based on past selections, ensuring that every meal is not only delicious but also aligns with the user's unique dietary considerations. Ultimately, our app simplifies the process of finding the perfect meal, making dining on campus a more enjoyable and stress-free experience.
## How we built it
We used HTML and CSS for the frontend and Flask (Python) for the backend. We used web scraping to extract key information from the menu websites.
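The three-dish selection under a calorie limit can be sketched as a small search over scraped menu items (field names and the "maximize calories under the limit" rule are our assumptions):

```python
# Hedged sketch of the three-dish picker: try every trio of menu items
# and keep the most filling combination that stays under the limit.
from itertools import combinations

def pick_three(menu, calorie_limit):
    """menu: list of (dish, calories). Returns (dishes, total) or None."""
    best = None
    for trio in combinations(menu, 3):
        total = sum(c for _, c in trio)
        if total <= calorie_limit and (best is None or total > best[1]):
            best = ([d for d, _ in trio], total)
    return best
```

Dining-hall menus are small enough that brute force over all trios is instant; extra filters (vegan, allergens) would just pre-filter `menu`.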
## Challenges we ran into
We ran into a lot of challenges. First, we had a completely different idea yesterday and we spent 5 hours trying to implement that idea into a prototype. However, after talking to the mentors, we decided to do a completely different project.
Another challenge we went through was connecting the frontend and the backend; we had trouble with the POST request. The algorithm in the backend was also troubling at some points. Our ideas were all over the place, and communication was not at its best.
## Accomplishments that we're proud of
We had a big brainstorm moment where we just pitched ideas. This improved teamwork and collaboration.
## What we learned
There were so many things to learn about coding and creating projects. Creating projects in a team involves constant communication and collaboration, and explaining your thoughts is very important for teamwork.
## What's next for Uni.Eats
We want to make the project more advanced, taking more input information and thus supporting more permutations of choices. That way, we can filter on information besides calories, such as vegan options, dietary restrictions, and more food facts.
|
## Inspiration
We hate making resumes and customizing them for each employer, so we created a tool to speed that up.
## What it does
A user creates "blocks" which are saved. Then they can pick and choose which ones they want to use.
## How we built it
[Node.js](https://nodejs.org/en/)
[Express](https://expressjs.com/)
[Nuxt.js](https://nuxtjs.org/)
[Editor.js](https://editorjs.io/)
[html2pdf.js](https://ekoopmans.github.io/html2pdf.js/)
[mongoose](https://mongoosejs.com/docs/)
[MongoDB](https://www.mongodb.com/)
|
losing
|
## Inspiration
The need for driver monitoring in autonomous vehicle research has greatly improved computer vision and Human Activity Recognition (HAR). We realized that there was huge opportunity for computer vision in another area of life where focus and concentration are the primary concern: work productivity.
## What it does
Tiger Mom uses computer vision to monitor both your screen and your behavior. You leave it on while you study and it will track your screen activity, your physical behavior, and even your ambient surroundings. Its revolutionary approach to sensing allows it to quantitatively learn and suggest actionable insights such as optimal work intervals, exact breakdowns of how time is spent on different distractions, how your productivity responds to the ambient volume/brightness of your surroundings, and can even catch and interrupt you if it notices you dozing off or getting distracted for too long.
## How I built it
Tiger Mom's backend is built entirely with Python, with all computation taking place locally.
The computer vision uses DLib to identify facial landmarks on your face, and then solves the PnP problem to compute the pose of your head (the direction your head is facing). It also tracks the aspect ratio of your eyes to detect how open/closed they are. These two facts are used to detect if you are looking away (implying distraction) or if you are drowsy. OpenCV is used to parse video input from the webcam, process images, and display them with visuals overlaid. NumPy and SciPy were used for all mathematical computations/analysis.
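The eye-openness measure is the standard eye aspect ratio (EAR) over dlib's six eye landmarks; the drowsiness threshold below is illustrative rather than Tiger Mom's tuned value.

```python
# Sketch of the eye aspect ratio: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|),
# where p1..p6 are the six eye landmarks in dlib's ordering. The ratio
# drops toward zero as the eye closes.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks, p1..p6 in dlib's ordering."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

DROWSY_EAR = 0.2  # assumed cutoff: below this for several frames suggests closed eyes
```

Averaging the EAR across both eyes and requiring it to stay low for consecutive frames is what lets the app interrupt a doze rather than a blink.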
Screen-based application tracking is done by parsing the title of your active window and cross-checking against known applications (and in the case of the web browser, different websites too). The software tracks a dictionary of applications mapped to timers to track the total amount of time you spend on each one individually.
Ambient noise and ambient light is derived by applying mathematical transforms on input periodically gathered from the microphone and webcam.
Every 10 seconds, the application tracker sends its values to the front-end in JSON format.
Tiger Mom's front-end is built entirely on React and JavaScript. Graphs were made with CanvasJS.
## Challenges I ran into
For Human Activity Recognition, I originally used a Haar cascade on keras/tensorflow to detect distraction. However, the neural network I found online had been trained on a dataset that I suspect did not include many Asian subjects, so they were not very accurate when detecting my eyes. I thought this was hilarious. This and the fact that Haar cascades also have a tendency to perform more poorly on subjects with darker skin colors led me to pursue another solution which wound up being DLib.
## Accomplishments that I'm proud of
* Running an accurate facial pose estimator with excellent visualizations.
* Demonstrating an original and unique use of computer-vision beyond driver monitoring.
* Developing a tool that genuinely creates value for you, and helps you understand and reduce bad study habits.
## What I learned
Like. A lot.
## What's next for Tiger Mom
The next immediate step that I wanted to touch was key logging! Analyzing words-per-minute would have been an excellent additional data point. And following that I would have loved to incorporate some sentiment analysis into the computer vision to track your mood throughout your study session. One fun idea to combine these two things as suggested by a mentor, Andreas Putz, was to analyze the sound of your typing with the microphone. For software engineers especially, panic and emotion translate very distinctively to the sound of their typing.
What makes Tiger Mom special (but also a pain) is the sheer breadth of possible insights that can be derived from the data it is capable of sensing. For example, if users were to tag what subjects they were studying, the data could be used to analyze and suggest what sort of work they are passionate about or skilled in. Or if location data were considered, Tiger Mom could recommend your best places to study based on the ambient noise and light data of previous visits.
These personalized insights could be produced with some clever machine learning on data aggregated over time. Tiger Mom is capable of quantitatively analyzing things like what times of day you specifically are productive, to exact percentages and times. I would have loved to dive into the ML algorithms and set up some learning mechanisms but I did not have enough time to even build a proof of concept.
|
## Inspiration 💡
Do your eyes ever feel strained and dry after hours and hours spent staring at screens? Has your eye doctor ever told you about the 20-20-20 rule? Good thing we’ve automated it for you along with personalized analysis of your eye activity using AdHawk’s eye tracking device.
The awesome AdHawk demos blew us away, and we were inspired by its seemingly subtle, but powerful features: it could track the user's gaze in three dimensions, recognize blink events, and has an external camera. We knew that our goal to remedy this healthcare crisis could be achieved with AdHawk.
## What it does 💻
Poor eye health has become an increasingly important issue in today's digital world, and we want to help. While you're working at your desktop, you'll wear the wonderful AdHawk glasses. Every 20 minutes or so, our connected app will alert you to take a 20-second eye break. Using the eye tracking, it verifies that you are looking at least 20 feet away; otherwise, the timer pauses.
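The gaze-gated break timer could be sketched like this; the 20-foot depth threshold in centimetres and the one-second tick are assumptions:

```python
TWENTY_FEET_CM = 610    # roughly 20 feet, the 20-20-20 rule's distance

class BreakTimer:
    """20-second eye-break timer that only advances while the user's
    gaze depth exceeds ~20 feet, and pauses otherwise."""
    def __init__(self, required=20.0):
        self.required = required   # seconds of far gaze needed
        self.elapsed = 0.0

    def tick(self, gaze_depth_cm, dt=1.0):
        """Call once per second with the eye tracker's gaze depth.
        Returns True once the break is complete."""
        if gaze_depth_cm >= TWENTY_FEET_CM:
            self.elapsed += dt
        return self.elapsed >= self.required
```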
We also made an eye exercise game available to play where you move a ball around to hit cubes randomly placed on the screen using your eyes. This engages the eye muscles in a fun and exciting way to improve eye tracking, eye teaming and myopia.
## How we built it 🛠️
Our frontend uses React.js & Styled Components and React Three Fiber for the eye exercise game. Our backend uses Python via AdHawk's SDK with Flask and Firebase for our database.
## Challenges we ran into ⛰️
Setting up the glasses to accurately detect the depth of our sight was difficult, as this was the key metric for ensuring the user took a 20-second eye break at 20 feet. Connecting this data to the frontend was also a bit of a challenge at first, but our Flask and React tech stack made for a streamlined integration once everything was wired up.
As well, we wanted to record analytics of our user’s screen time by taking any instances where their viewing distance was closer than a certain amount. It would give a user a chance to gauge their eye health and better understand their true viewing habits. This was a bit of a challenge as it was our first time using CockroachDB.
## Accomplishments that we're proud of 🏅
As coders and avid tech users, we are proud to have built a functioning app that we would actually use in our lives. Many of us personally struggle with vision problems and Visionary makes it so easy to help reduce these issues, whether it's myopia or eye strain. We’re super proud of the frontend, and the fact that we were able to incorporate the incredible Adhawk glasses into our project successfully.
## What we learned 📚
Start small and dream big. We ensured that the glasses would be able to track viewing distance and send that data to our frontend first before moving on to other features, like a landing page, data analytics, and our database setup.
## What's next for Visionary 🥅
We would love to incorporate other use cases for the Adhawk glasses, including more guided eye exercises with eye tracking, focus tracking by ensuring that the user’s eyes stay on screen, and so much more. Customized settings are also a next step. Visionary would also make for an awesome mobile app so that users can further reduce eye strain on their phones and tablets. The possibilities are truly, truly endless.
|
## Inspiration
We're students, and that means one of our biggest inspirations (and some of our most frustrating problems) come from a daily ritual - lectures.
Some professors are fantastic. But let's face it, many professors could use some constructive criticism when it comes to their presentation skills. Whether it's talking too fast, speaking too *quietly* or simply not paying attention to the real-time concerns of the class, we've all been there.
**Enter LectureBuddy.**
## What it does
Inspired by lackluster lectures and little to no interfacing time with professors, LectureBuddy allows students to signal their instructors with teaching concerns on the spot while also providing feedback to the instructor about the mood and sentiment of the class.
By creating a web-based platform, instructors can create sessions from the familiarity of their smartphone or laptop. Students can then provide live feedback to their instructor by logging in with an appropriate session ID. At the same time, a camera intermittently analyzes the faces of students and provides the instructor with a live average mood for the class. Students are also given a chat room for the session to discuss material and ask each other questions. At the end of the session, the Lexalytics API is used to parse the chat room text and provides the instructor with the average tone of the conversations that took place.
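Aggregating per-face emotion scores into a single class-wide mood can be sketched as follows; the emotion labels and confidence values stand in for what the emotion-recognition API would return:

```python
def class_mood(face_emotions):
    """face_emotions: one dict of emotion -> confidence per detected face.
    Returns the emotion with the highest total confidence across the
    class, or None when no faces were detected in the frame."""
    if not face_emotions:
        return None
    totals = {}
    for face in face_emotions:
        for emotion, score in face.items():
            totals[emotion] = totals.get(emotion, 0.0) + score
    return max(totals, key=totals.get)
```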
Another important use for LectureBuddy is an alternative to tedious USATs or other instructor evaluation forms. Currently, teacher evaluations are completed at the end of terms and students are frankly no longer interested in providing critiques as any change will not benefit them. LectureBuddy’s live feedback and student interactivity provides the instructor with consistent information. This can allow them to adapt their teaching styles and change topics to better suit the needs of the current class.
## How I built it
LectureBuddy is a web-based application; most of the developing was done in JavaScript, Node.js, HTML/CSS, etc. The Lexalytics Semantria API was used for parsing the chat room data and Microsoft’s Cognitive Services API for emotions was used to gauge the mood of a class. Other smaller JavaScript libraries were also utilised.
## Challenges I ran into
The Lexalytics Semantria API proved to be a challenge to set up. The out-of-the-box JavaScript files came with some errors, and after spending a few hours troubleshooting with mentors, the team finally managed to get the Node.js version to work.
## Accomplishments that I'm proud of
Two first-time hackers contributed some awesome work to the project!
## What I learned
"I learned that json is a javascript object notation... I think" - Hazik
"I learned how to work with node.js - I mean I've worked with it before, but I didn't really know what I was doing. Now I sort of know what I'm doing!" - Victoria
"I should probably use bootstrap for things" - Haoda
"I learned how to install mongoDB in a way that almost works" - Haoda
"I learned some stuff about Microsoft" - Edwin
## What's next for Lecture Buddy
* Multiple Sessions
* Further in-depth analytics from an entire semester's worth of lectures
* Pebble / Wearable integration!
@Deloitte See our video pitch!
|
partial
|
## What it does
Blink is a communication tool for those who cannot speak or move, while being significantly more affordable and accurate than current technologies on the market. [The ALS Association](http://www.alsa.org/als-care/augmentative-communication/communication-guide.html) recommends a $10,000 communication device to solve this problem—but Blink costs less than $20 to build.
You communicate using Blink through a modified version of **Morse code**. Blink out letters and characters to spell out words, and in real time from any device, your caretakers can see what you need. No complicated EEG pads or camera setup—just a small, unobtrusive sensor can be placed to read blinks!
The Blink service integrates with [GIPHY](https://giphy.com) for GIF search, [Earth Networks API](https://www.earthnetworks.com) for weather data, and [News API](https://newsapi.org) for news.
## Inspiration
Our inspiration for this project came from [a paper](http://www.wearabletechnologyinsights.com/articles/11443/powering-devices-through-blinking) published on an accurate method of detecting blinks, but it uses complicated, expensive, and less-accurate hardware like cameras—so we made our own **accurate, low-cost blink detector**.
## How we built it
The backend consists of the sensor and a Python server. We used a capacitive touch sensor on a custom 3D-printed mounting arm to detect blinks. This hardware interfaces with an Arduino, which sends the data to a Python/Flask backend, where the blink durations are converted to Morse code and then matched to English characters.
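The duration-to-Morse decoding might look like this minimal sketch; the dot/dash and letter-gap thresholds are assumptions (the real values were tuned on the hardware), and only a few letters of the Morse table are shown:

```python
# Hypothetical timing thresholds, in seconds.
DOT_MAX = 0.35      # blinks shorter than this count as dots
LETTER_GAP = 1.0    # a pause at least this long ends the current letter

# Partial Morse table, enough for the demo word below.
MORSE = {".-": "A", "-...": "B", ".": "E", "....": "H",
         ".-..": "L", "---": "O", "...-": "V"}

def decode_blinks(events):
    """events: list of (blink_duration, pause_after) tuples in seconds.

    Short blinks map to dots, long blinks to dashes; a long pause
    closes the current letter. Returns the decoded string."""
    text, letter = [], ""
    for duration, pause in events:
        letter += "." if duration < DOT_MAX else "-"
        if pause >= LETTER_GAP:
            text.append(MORSE.get(letter, "?"))
            letter = ""
    return "".join(text)

# "HELLO" blinked out: H=...., E=., L=.-.., L=.-.., O=---
events = ([(0.1, 0.2)] * 3 + [(0.1, 1.2)]                          # H
          + [(0.1, 1.2)]                                           # E
          + [(0.1, 0.2), (0.5, 0.2), (0.1, 0.2), (0.1, 1.2)]       # L
          + [(0.1, 0.2), (0.5, 0.2), (0.1, 0.2), (0.1, 1.2)]       # L
          + [(0.5, 0.2), (0.5, 0.2), (0.5, 1.2)])                  # O
```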
The frontend is written in React with [Next.js](https://github.com/zeit/next.js) and [`styled-components`](https://styled-components.com). In real time, it fetches data from the backend and renders the in-progress character and characters recorded. You can pull up this web app from multiple devices—like an iPad in the patient’s lap, and the caretaker’s phone. The page also displays weather, news, and GIFs for easy access.
**Live demo: [blink.now.sh](https://blink.now.sh)**
## Challenges we ran into
One of the biggest technical challenges building Blink was decoding blink durations into short and long blinks, then Morse code sequences, then standard characters. Without any libraries, we created our own real-time decoding process of Morse code from scratch.
Another challenge was physically mounting the sensor in a way that would be secure but easy to place. We settled on using a hat with our own 3D-printed mounting arm to hold the sensor. We iterated on several designs for the arm and methods for connecting the wires to the sensor (such as aluminum foil).
## Accomplishments that we're proud of
The main point of PennApps is to **build a better future**, and we are proud of the fact that we solved a real-world problem applicable to a lot of people who aren't able to communicate.
## What we learned
Through rapid prototyping, we learned to tackle difficult problems with new ways of thinking. We learned how to efficiently work in a group with limited resources and several moving parts (hardware, a backend server, a frontend website), and were able to get a working prototype ready quickly.
## What's next for Blink
In the future, we want to simplify the physical installation, streamline the hardware, and allow multiple users and login on the website. Instead of using an Arduino and breadboard, we want to create glasses that would provide a less obtrusive mounting method. In essence, we want to perfect the design so it can easily be used anywhere.
Thank you!
|
## Inspiration
Retinal degeneration affects 1 in 3000 people, slowly robbing them of vision over the course of their mid-life. The need to adjust to life without vision, often after decades of relying on it for daily life, presents a unique challenge to individuals facing genetic disease or ocular injury, one which our teammate saw firsthand in his family and which inspired our group to work on a modular, affordable solution. Current technologies that provide similar proximity awareness often cost many thousands of dollars and require a niche replacement in the user's environment (shoes with active proximity sensing similar to our system often cost $3-4k for a single pair). Instead, our group has worked to create a versatile module which can be attached to any shoe, walker, or wheelchair, to provide situational awareness to the thousands of people adjusting to their loss of vision.
## What it does (higher-quality demo on Google Drive: <https://drive.google.com/file/d/1o2mxJXDgxnnhsT8eL4pCnbk_yFVVWiNM/view?usp=share_link>)
The module is constantly pinging its surroundings through a combination of IR and ultrasonic sensors. These are readily visible on the prototype, with the ultrasound device looking forward, and the IR sensor looking to the outward flank. These readings are referenced, alongside measurements from an Inertial Measurement Unit (IMU), to tell when the user is nearing an obstacle. The combination of sensors allows detection of a wide gamut of materials, including those of room walls, furniture, and people. The device is powered by a 7.4v LiPo cell, which displays a charging port on the front of the module. The device has a three hour battery life, but with more compact PCB-based electronics, it could easily be doubled. While the primary use case is envisioned to be clipped onto the top surface of a shoe, the device, roughly the size of a wallet, can be attached to a wide range of mobility devices.
The internal logic uses IMU data to determine when the shoe is on the bottom of a step 'cycle', and touching the ground. The Arduino Nano MCU polls the IMU's gyroscope to check that the shoe's angular speed is close to zero, and that the module is not accelerating significantly. After the MCU has established that the shoe is on the ground, it will then compare ultrasonic and IR proximity sensor readings to see if an obstacle is within a configurable range (in our case, 75cm front, 10cm side).
If the shoe detects an obstacle, it will activate a pager motor which vibrates the wearer's shoe (or other device). The pager motor will continue vibrating until the wearer takes a step which encounters no obstacles, thus acting as a toggle flip-flop.
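The step-bottom detection and alert flip-flop can be simulated in a few lines of Python (the firmware itself runs on the Arduino Nano); the motion thresholds here are assumptions, while the 75 cm / 10 cm ranges come from the writeup:

```python
# Detection ranges from the writeup; motion thresholds are assumptions,
# since the real values were tuned on the hardware.
FRONT_RANGE_CM = 75
SIDE_RANGE_CM = 10
GYRO_STILL_DPS = 5.0   # deg/s below which the shoe counts as still

def shoe_state(gyro_dps, accel_g, front_cm, side_cm, alerting):
    """Mirror of the prototype's debug-LED logic.
    Returns (led_color, alert); the pager motor runs while alert is True."""
    moving = abs(gyro_dps) > GYRO_STILL_DPS or abs(accel_g - 1.0) > 0.2
    if moving:
        return "RED", alerting        # mid-step: keep the previous alert state
    if front_cm < FRONT_RANGE_CM or side_cm < SIDE_RANGE_CM:
        return "GREEN", True          # bottom of step, obstacle detected
    return "BLUE", False              # clear step resets the alert
```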
An RGB LED is added for our debugging of the prototype:
RED - Shoe is moving - In the middle of a step
GREEN - Shoe is at bottom of step and sees an obstacle
BLUE - Shoe is at bottom of step and sees no obstacles
While our group's concept is to package these electronics into a sleek, clip-on plastic case, for now the electronics have simply been folded into a wearable form factor for demonstration.
## How we built it
Our group used an Arduino Nano, batteries, voltage regulators, and proximity sensors from the venue, and supplied our own IMU, kapton tape, and zip ties. (yay zip ties!)
I2C code for basic communication and calibration was taken from a user's guide of the IMU sensor. Code used for logic, sensor polling, and all other functions of the shoe was custom.
All electronics were custom.
Testing was done on the circuits by first assembling the Arduino Microcontroller Unit (MCU) and sensors on a breadboard, powered by laptop. We used this setup to test our code and fine tune our sensors, so that the module would behave how we wanted. We tested and wrote the code for the ultrasonic sensor, the IR sensor, and the gyro separately, before integrating as a system.
Next, we assembled a second breadboard with LiPo cells and a 5v regulator. The two 3.7v cells are wired in series to produce a single 7.4v 2S battery, which is then regulated back down to 5v by an LM7805 regulator chip. One by one, we switched all the MCU/sensor components off of laptop power, and onto our power supply unit. Unfortunately, this took a few tries, and resulted in a lot of debugging.
After the circuit was finalized, we moved all of the breadboard circuitry to harnessing only, then folded the harnessing and PCB components into a wearable shape for the user.
## Challenges we ran into
The largest challenge we ran into was designing the power supply circuitry, as the combined load of the sensor DAQ package exceeds amp limits on the MCU. This took a few tries (and smoked components) to get right. The rest of the build went fairly smoothly, with the other main pain points being the calibration and stabilization of the IMU readings (this simply necessitated more trials) and the complex folding of the harnessing, which took many hours to arrange into its final shape.
## Accomplishments that we're proud of
We're proud to have found a good balance for the sensitivity of the sensors. We're also proud of integrating all the parts together, supplying them with appropriate power, and assembling the final product as compactly as possible, all in one day.
## What we learned
Power was the largest challenge, both in terms of the electrical engineering, and the product design- ensuring that enough power can be supplied for long enough, while not compromising on the wearability of the product, as it is designed to be a versatile solution for many different shoes. Currently the design has a 3 hour battery life, and is easily rechargeable through a pair of front ports. The challenges with the power system really taught us firsthand how picking the right power source for a product can determine its usability. We were also forced to consider hard questions about our product, such as if there was really a need for such a solution, and what kind of form factor would be needed for a real impact to be made. Likely the biggest thing we learned from our hackathon project was the importance of the end user, and of the impact that engineering decisions have on the daily life of people who use your solution. For example, one of our primary goals was making our solution modular and affordable. Solutions in this space already exist, but their high price and uni-functional design mean that they are unable to have the impact they could. Our modular design hopes to allow for greater flexibility, acting as a more general tool for situational awareness.
## What's next for Smart Shoe Module
Our original idea was to use a combination of miniaturized LiDAR and ultrasound, so our next steps would likely involve the integration of these higher quality sensors, as well as a switch to custom PCBs, allowing for a much more compact sensing package, which could better fit into the sleek, usable clip on design our group envisions.
Additional features might include the use of different vibration modes to signal directional obstacles and paths, and indeed expanding our group's concept of modular assistive devices to other solution types.
We would also look forward to making a more professional demo video
Current example clip of the prototype module taking measurements:(<https://youtube.com/shorts/ECUF5daD5pU?feature=share>)
|
## Inspiration
I wanted to find the next big thing like Bitcoin. I saw it once happen with Binance on the Cryptocurrency Exchange market via a Python Package Repo. Is there a way to systematically discover the next big thing?
It boils down to getting access to asymmetric information, a concept explored in economic theory: one party can make more informed decisions because they have access to more information than the other.
## What it does
This project allows you to get access to asymmetric information. It analyzes thousands of potential data entries from various different data feeds to identify hot topics. It essentially is a custom "trends" finder.
## How I built it
It finds trends by scraping the package registry of Python, Ruby, and JS packages. It also analyzes Arxiv Research Papers. It then creates a custom dashboard for the user to consume the content on these registries.
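A naive version of such a trends finder, surfacing packages that surge in the latest registry snapshot relative to their history, might look like this; the scoring formula is an illustrative assumption, not the project's actual method:

```python
from collections import Counter

def trending(snapshots, top_n=3):
    """snapshots: {date string: list of package names observed}.
    Packages that are common in the latest snapshot but rare in all
    earlier snapshots float to the top."""
    dates = sorted(snapshots)
    history = Counter()
    for d in dates[:-1]:
        history.update(snapshots[d])
    latest = Counter(snapshots[dates[-1]])
    # Score: recent frequency discounted by historical presence.
    scores = {pkg: count / (1 + history[pkg]) for pkg, count in latest.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical registry observations across two scrape dates.
snapshots = {"2021-01": ["requests", "numpy"],
             "2021-02": ["requests", "numpy", "newlib", "newlib"]}
```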
## Challenges I ran into
Performing keyword extraction proved to be difficult without much knowledge of NLP. Additionally, building a set of scrapers robust to API changes over millions of scraped data was difficult.
## Accomplishments that I'm proud of
1. Coming up with the non-trivial and non-obvious data feeds such as Python Package Registry
2. Discovering Asymmetric information as a theoretical underpinning.
3. Using an auto-generated code framework through a clever repurposing of a framework (Jinja)
4. Generating millions of data entries after only writing a few lines of code for each data source.
## What I learned
1. How building resilient APIs is difficult.
2. Why asymmetric information captures the problem well.
3. How important interfaces and inheritance can be in removing code duplication.
## What's next for Asymmetry: Towards finding the next big thing like Bitcoin
To build the last step. A data analysis technique with ML or NNs for analyzing the raw data. Things include keyword extraction, line charts for trends over time, and word clouds.
|
partial
|
## Overview
We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn’t get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic would be on a set of Smart Glasses.
## Inspiration
Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions of the people around them. This can make it very hard to make friends and get along with their peers. As a result, their well-being suffers, which can lead to issues including depression. We wanted to help these children understand others' emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let's see if we can use our weekend to build something that helps!
## What it does
SmartEQ determines the mood of the person in frame based on their facial expressions and sentiment analysis of their speech. SmartEQ then determines the most probable emotion from the image analysis and the sentiment analysis of the conversation, and provides a percentage of confidence in its answer. SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with.
## How we built it
The lovely front end you are seeing on screen is built with React.js, and the back end is a Python Flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services APIs, including speech-to-text from the microphone, sentiment analysis on the resulting text, and the Face API to predict the emotion of the person in frame.
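Combining the two modalities into one emotion-plus-confidence answer can be sketched as a weighted late fusion; the 0.6 face weight and the probability dictionaries below are assumptions, standing in for the Face API and sentiment-analysis outputs:

```python
def fuse(face_probs, text_probs, face_weight=0.6):
    """Weighted fusion of facial-emotion and text-sentiment probabilities.
    Returns (most probable emotion, combined confidence)."""
    emotions = set(face_probs) | set(text_probs)
    combined = {e: face_weight * face_probs.get(e, 0.0)
                   + (1 - face_weight) * text_probs.get(e, 0.0)
                for e in emotions}
    best = max(combined, key=combined.get)
    return best, combined[best]
```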
## Challenges we ran into
Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon, and they both came solo. Together, we ended up forming a pretty awesome team :D. But that's not to say everything went as perfectly as our newfound friendships did. Newton and Lee built the front end while Max and Zarif built out the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum number of Azure requests our free accounts permitted, encountered very weird Socket.IO bugs that made our entire hack break, and had to make sure Max didn't drink fewer than 5 Red Bulls per hour.
## Accomplishments that we're proud of
We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person’s emotions just using their spoken words. We used both facial recognition and the aforementioned speech-to-sentiment-analysis to develop a holistic approach in interpreting a person’s emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency.
## What we learned
We learnt about web sockets, how to develop a web app using web sockets, and how to debug web socket errors. We also learnt how to harness the power of Microsoft Azure's Machine Learning and Cognitive Services libraries. We learnt that "cat" has a positive sentiment score and "dog" has a neutral one, which makes no sense whatsoever because dogs are definitely way cuter than cats. (Zarif strongly disagrees)
## What's next for SmartEQ
We would deploy our hack onto real Smart Glasses :D. This would allow us to deploy our tech in real life, first into small sample groups to figure out what works and what doesn't work, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition that may be caused by brain trauma that can cause them to be unable to process facial expressions.
In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that company employees can integrate into their online meeting tools. This is especially valuable for HR managers, who can monitor their employees’ emotional well beings at work and be able to implement initiatives to help if their employees are not happy.
|
## Inspiration
So many people around the world, including those dear to us, suffer from mental health issues such as depression. Here in Berkeley, for example, the resources set aside to combat these problems are constrained. Journaling is one method commonly employed to fight mental health issues; it evokes mindfulness and provides a greater sense of confidence and self-identity.
## What it does
SmartJournal is a place for people to write entries into an online journal. These entries are then routed to and monitored by a therapist, who can see the journals of multiple people under their care. The entries are analyzed via Natural Language Processing and data analytics to give the therapist better information with which they can help their patient, such as an evolving sentiment and scans for problematic language. The therapist in turn monitors these journals with the help of these statistics and can give feedback to their patients.
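The problematic-language scan and the evolving-sentiment view might be sketched like this; the flag terms and moving-average window are illustrative assumptions (the real system relies on Azure's sentiment and key-phrase services):

```python
# Flag terms are illustrative placeholders; a real deployment would use
# a clinically sourced lexicon.
FLAG_TERMS = {"hopeless", "worthless", "alone"}

def scan_entry(text):
    """Return any flagged words found in a journal entry."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words & FLAG_TERMS)

def sentiment_trend(scores, window=3):
    """Moving average of per-entry sentiment scores, for the
    therapist's evolving-sentiment view."""
    out = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```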
## How we built it
We built the web application using the Flask web framework, with Firebase acting as our backend. Additionally, we utilized Microsoft Azure for sentiment analysis and Key Phrase Extraction. We linked everything together using HTML, CSS, and Native Javascript.
## Challenges we ran into
We struggled with vectorizing lots of Tweets to figure out key phrases linked with depression, and it was very hard to test as every time we did so we would have to wait another 40 minutes. However, it ended up working out finally in the end!
## Accomplishments that we're proud of
We managed to navigate through Microsoft Azure and implement Firebase correctly. It was really cool building a live application over the course of this hackathon, and we are happy that we were able to tie everything together at the end, even if at times it seemed very difficult.
## What we learned
We learned a lot about Natural Language Processing, both naively doing analysis and utilizing other resources. Additionally, we gained a lot of web development experience from trial and error.
## What's next for SmartJournal
We aim to provide better analysis of the actual journal entries to further aid therapists in their treatments, and moreover to potentially launch the web application, as we feel it could be really useful for a lot of people in our community.
|
## Inspiration: The Person of Interest TV show itself. We thought such an idea could be extended to serve a social and practical purpose in today's cities.
## What it does: It is a law-enforcement assistant that aids police officials by providing an alerting mechanism to detect major past offenders in possible high-risk areas (for example, past sex offenders near a children's playground). It is a civic hack that uses face detection and recognition on live security feeds to enable police to tighten security in an area if required.
Let's say a person X with a history of major criminal offenses enters an area where the occupants might be especially at risk, such as children or senior citizens. In this case, based on severity of the past offenses, the police are alerted of the presence of person X in the locality. The police may choose to take appropriate measures such as tightening the security in the area.
Our algorithm categorizes crimes based on their degree of severity and color codes them accordingly.
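The severity-based color coding can be sketched as a simple lookup; the offense tiers and colors below are illustrative assumptions, not the project's actual categorization:

```python
# Hypothetical severity tiers (3 = most severe) and their alert colors.
SEVERITY_COLORS = {3: "red", 2: "orange", 1: "yellow"}

OFFENSE_SEVERITY = {"sexual offense": 3, "armed robbery": 3,
                    "assault": 2, "petty theft": 1}

def alert_color(offense):
    """Map a recognized individual's past offense to an alert color;
    unknown or minor offenses produce no alert."""
    severity = OFFENSE_SEVERITY.get(offense.lower(), 0)
    return SEVERITY_COLORS.get(severity, "none")
```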
## How we built it: We used
1. Python and OpenCV for face detection and recognition.
2. SQL database for the backend
3. Twilio APIs for Text web services (To provide quick and robust alerts to policemen)
## Challenges we ran into
1. Achieving high accuracy for face recognition with very limited data.
2. Sending MMS using Twilio APIs
3. Approaching a sensitive subject
## Accomplishments that we're proud of
1. We built something for a use case we strongly and genuinely believe in.
2. Learnt many concepts of Computer Vision and Machine Learning on the go.
## What we learned
Hackathons are super fun!
## What's next for Person of Interest
It has a lot of scope for advancement -
1. Extend the project to narrow down suspects based on the modus operandi, when there is lack of visual data
2. Can also have use cases outside Law and order, e.g.: to support businesses by alerting the manager when a premium customer enters the venture.
|
winning
|
## Inspiration
This year many things that we once took for granted have become a lot less accessible, for some people more than others. We saw this issue that disproportionately affected immunocompromised individuals and aimed to create a community-wide, volunteer-based initiative to connect those less amplified voices. When it becomes dangerous for you to enter a grocery store, all you need is a partner willing to grab a shopping cart for you :)
Cartner removes the risk for those who can't afford it by letting volunteers who were already on their way to the store pick up food and supplies from your list too! It starts with registering your device and filling out a COVID questionnaire.
## What it does
To add ingredients, the users can either enter it directly through the app, or use our custom chrome extension. If you’re browsing and see a recipe you want to make, just click one button and it automatically adds the ingredients for that recipe to your list. Have most of the ingredients? You can type them individually too. All this information is secured in a Firebase realtime database. Our user authentication process allows you to sign in from any device as long as you have the extension on it.
Once a Cartner sees your list, they can accept it and see all the items. As they shop, they can check off what they’ve gotten from your list. Once they check out and verify the cost, Cartner connects to the Stripe payment system and Firebase Firestore, where payment info is easily logged for future use. Users can add a pre-paid amount to cover the bill of items. The Cartner can then drop everything off at your doorstep, with minimal contact required.
## How we built it
Cartner is built on Firebase; the Chrome extension is built in JavaScript and the app in Java using Android Studio.
## Challenges we ran into
Deployment, Working with firebase
## Accomplishments that we're proud of
The two apps!
## What we learned
Time
## What's next for Cartner
Rev2
|
## Inspiration
Our team’s mission is to create an application that helps everyone keep track of their food products.
Food insecurity is a strategic issue: people cannot live long without food. Food safety, on the other hand, is a more "civil" topic: it concerns the quality of products intended for human consumption, their nutritional properties, food hygiene, long-term effects on human health, food additives used in production, and similar issues.
The issue is made more complex by the need to dispose of surplus food, which is hard to justify in a world with so many poor and hungry people. Individuals regularly throw out old food stored in the kitchen, and governments periodically have to dispose of and renew the strategic stocks held in state food supply systems.
## What it does
FridgeNote’s main purpose is to keep track of the timely use of food products and grocery items by storing them in Google Cloud Firebase, allowing users to freely manage their data. Users can create a personal account with an email and password, then start entering grocery products and their expiration dates after signing up or logging in. The app helps you organize every item in your fridge and safely stores your data using Cloud Firestore.
## How we built it
We built our mobile app in Python, using the open-source Kivy framework for the user interface and KivyMD for the design. User account information and inputs are stored in Google Cloud Firestore as the database, making it easy for users to access their information.
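The core bookkeeping (flagging items close to their expiration date) can be sketched in plain Python. This is an illustrative, local-only sketch with assumed field names; the real app keeps these records in Cloud Firestore:

```python
from datetime import date

# Local-only sketch of FridgeNote's core check; the real app keeps these
# records in Cloud Firestore, and the field names here are assumptions.
def expiring_items(items, today, within_days=3):
    """Return (name, days_left) for items expiring within `within_days`
    of `today`; a negative days_left means the item has already expired."""
    soon = []
    for item in items:
        days_left = (item["expires"] - today).days
        if days_left <= within_days:
            soon.append((item["name"], days_left))
    return soon

fridge = [
    {"name": "milk",   "expires": date(2024, 5, 3)},
    {"name": "cheese", "expires": date(2024, 6, 1)},
]
print(expiring_items(fridge, today=date(2024, 5, 1)))  # [('milk', 2)]
```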
## Challenges we ran into
The whole idea of the project was challenging for our team, but we supported each other and learned from our mistakes.
1. One of the biggest challenges was working with Kivy, a cross-platform Python framework, and KivyMD, especially trying to figure out all the features of that framework.
2. Setting up the environment and installing all the packages wasn't easy, but we succeeded.
3. Integrating the codebase with Firebase was tricky, as we had never worked with it before.
## Accomplishments that we're proud of
We're very proud of transferring the user's input and storing it in a database, as none of us had ever worked with one before. We also managed to collect the user's authentication credentials, in this case an email and password, and pass them to the Firebase Authentication database. Additionally, we are proud that we integrated everything and made our application work.
## What we learned
We learned a great deal about Python by creating this project, starting with generating simple lines of code in the terminal and ending with understanding the Kivy framework, using KivyMD to give our app a more polished multi-touch user interface, and connecting Google Cloud Firestore to the application codebase to store each item and the user's data.
## What's next for FridgeNotes
Right now we have many ideas for this project, since we learned everything ourselves. We would like to add:
1. The feature allowing the user to scan expiration dates instead of inputting them
2. Add more design, pages, and pictures to our app
3. Include sound clicking for buttons and music background for the app
|
# Nexus, **Empowering Voices, Creating Connections**.
## Inspiration
The inspiration for our project, Nexus, comes from our experiences as individuals with unique interests and challenges. Often, it isn't easy to meet others who share these interests or who can relate to these challenges through traditional social media platforms.
With Nexus, people can effortlessly meet and converse with others who share these common interests and challenges, creating a vibrant community of like-minded individuals.
Our aim is to foster meaningful connections and empower our users to explore, engage, and grow together in a space that truly understands and values their uniqueness.
## What it Does
In Nexus, we empower our users to tailor their conversational experience. You have the flexibility to choose how you want to connect with others. Whether you prefer one-on-one interactions for more intimate conversations or want to participate in group discussions, our application Nexus has got you covered.
We allow users to either get matched with a single person, fostering deeper connections, or join one of the many voice chats to speak in a group setting, promoting diverse discussions and the opportunity to engage with a broader community. With Nexus, the power to connect is in your hands, and the choice is yours to make.
## How we built it
We built our application using a multitude of services, frameworks, and tools:
* React.js for the core client frontend
* TypeScript for robust typing and abstraction support
* Tailwind for a utility-first CSS framework
* DaisyUI for animations and UI components
* 100ms for real-time audio communication, infrastructure, and client SDK
* Clerk for a seamless and drop-in OAuth provider
* React-icons for drop-in pixel perfect icons
* Vite for simplified building and a fast dev server
* Convex for a real-time backend with vector search and end-to-end type safety
* React-router for client-side navigation
* MLH for our free .tech domain
## Challenges We Ran Into
* Navigating new services and needing to read **a lot** of documentation -- since this was the first time any of us had used Convex and 100ms, it took a lot of research and heads-down coding to get Nexus working.
* Being **awake** to work as a team -- since this hackathon is both **in-person** and **through the weekend**, we had many sleepless nights to ensure we can successfully produce Nexus.
* Working with **very** poor internet throughout the duration of the hackathon, we estimate it cost us multiple hours of development time.
## Accomplishments that we're proud of
* Finishing our project and getting it working! We were honestly surprised at our progress this weekend and are super proud of our end product Nexus.
* Learning a ton of new technologies we would have never come across without Cal Hacks.
* Being able to code for at times 12-16 hours straight and still be having fun!
* Integrating 100ms well enough to experience bullet-proof audio communication.
## What we learned
* Tools are tools for a reason! Embrace them, learn from them, and utilize them to make your applications better.
* Sometimes, more sleep is better -- as humans, sleep can sometimes be the basis for our mental ability!
* How to work together on a team project with many commits and iterate fast on our moving parts.
## What's next for Nexus
* Make Nexus rooms only open at a cadence, ideally twice each day, formalizing the "meeting" aspect for users.
* Allow users to favorite or persist their favorite matches to possibly re-connect in the future.
* Create more options for users within rooms to interact with not just their own audio and voice but other users as well.
* Establishing a more sophisticated and bullet-proof matchmaking service and algorithm.
## 🚀 Contributors 🚀
| | | | |
| --- | --- | --- | --- |
| [Jeff Huang](https://github.com/solderq35) | [Derek Williams](https://github.com/derek-williams00) | [Tom Nyuma](https://github.com/Nyumat) | [Sankalp Patil](https://github.com/Sankalpsp21) |
|
losing
|
## Inspiration
With its substantial benefits for both patients and physicians, e-prescribing is growing rapidly. In fact, every state in the U.S. now allows e-prescribing, and over 70% of U.S. physicians have transmitted at least one prescription electronically. These e-prescriptions can prevent prescription drug errors, speed up the medication process, provide more information to patients, reduce the number of lost prescriptions, and the list goes on. The current issue with e-prescriptions is that the doctor's signature isn't authenticated, making them unreliable, but with **apothecary** we will be able to change that.
## What it does
**apothecary** is a web application that connects patients with doctors to seamlessly allow for the transfer of e-prescriptions. By implementing DocuSign's API, the doctor is able to provide an authenticated and verifiable signature, which can reduce the number of forged prescriptions and related drug overdoses. Furthermore, apothecary immediately emails the patient their prescription, which makes it easier and faster for them to buy their medication without losing the prescription.
## How we built it
We built **apothecary** by integrating the DocuSign and MongoDB APIs into a Node.js web application environment. We used MongoDB, a non-relational database, to store object-like data such as provider and patient user info, keeping a record of relevant user data. DocuSign's API was an integral part of our design, as it allowed us to use an "envelope" system to send and receive signed documents between different users. Node.js was ultimately our platform of choice because it let us use the node package manager to install routing/REST packages such as Express to organize all our REST APIs, which tied in well with the REST-like API that DocuSign uses.
## Challenges we ran into
Implementing the DocuSign API was a bit challenging due to the authentication involved in setting up our development environment with DocuSign. DocuSign makes sure that its API is never compromised (which is a good thing), and as a result its use requires a two-step authentication system in which an API key is exchanged for an authorization token that grants access to API calls. There were multiple ways to do this authentication in the documentation, but ultimately we had to combine different authentication methods, with confusing syntax, to get access to the DocuSign API calls.
## Accomplishments that we're proud of
The biggest accomplishment we're proud of is using DocuSign's API to create an organized workflow in which prescriptions are sent to providers to be signed and then directly to the patient to be downloaded for use. The envelope system we implemented not only gives patients a more convenient way to obtain prescriptions, but also legitimizes any real prescription that a provider signs, hopefully preventing future prescription fraud.
## What we learned
The most valuable skill we learned from completing this project is how to integrate robust, secure APIs into our projects so that we do not have to create every function from scratch. Prior to this project, many of us built simple projects whose features were coded entirely from scratch. With public APIs such as DocuSign and MongoDB, we as developers can focus less on creating complex interactions and more on making our products a complete experience.
## What's next for apothecary
Apothecary hopes to implement the FDA API in the future so that patients can find valuable information about the drugs their providers prescribe. By doing so, apothecary not only brings prescriptions into e-documentation, but also better educates people on practicing safe prescription drug use.
|
## Inspiration
To overcome accessibility issues and delays with prescription medication, specifically targeting those who are unable to pick up their medication and cannot wait to receive it through the mail.
## How I built it
Primarily using React and Google Cloud APIs
## Challenges we ran into
We attempted to use MongoDB and Microsoft Azure, but we realized it would be simpler and more efficient to use one platform and simplify the business logic of our project. Thus, we moved to Google Cloud APIs for storage as well as OpenCV. Another challenge was learning JavaScript for the first time and implementing it in conjunction with Google Firebase and the Google Maps API. The main problem was detecting nearby pharmacies and displaying their markers; however, this eventually turned out to be a simple string error.
## Accomplishments that I'm proud of
Overcoming our challenges and creating a final project that can make an impact in the world!
## What I learned
We learned how to use Firebase, React, and how to implement APIs. Also, we learned to fix bugs and use technology we had never experimented with before.
|
## Inspiration
This project was inspired by my love of walking. We all need more outdoor time, but people often feel like walking is pointless unless they have somewhere to go. I have fond memories of spending hours walking around just to play Pokemon Go, so I wanted to create something that would give people a reason to go somewhere new. I envision friends and family sending mystery locations to their loved ones with a secret message, picture, or video that will be revealed when they arrive. You could send them to a historical landmark, a beautiful park, or just like a neat rock you saw somewhere. The possibilities are endless!
## What it does
You want to go out for a walk, but where to? SparkWalk offers users their choice of exciting "mystery walks". Given a secret location, the app tells you which direction to go and roughly how long it will take. When you get close to your destination, the app welcomes you with a message. For now, SparkWalk has just a few preset messages and locations, but the ability for users to add their own and share them with others is coming soon.
## How we built it
SparkWalk was created using Expo for React Native. The map and location functionalities were implemented using the react-native-maps, expo-location, and geolib libraries.
## Challenges we ran into
Styling components for different devices is always tricky! Unfortunately, I didn't have time to ensure the styling works on every device, but it works well on at least one iOS and one Android device that I tested it on.
## Accomplishments that we're proud of
This is my first time using geolocation and integrating a map, so I'm proud that I was able to make it work.
## What we learned
I've learned a lot more about how to work with React Native, especially using state and effect hooks.
## What's next for SparkWalk
Next, I plan to add user authentication and the ability to add friends and send locations to each other. Users will be able to store messages for their friends that are tied to specific locations. I'll add a backend server and a database to host saved locations and messages. I also want to add reward cards for visiting locations that can be saved to the user's profile and reviewed later. Eventually, I'll publish the app so anyone can use it!
|
losing
|
## Inspiration
The inspiration for our project came from three of our members being involved with Smash in their communities. With one of us an avid competitor, one an avid watcher, and one working in an office where Smash is played quite frequently, we agreed that the way Smash Bros. games are matched and organized needed to be leveled up. We hope that this becomes a frequently used bot for organizations big and small.
## How it Works
We broke the project up into three components: the front end made using React, the back end made using Golang, and a middle layer connecting the back end to Slack using StdLib.
## Challenges We Ran Into
A big challenge we ran into was understanding how exactly to create a bot using StdLib. There were many nuances that had to be accounted for. However, we were helped by amazing mentors from StdLib's booth. Our first specific challenge was getting messages to be ephemeral for the user that called the function. Another adversity was getting DM's to work using our custom bot. Finally, we struggled to get the input from the buttons and commands in Slack to the back-end server. However, it was fairly simple to connect the front end to the back end.
## The Future for 'For Glory'
Due to the time constraints and difficulty, we did not get to implement a tournament function. This is a future goal because this would allow workspaces and other organizations that would use a Slack channel to implement a casual tournament that would keep the environment light-hearted, competitive and fun. Our tournament function could also extend to help hold local competitive tournaments within universities. We also want to extend the range of rankings to have different types of rankings in the future. One thing we want to integrate into the future for the front end is to have a more interactive display for matches and tournaments with live updates and useful statistics.
|
## Problem
In these times of isolation, many of us developers are stuck inside which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy to connect, all in one platform where all you and your developer friends can come together to learn, code, and brainstorm together.
## About
Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface.
We made it one page so that all the tools are accessible on one screen and transitions between them are easier.
We identify this page as a study room, which users can join with a simple URL to collaborate.
Everything is Synced between users in real-time.
## Features
Our platform allows multiple users to enter one room and access tools like watching youtube tutorials, brainstorming on a drawable whiteboard, and code in our inbuilt browser IDE all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers.
## Technologies you used for both the front and back end
We use Node.js and Express for the backend and React on the front end, with Socket.IO establishing bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussion; we realized communication was key for us to succeed in building our project under a time constraint. We also ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at once. We optimized the process significantly for smooth real-time interactions.
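One common fix for the broadcast-flood problem described above is to throttle outgoing sync messages. The project's actual Socket.IO batching logic isn't shown, so this is a generic, hedged sketch:

```python
import time

class Throttle:
    """Drop sync broadcasts that arrive faster than `min_interval` seconds
    apart. A generic sketch of the optimization described above; the
    project's actual Socket.IO batching logic is not shown here."""

    def __init__(self, min_interval, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock
        self._last = float("-inf")

    def allow(self):
        """Return True if a broadcast may be sent now."""
        now = self.clock()
        if now - self._last >= self.min_interval:
            self._last = now
            return True
        return False

# Simulate with a fake clock: 25 ms ticks against a 50 ms throttle.
ticks = iter([0.000, 0.025, 0.050, 0.075, 0.100])
throttle = Throttle(0.05, clock=lambda: next(ticks))
print([throttle.allow() for _ in range(5)])  # [True, False, True, False, True]
```

Coalescing dropped updates into the next allowed broadcast (rather than discarding them) is the usual next step.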
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of.
Our next step is to have each page categorized as an individual room that users can visit.
We want to add more relevant tools and widgets, and expand to other fields of work to increase our user demographic.
We also plan to include interface customization options to let users personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
|
## Inspiration
We wanted to help users save time by letting them quickly skim through massive amounts of unread chats and summarizing the most important points.
## What it does
Identifies and highlights important conversations, assessing sincerity, variety, length, and quality in Messenger messages. Users can click an important summarized message to be directed to the message itself. A visualization of the most important words and key themes is also shown so users can quickly see what they missed.
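The write-up doesn't specify the scoring algorithm, so here is a toy sketch of how two of the stated factors, length and variety, might be combined into an importance score:

```python
def message_score(text, weights=None):
    """Toy importance score combining two of the stated factors: length
    (saturating at 20 words) and variety (unique-word ratio). The real
    sincerity and quality signals are not modeled here."""
    weights = weights or {"length": 0.4, "variety": 0.6}
    words = text.lower().split()
    if not words:
        return 0.0
    length_score = min(len(words) / 20.0, 1.0)
    variety_score = len(set(words)) / len(words)
    return weights["length"] * length_score + weights["variety"] * variety_score

msgs = ["ok", "Meeting moved to 3pm tomorrow, bring the budget slides"]
best = max(msgs, key=message_score)  # the informative message wins
```

Ranking messages by such a score and keeping the top few is one simple way to pick which conversations to highlight.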
## How we built it
We used Python for our backend, and JavaScript, HTML, CSS, and web scraping to integrate our platform.
## Challenges we ran into
Parsing the Facebook DOM in a chrome extension and creating an algorithm to filter for meaningful messages were some challenges we ran into. The most difficult challenge was definitely how we could compare and measure the quality of messages, and pick out what would be relevant for users.
## Accomplishments that we're proud of
The filtration of terrible messages, building an intuitive interface, learning how to navigate Facebook messenger's DOM, and for some of us -- attending our first hackathon!
## What we learned
Summarizing short pieces of fragmented conversations is far more difficult than summarizing large articles, generating a good summary for text data is difficult, and there are many factors in measuring the relevance and comparing messages.
## What's next for Messenger Summarizer
ICO, more ML and NLP components. Add more functionality to the chrome extension for users.
|
winning
|
## Inspiration
This was inspired by Professor Mann's SWIM demonstration. We believe that it can be revolutionary if implemented in 3D. All of us have struggled to visualize in a 3D plane while trying to understand complex calculus questions, Fourier series, electromagnetic waves, 3D atomic orbitals, and many other important concepts. Having a means to display 3D data holographically using lights helps students and professors understand and explain concepts that are hard to visualize.
## What it does
The 3D SWIM is capable of mapping any 3-dimensional object in a virtual space holographically with the use of LED strips. This virtual space is predefined in Unity and extended to an Oculus framework so the object can take any shape, form, or quantity desired. By spinning the motor of the SWIM in the z-x plane at 160 RPM (rotations per minute), a 2-dimensional image can be depicted by the LED strip display. A third dimension is added by manually moving the system forward and backward along the y-axis, thus modeling depth. Therefore, with the addition of electromagnetic input, difficult structures such as the quantum mechanical view of the atom can be viewed through the LEDs.
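The persistence-of-vision principle behind the spinning display can be sketched as follows. The 160 RPM figure comes from the write-up; the slice count and lookup scheme are illustrative assumptions, not the real ESP32 firmware:

```python
# Illustrative sketch of the persistence-of-vision principle (not the real
# ESP32 firmware). The strip sweeps a circle; at each instant it occupies
# one angular slice of a precomputed 2D image and lights the matching LEDs.

RPM = 160          # rotation speed from the write-up
NUM_SLICES = 120   # assumed angular resolution of one revolution
NUM_LEDS = 30      # LEDs on the strip

def current_slice(t_seconds):
    """Return the angular slice index the strip occupies at time t."""
    revolutions = t_seconds * RPM / 60.0
    angle = (revolutions % 1.0) * 360.0
    return int(angle / (360.0 / NUM_SLICES))

def led_states(image, t_seconds):
    """Look up the on/off pattern (one bool per LED) for the current slice."""
    return image[current_slice(t_seconds)]

# Example image: only slice 0 lights every LED.
image = [[s == 0 for _ in range(NUM_LEDS)] for s in range(NUM_SLICES)]
print(current_slice(0.1875))  # half a revolution -> slice 60
```

The third dimension then amounts to swapping in a different `image` for each depth position along the y-axis.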
## How we built it
The SWIM:
To build the Sequential Wave Imprinting Machine, a recent model of Professor Mann’s kit was used as the frame. Attached to the frame is an LED strip consisting of 30 LEDs linked through a data input. Moreover, a motor driver is used to provide the rotational torque required by the SWIM. Both of these components are connected to the ESP32 microcontroller, which acts as a wireless communication device between the SWIM system and the serial output.
Virtual Environment:
The virtual environment was set up in Unity, whose Oculus plugin configures the headset and VR controllers in the Unity game environment. Thus, the position and orientation data can be used to determine the 3-dimensional coordinates and rotation (pitch, roll, and yaw) of the SWIM through this plugin. Once this was determined, a long jointed prism was implemented in Unity to simulate the SWIM. To simulate each of the 30 LEDs in the SWIM, 30 corresponding cubes were jointed in the prism. With this jointed prism, we precisely determined whether each of the 30 cubes was colliding with a secondary object. Hence, if the program detects a collision with a secondary object, it sends a “HIGH” signal to the corresponding LEDs, showing the specific shape and boundary of the object.
Overall system architecture:
To collect positional and rotational data from Unity, each prefab or object had to be attached to a C# file. These files command the position of the SWIM prism (in Unity) and report whether the prism detects an object. To organize this data for the 30 distinct cubes, an “ID” (1 - 30) was set for each cube, with a corresponding boolean value of true or false. This set of live data was then communicated to the Arduino serial port as a series of 0’s and 1’s across the LEDs. Finally, we wirelessly communicate the data from the serial port to the ESP32 attached to the LED strip and turn the LEDs on/off accordingly.
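The 0's-and-1's framing described above might look like the following sketch. The exact wire format the team used between Unity, the serial port, and the ESP32 isn't specified, so the newline-terminated layout here is an assumption:

```python
def encode_frame(collisions):
    """Flatten 30 per-cube collision booleans into a '0'/'1' text frame.

    The newline-terminated layout is an assumption for illustration; the
    team's actual Unity-to-serial wire format may differ.
    """
    assert len(collisions) == 30
    return "".join("1" if hit else "0" for hit in collisions) + "\n"

def decode_frame(frame):
    """Recover the boolean LED states on the receiving (ESP32) side."""
    return [c == "1" for c in frame.strip()]

states = [i % 2 == 0 for i in range(30)]  # alternating on/off pattern
assert decode_frame(encode_frame(states)) == states
```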
## Challenges we ran into
One challenging aspect of this project was the proficiency required in C# and Unity. From the setup of the Oculus virtual reality system to constructing a controller-following SWIM prism, all members of our group were challenged by an unfamiliar coding environment. Specifically, it took the team many hours and significant consultation to develop the jointed SWIM prism, as adjusting 30 different cubes was an insanely difficult process. Additionally, due to the niche nature of C# communication with serial ports, it took many attempts to get data across the two platforms with low latency. The lack of documentation and the inability to debug using the serial monitor added to the difficulties.
## Accomplishments that we're proud of
The completion of such a complex project in such a short period makes this project as a whole an accomplishment to be proud of. Additionally, being able to transfer a real-world object into a virtual object for simulation and odometry was very rewarding. Since systems like SWIM are not widely known or documented, we were able to get a taste of what engineering research and development is like.
## What we learned
We expanded our understanding of the intricacies of motors as well as the complexity of simulating a virtual environment and pulling raw data from the aforementioned environment. Furthermore, skills in simulation and virtual imaging were developed and we now have a more advanced comprehension of the potential relationships between software and the information that travels between platforms.
## What's next for 3-Dimensional Sequential Wave Imprinting Machine
In the near future, we plan on utilizing denser LED strips to give a higher resolution image, a more stable chassis with built-in sensors for the position of the SWIM, and faster refresh rates and tick speeds to ensure minimal visual lag and delay. Future development of the 3D SWIM will continue by interfacing with the MUSE or MUSE S to be able to be controlled using the mind rather than images within a digital interface.
|
Inspired by those 2D displays based on spinning LEDs that show simple lines of text, we imagined that multiple layers of spinning LEDs could be used to create 3D objects. The result is our volumetric display: a display capable of showing a variety of shapes in 3D space.
The machine consists of 40 LEDs, all soldered to a few breadboards with shift registers on them; all of the shift registers connect to an Arduino, thereby allowing the Arduino to fully control all of the LEDs. The Arduino and the LEDs sit on a rotating platform that we made from laser-cut acrylic. The Arduino blinks the LEDs in time with the rotation of the platform in order to produce 3D objects.
Over the course of the project we encountered many difficulties, including producing literally hundreds of solder joints, attempting to stabilize a platform that would wobble and rip itself apart when spinning, and syncing the Arduino as best as we could to the speed of the motor (while that speed changed as its battery drained). However, we succeeded in producing a device that can indeed display 3D objects, and we have a demonstration program that produces the outline of a cube.
|
# 🤖🖌️ [VizArt Computer Vision Drawing Platform](https://vizart.tech)
Create and share your artwork with the world using VizArt - a simple yet powerful air drawing platform.

## 💫 Inspiration
> "Art is the signature of civilizations." - Beverly Sills
Art is a gateway to creative expression. With [VizArt](https://vizart.tech/create), we are pushing the boundaries of what's possible with computer vision and enabling a new level of artistic expression. ***We envision a world where people can interact with both the physical and digital realms in creative ways.***
We started by pushing the limits of what's possible with customizable deep learning, streaming media, and AR technologies. With VizArt, you can draw in art, interact with the real world digitally, and share your creations with your friends!
> "Art is the reflection of life, and life is the reflection of art." - Unknown
Air writing is made possible with hand gestures, such as a pen gesture to draw and an eraser gesture to erase lines. With VizArt, you can turn your ideas into reality by sketching in the air.



Our computer vision algorithm enables you to interact with the world using a color picker gesture and a snipping tool to manipulate real-world objects.

> "Art is not what you see, but what you make others see." - Claude Monet
The features I listed above are great! But what's the point of creating something if you can't share it with the world? That's why we've built a platform for you to showcase your art. You'll be able to record and share your drawings with friends.


I hope you will enjoy using VizArt and share it with your friends. Remember: Make good gifts, Make good art.
# ❤️ Use Cases
### Drawing Competition/Game
VizArt can be used to host a fun and interactive drawing competition or game. Players can challenge each other to create the best masterpiece, using the computer vision features such as the color picker and eraser.
### Whiteboard Replacement
VizArt is a great alternative to traditional whiteboards. It can be used in classrooms and offices to present ideas, collaborate with others, and make annotations. Its computer vision features make drawing and erasing easier.
### People with Disabilities
VizArt enables people with disabilities to express their creativity. Its computer vision capabilities facilitate drawing, erasing, and annotating without the need for physical tools or contact.
### Strategy Games
VizArt can be used to create and play strategy games with friends. Players can draw their own boards and pieces, and then use the computer vision features to move them around the board. This allows for a more interactive and engaging experience than traditional board games.
### Remote Collaboration
With VizArt, teams can collaborate remotely and in real-time. The platform is equipped with features such as the color picker, eraser, and snipping tool, making it easy to interact with the environment. It also has a sharing platform where users can record and share their drawings with anyone. This makes VizArt a great tool for remote collaboration and creativity.
# 👋 Gestures Tutorial





# ⚒️ Engineering
Ah, this is where even more fun begins!
## Stack
### Frontend
We designed the frontend with Figma and after a few iterations, we had an initial design to begin working with. The frontend was made with React and Typescript and styled with Sass.
### Backend
We wrote the backend in Flask. To implement uploading videos along with their thumbnails we simply use a filesystem database.
## Computer Vision AI
We use MediaPipe to grab the coordinates of the hand joints and to upload images. With the coordinates, we plot with CanvasRenderingContext2D on the canvas, where we use algorithms and vector calculations to determine the gesture. Then, for image generation, we use the DeepAI open-source library.
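The exact vector rules VizArt uses aren't shown, so here is a hedged Python sketch of gesture classification from 2D landmark coordinates, using the crude heuristic that a finger counts as extended when its tip is farther from the wrist than its knuckle:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def finger_extended(wrist, knuckle, tip):
    """Crude heuristic: a finger is extended when its tip is farther
    from the wrist than its knuckle is."""
    return dist(wrist, tip) > dist(wrist, knuckle)

def classify_gesture(landmarks):
    """Toy classifier over named 2D landmarks (assumed rules, not VizArt's):
    index extended + middle folded -> 'pen' (draw)
    index and middle both extended -> 'eraser' (erase)
    """
    index = finger_extended(landmarks["wrist"], landmarks["index_knuckle"],
                            landmarks["index_tip"])
    middle = finger_extended(landmarks["wrist"], landmarks["middle_knuckle"],
                             landmarks["middle_tip"])
    if index and middle:
        return "eraser"
    if index:
        return "pen"
    return "none"

pen_hand = {
    "wrist": (0.0, 0.0),
    "index_knuckle": (0.0, 0.4), "index_tip": (0.0, 0.9),    # extended
    "middle_knuckle": (0.1, 0.4), "middle_tip": (0.1, 0.3),  # folded
}
print(classify_gesture(pen_hand))  # pen
```

MediaPipe Hands actually returns 21 named landmarks per hand; the same idea extends to the color-picker and snipping gestures with additional rules.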
# Experimentation
We were using generative AI to generate images; however, we ran out of time.


# 👨💻 Team (”The Sprint Team”)
@Sheheryar Pavaz
@Anton Otaner
@Jingxiang Mo
@Tommy He
|
losing
|
## 💡 Inspiration 💯
Have you ever faced a trashcan with a seemingly endless number of bins, each one marked with a different type of recycling? Have you ever held some trash in your hand, desperately wondering if it can be recycled? Have you ever been forced to sort your trash in your house, the different bins taking up space and being an eyesore? Inspired by this dilemma, we wanted to create a product that took all of the tedious decision-making out of your hands. Wouldn't it be nice to be able to mindlessly throw your trash in one place, and let AI handle the sorting for you?
## ♻️ What it does 🌱
IntelliBin is an AI trashcan that handles your trash sorting for you! Simply place your trash onto our machine, and watch it be sorted automatically by IntelliBin's servo arm! Furthermore, you can track your stats and learn more about recycling on our React.js website.
## 🛠️ How we built it 💬
Arduino/C++ Portion: We used C++ code on the Arduino to control a servo motor and an LED based on serial input commands. Importing the servo library allows us to access functions that control the motor and turn on the LED colours. We also used the Serial library in Python to take input from the main program and send it to the Arduino. The Arduino then sent binary data to the servo motor, correctly categorizing garbage items.
Website Portion: We used React.js to build the front end of the website, including a profile section with user stats, a leaderboard, a shop to customize the user's avatar, and an information section. MongoDB was used to build the user registration and login process, storing usernames, emails, and passwords.
Google Vision API: In tandem with computer vision, we were able to take the camera input and feed it through the Vision API to interpret what was in front of us. Using this output, we could tell the servo motor which direction to turn based on if it was recyclable or not, helping us sort which bin the object would be pushed into.
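The Pi-side decision step can be sketched as follows. The label set and the one-byte serial protocol here are assumptions for illustration, not the exact values our Arduino sketch uses:

```python
# Hypothetical mapping from Vision API labels to bins.
RECYCLABLE = {"bottle", "tin can", "aluminum can", "cardboard", "paper"}

def classify(labels):
    """Return 'R' if any detected label looks recyclable, else 'N'."""
    return "R" if any(l.lower() in RECYCLABLE for l in labels) else "N"

def command_for(labels):
    """Byte to write over serial: b'1' turns the servo arm toward the
    recycling bin, b'0' toward garbage (the protocol is an assumption)."""
    return b"1" if classify(labels) == "R" else b"0"
```

On the Pi, something like `serial.Serial("/dev/ttyACM0", 9600).write(command_for(labels))` would then forward the decision (the port name is a guess).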
## 🚧 Challenges we ran into ⛔
* Connecting the Arduino to the arms
* Determining the optimal way to manipulate the Servo arm, as it could not rotate 360 degrees
* Using global variables on our website
* Configuring MongoDB to store user data
* Figuring out how and when to detect the type of trash on the screen
## 🎉 Accomplishments that we're proud of 🏆
In a short span of 24 hours, we are proud to:
* Successfully engineer and program a servo arm to sort trash into two separate bins
* Connect and program LED lights that change colors varying on recyclable or non-recyclable trash
* Utilize Google Cloud Vision API to identify and detect different types of trash and decide if it is recyclable or not
* Develop an intuitive website with React.js that includes login, user profile, and informative capabilities
* Drink a total of 9 cans of Monsters combined (the cans were recycled)
## 🧠 What we learned 🤓
* How to program in C++
* How to control servo arms at certain degrees with an Arduino
* How to parse and understand Google Cloud Vision API outputs
* How to connect a MongoDB database to create user authentication
* How to use global state variables in Node.js and React.js
* What types of items are recyclable
## 🌳 Importance of Recycling 🍀
* Conserves natural resources by reusing materials
* Requires less energy compared to using virgin materials, decreasing greenhouse gas emissions
* Reduces the amount of waste sent to landfills
* Decreases disruption to ecosystems and habitats
## 👍How Intellibin helps 👌
**Efficient Sorting:** Intellibin utilizes AI technology to efficiently sort recyclables from non-recyclables. This ensures that the right materials go to the appropriate recycling streams.
**Increased Recycling Rates:** With Intellibin making recycling more user-friendly and efficient, it has the potential to increase recycling rates.
**User Convenience:** By automating the sorting process, Intellibin eliminates the need for users to spend time sorting their waste manually. This convenience encourages more people to participate in recycling efforts.
**In summary:** Recycling is crucial for environmental sustainability, and Intellibin contributes by making the recycling process more accessible, convenient, and effective through AI-powered sorting technology.
## 🔮 What's next for Intellibin⏭️
The next steps for Intellibin include refining the current functionalities of our hack, along with exploring new features. First, we wish to expand the trash detection database, improving capabilities to accurately identify various items being tossed out. Next, we want to add more features such as detecting and warning the user of "unrecyclable" objects. For instance, Intellibin could notice whether the cap is still on a recyclable bottle and remind the user to remove the cap. In addition, the sensors could notice when there is still liquid or food in a recyclable item, and send a warning. Lastly, we would like to deploy our website so more users can use Intellibin and track their recycling statistics!
|
## Inspiration
I was walking down the streets of Toronto and noticed how there always seemed to be cigarette butts outside of any building. It felt disgusting to me, especially since they polluted the city so much. After reading a few papers on studies of cigarette butt litter, I learned that cigarette butts are actually the #1 most littered object in the world and are toxic waste. Here are some quick facts:
* About **4.5 trillion** cigarette butts are littered on the ground each year
* 850,500 tons of cigarette butt litter is produced each year. By weight, that is about **6 and a half CN Towers'** worth of litter, which is huge!
* In the city of Hamilton, cigarette butt litter can make up to **50%** of all the litter in some years.
* The city of San Francisco spends up to $6 million per year on cleaning up cigarette butt litter
Thus our team decided that we should develop a cost-effective robot to rid the streets of cigarette butt litter
## What it does
Our robot is a modern-day Wall-E. The main objectives of the robot are to:
1. Safely drive around the sidewalks in the city
2. Detect and locate cigarette butts on the ground
3. Collect and dispose of the cigarette butts
## How we built it
Our basic idea was to build a robot with a camera that could find cigarette butts on the ground and collect those cigarette butts with a roller-mechanism. Below are more in-depth explanations of each part of our robot.
### Software
We needed a method to easily detect cigarette butts on the ground, so we used computer vision. We made use of this open-source project: [Mask R-CNN for Object Detection and Segmentation](https://github.com/matterport/Mask_RCNN) and [pre-trained weights](https://www.immersivelimit.com/datasets/cigarette-butts). We used a Raspberry Pi and a Pi Camera to take pictures of cigarettes, process the images with TensorFlow, and then output the coordinates of each cigarette's location for the robot. The Raspberry Pi would then send these coordinates to an Arduino over UART.
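The hand-off from detection output to motion command might look like this. The centroid step and the message framing are our assumptions for illustration, not the exact protocol:

```python
def mask_centroid(mask):
    """Centroid (x, y) of a binary mask given as rows of 0/1 values —
    a stand-in for a Mask R-CNN instance mask."""
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs, ys, n = xs + x, ys + y, n + 1
    return (xs / n, ys / n) if n else None

def uart_message(cx, cy):
    """Encode target pixel coordinates as a newline-terminated ASCII
    message for the Arduino (framing is an assumption)."""
    return f"{int(cx)},{int(cy)}\n".encode()
```

The Arduino side would then parse the two comma-separated integers and plan a path to that point.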
### Hardware
The Arduino controls all the hardware on the robot, including the motors and roller-mechanism. The basic idea of the Arduino code is:
1. Drive a pre-determined path on the sidewalk
2. Wait for the Pi Camera to detect a cigarette
3. Stop the robot and wait for a set of coordinates from the Raspberry Pi to be delivered with UART
4. Travel to the coordinates and retrieve the cigarette butt
5. Repeat
We use sensors such as a gyro and accelerometer to detect the speed and orientation of our robot to know exactly where to travel. The robot uses an ultrasonic sensor to avoid obstacles and make sure that it does not bump into humans or walls.
### Mechanical
We used Solidworks to design the chassis, roller/sweeper-mechanism, and mounts for the camera of the robot. For the robot, we used VEX parts to assemble it. The mount was 3D-printed based on the Solidworks model.
## Challenges we ran into
1. Distance: Working remotely made designing, working together, and transporting supplies challenging. Each group member worked on independent sections and drop-offs were made.
2. Design Decisions: We constantly had to find the most realistic solution based on our budget and the time we had. This meant that we couldn't cover a lot of edge cases, e.g. what happens if the robot gets stolen, what happens if the robot is knocked over ...
3. Shipping Complications: Some desired parts would not have shipped until after the hackathon. Alternative choices were made and we worked around shipping dates
## Accomplishments that we're proud of
We are proud of being able to efficiently organize ourselves and create this robot, even though we worked remotely. We are also proud of being able to create something that contributes to our environment and helps keep our Earth clean.
## What we learned
We learned about machine learning and Mask-RCNN. We never dabbled with machine learning much before so it was awesome being able to play with computer-vision and detect cigarette-butts. We also learned a lot about Arduino and path-planning to get the robot to where we need it to go. On the mechanical side, we learned about different intake systems and 3D modeling.
## What's next for Cigbot
There is still a lot to do for Cigbot. Below are some following examples of parts that could be added:
* Detecting different types of trash: It would be nice to be able to gather not just cigarette butts, but any type of trash such as candy wrappers or plastic bottles, and to also sort them accordingly.
* Various Terrains: Though Cigbot is made for the sidewalk, it may encounter rough terrain, especially in Canada, so we figured it would be good to add some self-stabilizing mechanism at some point
* Anti-theft: Cigbot is currently small and can easily be picked up by anyone. This would be dangerous if we left the robot in the streets since it would easily be damaged or stolen (eg someone could easily rip off and steal our Raspberry Pi). We need to make it larger and more robust.
* Environmental Conditions: Currently, Cigbot is not robust enough to handle more extreme weather conditions such as heavy rain or cold. We need a better encasing to ensure Cigbot can withstand extreme weather.
## Sources
* <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3397372/>
* <https://www.cbc.ca/news/canada/hamilton/cigarette-butts-1.5098782>
* [https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:~:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years](https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:%7E:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years).
|
## Inspiration
There is no such data-set available which can be used to do aspect based analysis of the user reviews. We wanted to make something which helps companies get detailed analysis of reviews with respect to various aspects of their service and help them improve based on it.
## What it does
Our algorithm works in two steps:
1) We embed aspect vectors in a three-dimensional space; given a review as a data point, we convert it into a vector and project it into the aspect vector space to find the aspect most associated with it. We used "customer service", "punctuality", "cancellation", "comfort", and "miscellaneous" as aspects for JetBlue airline reviews.
2) After clustering by aspect, we do sentiment analysis of each review and categorize it as "positive", "negative", or "neutral". This can be used to get insights on which aspects are good and which aspects need improvement.
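The aspect-association step boils down to cosine similarity. Here is a minimal sketch with toy vectors; in practice the review and aspect vectors come from an embedding model, not hand-written values:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def closest_aspect(review_vec, aspect_vecs):
    """Return the aspect whose vector is most cosine-similar to the
    review's vector."""
    return max(aspect_vecs, key=lambda name: cosine(review_vec, aspect_vecs[name]))
```

For example, with `aspects = {"comfort": [1, 0, 0], "punctuality": [0, 1, 0]}`, a review vector of `[0.9, 0.1, 0]` would be assigned to `"comfort"`.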
## How I built it
First we scraped review data from various sources such as Twitter, Instagram, TripAdvisor, Reddit, Airquality.com, etc. In total we collected around 25,000 reviews.
Then we used Google's Natural Language API to do sentiment analysis of the reviews and categorize each as "positive", "negative", or "neutral".
For each review, we computed the cosine similarity between the review vector and all aspect vectors, and associated the review with the closest aspect. That way we know whether a given review is about customer service, comfort, cancellation, or punctuality. It is very easy to add new aspects to our application. After that, we analyzed the sentiment of the review to get information about the user's experience with that aspect.
Finally, we build front end using React to display results of our algorithm.
## Challenges I ran into
As there is no readily available data-set about airline reviews, It was difficult to collect such amount of review data-set which can give reasonable performance. So first challenge was gathering data.
To get accurate results we needed aspect vectors which strongly represented the aspects which we wanted to learn. After that we had to experiment with various distance functions between vectors to see which one gave most reasonable results and we settled on cosine similarity function.
Then combining the sentiment analysis results from Google's Natural Language API with the output of our aspect-association algorithm was a bit of a challenge, as was getting a front-end dashboard that could visualize the results the way we wanted.
## Accomplishments that I'm proud of
Getting highly functional aspect predictions in unsupervised manner.
## What I learned
Thinking through how to implement a data science project end-to-end from data collection to data cleaning to modelling and visualization of results.
## What's next for DeepFly
This project can be easily extended for other kinds of aspects and for reviews of any kind of services.
|
winning
|
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and Chart.js. The backend was built on Node (with Express), along with Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-text, Google Diarization, Stanford Empath, SKLearn and Glove (for word-to-vec).
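One common way to feed transcribed utterances into a classifier, as in the word-to-vec step above, is to average word vectors. This sketch assumes a tiny lookup table in place of full GloVe embeddings:

```python
def sentence_vector(tokens, embeddings):
    """Average the word vectors of the in-vocabulary tokens — a rough
    stand-in for the GloVe word-to-vec step. Out-of-vocabulary tokens
    are skipped; returns None if nothing matched."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```

The resulting fixed-length vector can then be passed to a classifier such as an SKLearn model alongside Empath category counts.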
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized but we used trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging as we ran up against some limitations of javascript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned far too much about how computers store numbers (:p), and did a whole lot of processing in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
|
## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at a greater risk. As the mental health epidemic surges and support at its capacity, we sought to build something to connect trained volunteer companions with people in distress in several ways for convenience.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frames into React, using Acovode for back-end development.
## Challenges I ran into
Setting up the firebase to connect to the front end react app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for mental health accessibility is essential but unmet still with all the recent efforts. Using Figma, firebase and trying out many open-source platforms to build apps.
## What's next for HearMeOut
We hope to increase chatbot’s support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
|
## Inspiration
Want to experience culture through an authentic lens rather than through the eyes of a tourist
## What it does
Connects users with a local guide from a country that will take them to the hidden spots that are not filled with tourist traps.
## How we built it
We built it using ReactJS
## Challenges we ran into
We ran into many challenges especially concerning the technical aspect. We were not able to fully develop the website and only have a mockup.
## Accomplishments that we're proud of
Proud to come up with a startup level idea that we could continue after the hackathon.
## What we learned
Learned how to setup a business idea
## What's next for Tracks
What's next for Tracks is to finish developing the website.
|
winning
|
## Inspiration
After witnessing the power of collectible games and card systems, our team was determined to prove that this enjoyable and unique game mechanism wasn't just some niche and could be applied to a social activity game that anyone could enjoy or use to better understand one another (taking a note from Cards Against Humanity's book).
## What it does
Words With Strangers pairs users up with a friend or stranger and gives each user a queue of words that they must make their opponent say without saying this word themselves. The first person to finish their queue wins the game. Players can then purchase collectible new words to build their deck and trade or give words to other friends or users they have given their code to.
## How we built it
Words With Strangers was built on Node.js with core HTML and CSS styling as well as usage of some bootstrap framework functionalities. It is deployed on Heroku and also makes use of TODAQ's TaaS service API to maintain the integrity of transactions as well as the unique rareness and collectibility of words and assets.
## Challenges we ran into
The main area of difficulty was incorporating TODAQ TaaS into our application, since it was a new service that none of us had any experience with. It isn't blockchain, but none of us had ever even touched application purchases before. Furthermore, creating a user-friendly UI that was fully functional with all our target functionalities was also a large challenge that we tackled.
## Accomplishments that we're proud of
Our UI not only has all our desired features, but it also is user-friendly and stylish (comparable with Cards Against Humanity and other genre items), and we were able to add multiple word packages that users can buy and trade/transfer.
## What we learned
Through this project, we learned a great deal about the background of purchase transactions on applications. More importantly, though, we gained knowledge on the importance of what TODAQ does and were able to grasp knowledge on what it truly means to have an asset or application online that is utterly unique and one of a kind; passable without infinite duplicity.
## What's next for Words With Strangers
We would like to enhance the UI for WwS to look even more user friendly and be stylish enough for a successful deployment online and in app stores. We want to continue to program packages for it using TODAQ and use dynamic programming principles moving forward to simplify our process.
|
## Inspiration
Reflecting on 2020, we were challenged with a lot of new experiences, such as online school. Hearing a lot of stories from our friends, as well as our own experiences, doing everything from home can be very distracting. Looking at a computer screen for such a long period of time can be difficult for many as well, and ultimately it's hard to maintain a consistent level of motivation. We wanted to create an application that helped to increase productivity through incentives.
## What it does
Our project is a functional to-do list application that also serves as a 5v5 multiplayer game. Players create a todo list of their own, and each completed task grants "todo points" that they can allocate towards their attributes (physical attack, physical defense, special attack, special defense, speed). However, tasks that are not completed serve as a punishment by reducing todo points.
Once everyone is ready, the team of 5 will be matched up against another team of 5 with a preview of everyone's stats. Clicking "Start Game" will run the stats through our algorithm that will determine a winner based on whichever team does more damage as a whole. While the game is extremely simple, it is effective in that players aren't distracted by the game itself because they would only need to spend a few minutes on the application. Furthermore, a team-based situation also provides incentive as you don't want to be the "slacker".
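A minimal sketch of the team-damage comparison might look like the following. The damage formula here is our own illustrative guess at one possible balance, not the game's actual algorithm:

```python
def player_damage(stats, enemy):
    """Damage one player deals: each attack stat is reduced by the
    matching defense, with speed as a small bonus (formula is illustrative)."""
    phys = max(stats["phys_atk"] - enemy["phys_def"], 0)
    spec = max(stats["spec_atk"] - enemy["spec_def"], 0)
    return phys + spec + stats["speed"] * 0.1

def winner(team_a, team_b):
    """Pair players up across teams; whichever team deals more total
    damage wins (ties go to team A)."""
    dmg_a = sum(player_damage(p, e) for p, e in zip(team_a, team_b))
    dmg_b = sum(player_damage(p, e) for p, e in zip(team_b, team_a))
    return "A" if dmg_a >= dmg_b else "B"
```

The `max(..., 0)` clamps prevent a high-defense player from "healing" the attacker, one of the balance pitfalls a formula like this has to avoid.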
## How we built it
We used the Django framework, as it is our second time using it and we wanted to gain some additional practice. Therefore, the languages we used were Python for the backend, HTML and CSS for the frontend, as well as some SCSS.
## Challenges we ran into
As we all worked on different parts of the app, it was a challenge linking everything together. We also wanted to add many things to the game, such as additional in-game rewards, but unfortunately didn't have enough time to implement those.
## Accomplishments that we're proud of
As it is only our second hackathon, we're proud that we could create something fully functioning that connects many different parts together. We spent a good amount of time on the UI as well, so we're pretty proud of that. Finally, creating a game is something that was all outside of our comfort zone, so while our game is extremely simple, we're glad to see that it works.
## What we learned
We learned that game design is hard. It's hard to create an algorithm that is truly balanced (there's probably a way to figure out in our game which stat is by far the best to invest in), and we had doubts about how our application would do if we actually released it, if people would be inclined to play it or not.
## What's next for Battle To-Do
Firstly, we would look to create the registration functionality, so that player data can be generated. After that, we would look at improving the overall styling of the application. Finally, we would revisit game design - looking at how to improve the algorithm to make it more balanced, adding in-game rewards for more incentive for players to play, and looking at ways to add complexity. For example, we would look at implementing a feature where tasks that are not completed within a certain time frame leads to a reduction of todo points.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
partial
|
## Inspiration
There's a lot of negativity in news articles nowadays. "News fatigue" is a phenomenon where people become tired of the constant barrage of negative news. We thought of making a **positive only** news app. For some, a positive news app may be a welcome alternative that helps them stay informed without feeling emotionally drained.
## What it does
Proton News is a full-stack application, built completely by us. Here's what it does:
* Queries NewsAPI for articles based on a user's search category
* Uses Cohere to do a sentiment analysis of the articles, and filters out 'negative articles'
* Takes the positive articles, uses Cohere to summarize the articles and stores them on CockroachDB for easy access
* Retrieves the articles from Cockroach DB whenever a user runs the page
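The filter-then-summarize loop described above could be sketched like this, with the Cohere calls stubbed out as injected functions (the `POSITIVE` label and the article shape are assumptions):

```python
def positive_articles(articles, sentiment_fn, summarize_fn):
    """Keep only articles the classifier marks POSITIVE, attaching a
    summary to each. sentiment_fn and summarize_fn stand in for the
    Cohere classify and summarize calls."""
    kept = []
    for art in articles:
        if sentiment_fn(art["text"]) == "POSITIVE":
            kept.append({**art, "summary": summarize_fn(art["text"])})
    return kept
```

The returned records, each carrying its summary, are what would then be inserted into CockroachDB for fast retrieval.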
## How we built it
The application primarily has two components: a frontend, and a backend.
* The frontend is built on the Next.js framework, with server-side rendering for ultra-fast loads
* The runtime used is Bun, which is a Node.js alternative popular for its low latency and responsiveness
* The backend is primarily built in FastAPI on Python
* The database where the articles are stored is serverless CockroachDB
* Sentiment analysis and summarizing of articles is done using Cohere
## Challenges we ran into
The biggest challenge was working with an ORM and a relational database. We had little experience with relational databases, so we spent a large amount of time learning to query and insert data. The news API we worked with also returned different types of data (e.g. HTML and plain strings), making text parsing difficult.
## Accomplishments that we're proud of
The project was very ambitious, with plans to integrate a full-fledged SQL database such as CockroachDB, a front-end interface in React, and a back-end in Python. We therefore couldn't believe that the MVP was completed in just under 33 hours. It was our blood (nosebleeds), sweat, midnight starvation, and definitely the all-nighters that made the project much more special. Furthermore, each member developed much better skills in SQL databases, full-stack development, and even graphic design.
## What we learned
We learned how to work with relational databases (CockroachDB), developing frontend using Next.js, and using third-party services such as Cohere. We also learned to plan our projects thoroughly and research the feasibility of what we wanted to achieve.
## What's next for ProtonNews
ProtonNews still has a lot to offer. We're planning on integrating more of Cohere's features, such as rerank and semantic search, to allow a wider search range. The database (although functional) has fairly primitive functionality; a better schema could be constructed to further optimize the queries, making responses much faster. Finally, deployment is a must-have: the front end will be quickly delivered with Next.js, while the back end will be hosted on Google Cloud.
|
## Why
With Black Friday looming and COVID anxiety spiking, impulsive shopping is more tempting than ever before. We made Lychee to combat these impulsive (and costly!) habits and introduce a bit of fun and delight into online shopping.
## What
Lychee is a Google Chrome extension that keeps an eye out for possible impulsive purchases and incentivizes thoughtful shopping with light gamification. You earn points by avoiding impulse and saving money, which you can then use to purchase lychees to feed your virtual pet — save more money -> feed more lychees -> pet grows big and strong -> ??? -> profit!
## Challenges
Our technical challenges were (1) learning Google Chrome’s API due to the many specifics around the Chrome browser and (2) debugging asynchronous programming in Javascript. As a team, we had trouble finding the right rhythm to work together, but communicating more often helped us collaborate and work more quickly.
## Accomplishments
It works! We're proud of our idea and the final product works how we envisioned it. We also all learned a lot and made new friends!
## What's next for Lychee
Currently, Lychee only works on Amazon — we'd love to expand it to more shopping sites. We also want to make its "impulse detection" more robust, reward users for keeping items in their cart for longer, and expand the gamification with cosmetics and other incentives!
|
## Inspiration
Between 1994 and 2013 there were 6,873 natural disasters worldwide, which claimed 1.35 million lives, or almost 68,000 lives on average each year. In many of these disasters, people don't know where to go or what areas are safe. For example, during Hurricane Harvey, many people learned about dangers such as alligators or flooding only from other people and through social media.
## What it does
* Post a natural disaster hazard in your area
* Crowd-sourced hazards
* Pulls government severe weather data
* IoT sensor system to take atmospheric measurements and display on map
* Twitter social media feed of trending natural disasters in the area
* Machine learning image processing to analyze posted images of natural disaster hazards
Our app **Eye in the Sky** allows users to post about various natural disaster hazards at their location, upload a photo, and share a description of the hazard. These points are crowd-sourced and displayed on a map, which also pulls live data from the web on natural disasters. Users can also view a Twitter feed of current trending information on the natural disaster in their location. The last feature is the IoT sensor system, which takes measurements of the environment in real time and displays them on the map.
## How I built it
We built this app using Android Studio. Our user-created posts are hosted on a Parse database (run with MongoDB). We pull data from the National Oceanic and Atmospheric Administration's severe weather data inventory. We used a Particle Electron to collect atmospheric sensor data, and used AWS to store this data as JSON.
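As a rough illustration of the sensor side, an atmospheric reading could be packaged as JSON before being stored on AWS. The field names and helper below are our own assumptions, not the project's actual schema:

```python
import json
import time

# Hypothetical sketch of how a reading from the Particle Electron might be
# serialized as JSON before upload to AWS (field names are illustrative).
def make_reading(lat: float, lon: float, temp_c: float, humidity: float) -> str:
    payload = {
        "lat": lat,
        "lon": lon,
        "temperature_c": temp_c,
        "humidity_pct": humidity,
        "timestamp": int(time.time()),  # seconds since epoch, for map ordering
    }
    return json.dumps(payload)
```

The map client can then parse each stored document and drop a point at `(lat, lon)` with the measurements attached.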
## Challenges I ran into
We had some issues setting up an AWS instance to run our image processing on the backend, but other than that, decently smooth sailing.
## Accomplishments that I'm proud of
We were able to work really well together, using everyone's strengths as well as combining the pieces into a cohesive app that we're really proud of. Everyone's piece was integrated well into the app, and I think we demonstrated good software collaboration.
## What I learned
We learned how to effectively combine our code across various languages and databases, which is a big challenge in software as a whole. We learned Android Development, how to set up AWS, and how to create databases to store custom data. Most importantly, we learned to have fun!
## What's next for Eye in the Sky
In the future, we would like to add verification of people's self-reported hazards (through machine learning and through up-voting and down-voting)
We would also like to improve the efficiency of our app and reduce reliance on network because there might not be network, or very poor network, in a natural disaster
We would like to add more datasets from online, and to display storms with bigger bubbles, rather than just points
We would also like to attach our sensor to drones or automobiles to gather more weather data points to display on our map
|
## Inspiration
One of our team members' grandfathers went blind after slipping and hitting his spinal cord, going from a completely independent individual to reliant on others for everything. The lack of options was upsetting: how could a man who was so independent be so severely limited by a small accident? There is current technology out there for blind individuals to navigate their home; however, there is no such technology that allows blind AND frail individuals to do so. With an increasingly aging population, Elderlyf is here to be that technology. We hope to help our team member's grandfather and others like him regain independence by making a tool that is affordable, efficient, and liberating.
## What it does
Ask your Alexa to take you to a room in the house, and Elderlyf will automatically detect which room you're currently in, mapping out a path from your current room to your target room. With vibration disks strategically located underneath the hand rests, Elderlyf gives you haptic feedback to let you know when objects are in your way and in which direction you should turn. With an intelligent turning system, Elderlyf gently helps with turning corners and avoiding obstacles.
## How I built it
With a Jetson Nano and RealSense cameras, front-view obstacles are detected and a map of the possible routes is generated; SLAM localization was also achieved using these technologies. An Alexa and the AWS speech-to-text API are used to activate the mapping and navigation algorithms. Two servo motors independently apply a gentle brake to either wheel to aid users when turning and avoiding obstacles, and piezoelectric vibrating disks provide haptic feedback on which direction to turn and when obstacles are close.
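The obstacle-to-haptics mapping could look roughly like the following. This is our own simplification, not Elderlyf's code: the bearing/distance inputs, the alert range, and the left/right-disk convention are all assumptions.

```python
# Illustrative sketch: map an obstacle's bearing and distance (e.g. from a
# depth camera) to vibration intensities for the left/right disks under the
# hand rests. All thresholds and conventions here are assumptions.
def haptic_feedback(bearing_deg: float, distance_m: float,
                    alert_range_m: float = 1.5) -> tuple[float, float]:
    """Return (left, right) vibration intensities in [0, 1].

    bearing_deg < 0 means the obstacle is to the left, > 0 to the right.
    Closer obstacles vibrate harder; the disk on the obstacle's side fires.
    """
    if distance_m >= alert_range_m:
        return (0.0, 0.0)                    # nothing close enough to warn about
    strength = 1.0 - distance_m / alert_range_m  # linear ramp as obstacle nears
    if bearing_deg < 0:
        return (strength, 0.0)               # obstacle on the left: buzz left disk
    return (0.0, strength)
```

The same signal could gate the servo brakes, e.g. braking the wheel opposite the obstacle to steer gently away.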
## Challenges I ran into
Mounting the turning assistance system was a HUGE challenge as the setup needed to be extremely stable. We ended up laser-cutting mounting pieces to fix this problem.
## Accomplishments that we're proud of
We're proud of creating a project that is both software and hardware intensive and yet somehow managing to get it finished up and working.
## What I learned
Learned that the RealSense camera really doesn't like working on the Jetson Nano.
## What's next for Elderlyf
Hoping to incorporate a microphone to the walker so that you can ask Alexa to take you to various rooms even though the Alexa may be out of range.
|
## Inspiration
The inspiration for this project came from the group's passion to build health related apps. While blindness is not necessarily something we can heal, it is something that we can combat with technology.
## What it does
This app gives blind individuals the ability to live life with the same ease as any other person. Using beacon software, we are able to provide users with navigational information in heavily populated areas such as subways or museums. The app uses a simple UI whose features are launched with different numeric swipes or taps; when the app opens, the interface is explained in its entirety verbally. One of the most useful parts of the app is a camera feature that allows users to snap a picture and instantly receive verbal cues describing their environment. The navigation side of the app is what we primarily focused on, but as a fail-safe the Lyft API was implemented so users can order a car ride out of a worst-case scenario.
## How we built it
## Challenges we ran into
We ran into several challenges during development. One was attempting to use the Alexa Voice Services API for Android. We wanted to create a skill to be used within the app; however, there was a lack of documentation at our disposal and minimal time to bring it to fruition. Rather than eliminating this feature altogether, we collaborated to develop a fully functional voice-command system that lets users call a Lyft to their location through the phone rather than the Alexa.
Another issue we encountered was in dealing with the beacons. In a large area like a realistic public space, such as a subway station, the beacons would be placed far enough apart to be individually recognized. In our confined demo space, however, beacon detection overlapped, causing the user to receive multiple different directions simultaneously. Rather than using physical beacons, we leveraged a second mobile application that allows us to create beacons around us with an Android device.
## Accomplishments that we're proud of
As always, we are a team of students who strive to learn something new at every hackathon we attend. We chose to build an ambitious series of applications within a short and concentrated time frame, and the fact that we were successful in making our idea come to life is what we are the most proud of. Within our application, we worked around as many obstacles that came our way as possible. When we found out that Amazon Alexa wouldn't be compatible with Android, it served as a minor setback to our plan, but we quickly brainstormed a new idea.
Additionally, we were able to develop a fully functional beacon navigation system with built in voice prompts. We managed to develop a UI that is almost entirely nonvisual, rather used audio as our only interface. Given that our target user is blind, we had a lot of difficulty in developing this kind of UI because while we are adapted to visual cues and the luxury of knowing where to tap buttons on our phone screens, the visually impaired aren't. We had to keep this in mind throughout our entire development process, and so voice recognition and tap sequences became a primary focus. Reaching out of our own comfort zones to develop an app for a unique user was another challenge we successfully overcame.
## What's next for Lantern
With a passion for improving health and creating easier accessibility for those with disabilities, we plan to continue working on this project and building off of it. The first thing we want to recognize is how easily adaptable the beacon system is. In this project we focused on the navigation of subway systems: knowing how many steps down to the platform, when they've reached the safe distance away from the train, and when the train is approaching. This idea could easily be brought to malls, museums, dorm rooms, etc. Anywhere that could provide a concern for the blind could benefit from adapting our beacon system to their location.
The second future project we plan to work on is a smart walking stick that uses sensors and visual recognition to detect and announce what elements are ahead, what could potentially be in the user's way, what their surroundings look like, and provide better feedback to the user to assure they don't get misguided or lose their way.
|
The Book Reading Bot (brb) programmatically flips through physical books, and using TTS reads the pages aloud. There are also options to download the pdf or audiobook.
I read an article on [The Spectator](http://columbiaspectator.com/) how some low-income students cannot afford textbooks, and actually spend time at the library manually scanning the books on their phones. I realized this was a perfect opportunity for technology to help people and eliminate repetitive tasks. All you do is click start on the web app and the software and hardware do the rest!
Another use case is for young children who do not know how to read yet. Using brb, they can read Dr. Seuss alone! As kids nowadays spend too much time on the television, I hope this might lure kids back to children books.
At a high level, the web app (Bootstrap) sends an image to a Flask server, which runs OCR and TTS.
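The core of that server is a short OCR → TTS pipeline. The sketch below is a minimal rendering of the idea with the OCR and TTS engines passed in as callables (the real app might wire in e.g. pytesseract and a TTS service); the function name and signature are our own assumptions, not brb's actual code.

```python
# Minimal sketch of brb's per-page pipeline. `ocr` takes image bytes and
# returns text; `tts` takes text and returns synthesized audio bytes.
def read_page(image_bytes: bytes, ocr, tts) -> tuple[str, bytes]:
    """OCR a page image into text, then synthesize audio for it."""
    text = ocr(image_bytes).strip()
    if not text:
        return ("", b"")  # blank page: nothing to read aloud
    return (text, tts(text))
```

A Flask route would then accept the uploaded photo, call `read_page`, and return the text (for the PDF) and the audio clip (for the audiobook).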
|
## Inspiration
The post-COVID era has increased the number of in-person events and the need for public speaking. Yet many individuals are anxious about publicly articulating their ideas, whether in a class presentation, a technical workshop, or preparation for their next interview. It is often difficult for audience members to catch the presenter's true intent, so key factors such as tone of voice, verbal excitement and engagement, and physical body language can make or break a presentation.
A few weeks ago during our first project meeting, we were responsible for leading the meeting and were overwhelmed with anxiety. Despite knowing the content of the presentation and having done projects for a while, we understood the impact that a single below-par presentation could have. To the audience, you may look unprepared and unprofessional, despite knowing the material and simply being nervous. Regardless of their intentions, this can create a bad taste in the audience's mouths.
As a result, we wanted to create a judgment-free platform to help presenters understand how an audience might perceive their presentation. By creating Speech Master, we provide an opportunity for presenters to practice without facing a real audience while receiving real-time feedback.
## Purpose
Speech Master provides a platform for practice presentations with real-time feedback that captures details of your body language and verbal expression. In addition, presenters can invite real audience members to the practice session, who can provide real-time feedback the presenter can use to improve.
Presentations are recorded and saved so presenters can go back and review the feedback from the ML models as well as from live audiences. A user-friendly dashboard cleanly organizes their presentations for review before upcoming events.
After each practice presentation, the data aggregated during the recording is processed to generate a final report. The report includes the most common emotions expressed verbally as well as moments when the presenter's physical body language could be improved. Timestamps are saved to show the presenter when each alert arose, and the video playback helps reveal what might have caused it.
## Tech Stack
We built the web application using [Next.js v14](https://nextjs.org), a React-based framework that seamlessly integrates backend and frontend development, and deployed it on [Vercel](https://vercel.com), the parent company behind Next.js. We designed the website in [Figma](https://www.figma.com/) and styled it with [TailwindCSS](https://tailwindcss.com), which streamlines styling by letting developers put styles directly into the markup without extra files. Code formatting and linting were handled by [Prettier](https://prettier.io/) and [ESLint](https://eslint.org/), run on every commit by pre-commit hooks configured with [Husky](https://typicode.github.io/husky/).
[Hume AI](https://hume.ai) provides the [Speech Prosody](https://hume.ai/products/speech-prosody-model/) model with a streaming API enabled through native WebSockets allowing us to provide emotional analysis in near real-time to a presenter. The analysis would aid the presenter in depicting the various emotions with regard to tune, rhythm, and timbre.
Google and [TensorFlow](https://www.tensorflow.org) provide the [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:%7E:text=MoveNet%20is%20an%20ultra%20fast,17%20keypoints%20of%20a%20body.) model, a large improvement over the prior [PoseNet](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) model for real-time pose detection. MoveNet is an ultra-fast and accurate model capable of detecting 17 keypoints of a body at 30+ FPS on modern devices.
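On top of MoveNet's 17 keypoints, a body-language check can be a simple geometric rule. The sketch below (a Python rendering of the idea; the app itself runs in JavaScript) flags uneven shoulders, using MoveNet's documented keypoint order where index 5 is the left shoulder and 6 the right. The thresholds are our own assumptions.

```python
# MoveNet outputs 17 keypoints as (y, x, score) in normalized image
# coordinates; indices 5 and 6 are the left and right shoulders.
LEFT_SHOULDER, RIGHT_SHOULDER = 5, 6

def shoulders_uneven(keypoints, max_tilt: float = 0.05,
                     min_score: float = 0.3) -> bool:
    """Flag a slouch/tilt alert when the shoulders' heights differ too much."""
    ly, _, lscore = keypoints[LEFT_SHOULDER]
    ry, _, rscore = keypoints[RIGHT_SHOULDER]
    if lscore < min_score or rscore < min_score:
        return False  # not confident enough in either detection to alert
    return abs(ly - ry) > max_tilt
```

Running this per frame and recording timestamps where it fires is enough to populate the "body language could be improved" section of the final report.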
To handle authentication, we used [Next Auth](https://next-auth.js.org) to sign in with Google hooked up to a [Prisma Adapter](https://authjs.dev/reference/adapter/prisma) to interface with [CockroachDB](https://www.cockroachlabs.com), allowing us to maintain user sessions across the web app. [Cloudinary](https://cloudinary.com), an image and video management system, was used to store and retrieve videos. [Socket.io](https://socket.io) was used to interface with Websockets to enable the messaging feature to allow audience members to provide feedback to the presenter while simultaneously streaming video and audio. We utilized various services within Git and Github to host our source code, run continuous integration via [Github Actions](https://github.com/shahdivyank/speechmaster/actions), make [pull requests](https://github.com/shahdivyank/speechmaster/pulls), and keep track of [issues](https://github.com/shahdivyank/speechmaster/issues) and [projects](https://github.com/users/shahdivyank/projects/1).
## Challenges
It was our first time working with Hume AI and a streaming API. We had experience with traditional REST APIs which are used for the Hume AI batch API calls, but the streaming API was more advantageous to provide real-time analysis. Instead of an HTTP client such as Axios, it required creating our own WebSockets client and calling the API endpoint from there. It was also a hurdle to capture and save the correct audio format to be able to call the API while also syncing audio with the webcam input.
We also worked with TensorFlow for the first time, an end-to-end machine learning platform. As a result, we faced many hurdles when trying to set up TensorFlow and get it running in a React environment. Most of the documentation uses Python SDKs or vanilla HTML/CSS/JS, which did not fit our stack, and converting the vanilla JS to React proved difficult due to the complexities of execution order and React's useEffect and useState hooks. Eventually we found a working solution, though it can still be improved for better performance and fewer bugs.
We originally wanted to use the Youtube API for video management where users would be able to post and retrieve videos from their personal accounts. Next Auth and YouTube did not originally agree in terms of available scopes and permissions, but once resolved, more issues arose. We were unable to find documentation regarding a Node.js SDK and eventually even reached our quota. As a result, we decided to drop YouTube as it did not provide a feasible solution and found Cloudinary.
## Accomplishments
We are proud of being able to incorporate machine learning into our application for a meaningful purpose. We did not want to reinvent the wheel by creating our own models, but rather use existing and incredibly powerful models to create new solutions. Although we did not hit all the milestones that we were hoping to achieve, we are still proud of the application we were able to make and deploy in such a short amount of time.
Most notably, we are proud of our Hume AI and Tensorflow integrations that took our application to the next level. Those 2 features took the most time, but they were also the most rewarding as in the end, we got to see real-time updates of our emotional and physical states. We are proud of being able to run the application and get feedback in real-time, which gives small cues to the presenter on what to improve without risking distracting the presenter completely.
## What we learned
Each of the developers learned something valuable as each of us worked with a new technology that we did not know previously. Notably, Prisma and its integration with CockroachDB and its ability to make sessions and general usage simple and user-friendly. Interfacing with CockroachDB barely had problems and was a powerful tool to work with.
We also expanded our knowledge with WebSockets, both native and Socket.io. Our prior experience was more rudimentary, but building upon that knowledge showed us new powers that WebSockets have both when used internally with the application and with external APIs and how they can introduce real-time analysis.
## Future of Speech Master
The first step for Speech Master will be to shrink the codebase. Currently, there is tons of potential for components to be created and reused. Structuring the code to be more strict and robust will ensure that when adding new features the codebase will be readable, deployable, and functional. The next priority will be responsiveness, due to the lack of time many components appear strangely on different devices throwing off the UI and potentially making the application unusable.
Once the current codebase is restructured, then we would be able to focus on optimization primarily on the machine learning models and audio/visual. Currently, there are multiple instances of audio and visual that are being used to show webcam footage, stream footage to other viewers, and sent to HumeAI for analysis. By reducing the number of streams, we should expect to see significant performance improvements with which we can upgrade our audio/visual streaming to use something more appropriate and robust.
In terms of new features, Speech Master would benefit greatly from additional forms of audio analysis such as speed and volume. Different presentations and environments require different talking speeds and volumes of speech required. Given some initial parameters, Speech Master should hopefully be able to reflect on those measures. In addition, having transcriptions that can be analyzed for vocabulary and speech, ensuring that appropriate language is used for a given target audience would drastically improve the way a presenter could prepare for a presentation.
|
## Inspiration ✨
Seeing friends' lives being ruined through **unhealthy** attachment to video games. Struggling with regulating your emotions properly is one of the **biggest** negative effects of video games.
## What it does 🍎
YourHP is a webapp/discord bot designed to improve the mental health of gamers. By using ML and AI, when specific emotion spikes are detected, voice recordings are queued *accordingly*. When the sensor detects anger, calming reassurance is played. When happy, encouragement is given, to keep it up, etc.
The discord bot is an additional fun feature that sends messages with the same intention to improve mental health. It sends advice, motivation, and gifs when commands are sent by users.
## How we built it 🔧
Our entire web app is made using JavaScript, CSS, and HTML. For facial emotion detection, we used face-api.js, a JavaScript library built on the TensorFlow.js API. Emotions are detected from patterns in the face such as eyebrow direction, mouth shape, and head tilt; we used each emotion's probability value to determine its level and played voice lines accordingly.
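The probability-to-voice-line mapping can be expressed compactly. This is a sketch of the idea in Python (the app itself does this in JavaScript with face-api.js output); the clip names, the threshold, and the function name are all our own assumptions.

```python
# Illustrative sketch: pick the strongest detected emotion from a
# probability map and queue the matching voice clip, if any.
CLIPS = {                              # hypothetical clip filenames
    "angry": "calming_reassurance.mp3",
    "happy": "encouragement.mp3",
    "sad": "pick_me_up.mp3",
}

def clip_for_emotions(probs: dict, threshold: float = 0.7):
    """Return the clip for the dominant emotion, or None below threshold."""
    emotion, p = max(probs.items(), key=lambda kv: kv[1])
    if p < threshold or emotion not in CLIPS:
        return None                    # weak or unhandled emotion: stay quiet
    return CLIPS[emotion]
```

Keeping the threshold fairly high avoids interrupting the gamer on every minor flicker of expression.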
The timer is a simple build that alerts users when they should take breaks from gaming and sends sound clips when the timer is up. It uses Javascript, CSS, and HTML.
## Challenges we ran into 🚧
Capturing images in JavaScript, making the Discord bot, and hosting on GitHub Pages were all challenges we faced. We kept thinking of more ideas as we built the original project, which led to time limitations, and we were not able to produce some of the more unique features for our web app. The project was also difficult because we were fairly new to many of the tools we used: before this hackathon, we didn't know much about TensorFlow, domain names, or Discord bots.
## Accomplishments that we're proud of 🏆
We're proud to have finished this product to the best of our abilities. We were able to make the most out of our circumstances and adapt to our skills when obstacles were faced. Despite being sleep deprived, we still managed to persevere and almost kept up with our planned schedule.
## What we learned 🧠
We learned many priceless lessons, about new technology, teamwork, and dedication. Most importantly, we learned about failure, and redirection. Throughout the last few days, we were humbled and pushed to our limits. Many of our ideas were ambitious and unsuccessful, allowing us to be redirected into new doors and opening our minds to other possibilities that worked better.
## Future ⏭️
YourHP will continue to develop as we search for new ways to combat mental health issues caused by video games. Technological improvements to our systems, such as speech-to-text, could also greatly raise the efficiency of our product and bring us closer to our goals!
|
## Inspiration
I have always been intrigued by the methods in which people overcome disabilities. I keep up with many disabled content creators, and I am constantly inclined to create something whenever they mention an obstacle in their daily lives.
I've been trying to learn ASL for many years, but all I can sign is "My name is Denise." There must be some other way I can communicate with the hard-of-hearing. Although there have been a few developments over the years regarding this topic, there hasn't been any widely available sign language interpreters which is where Diallogue comes in.
## What is Diallogue?
Diallogue is the easiest way to communicate with someone who is deaf. Download the app, hold your phone up to the signing person, and receive a text or speech response, and vice versa.
## How we built it
We used Microsoft's Azure Custom Vision API to train our own model to recognize numerical American Sign Language. With Python and Kivy we created the current user interface and processor.
## Challenges we ran into
This was our first time using cloud services of any kind, so implementing them into our code proved a bit difficult. However, through some troubleshooting and mentors' help, we were able to connect our Custom Vision trained model and the rest of our code.
## Accomplishments that we're proud of
Being able to dive into an unfamiliar framework! You'll never learn anything if you are not uncomfortable.
## What we learned
Keep going. Don't stop, even if your program doesn't work for a whole day, or you can't find any solutions to the error you're facing, or you've just run out of steam. It's always worth it to follow through. Plus, we learned two new frameworks; the Custom Vision API and Kivy!
## What's next for Diallogue
As this app is only a proof of concept, we hope to be able to interpret sign language more accurately and fluidly without needing to capture each sign, as more complex phrases tend to flow from one sign to the next. We would also like to be able to reverse the process to translate spoken English into sign language. Furthermore, we would like to improve the UI design and implementation with Python, as well as expand to different sign languages to support global signing.
|
## Inspiration
Machine learning is changing the world on a daily basis: cars that drive themselves, AI capable of beating humans in our best games, and, perhaps most importantly, predicting and classifying disease (e.g. strokes, seizures, or even cancer). Its possibilities are nothing but promising.
That being said, machine learning is like approaching the face of a rock climbing wall for the first time: it looks too high and technical, but, like climbing, it is entirely intuitive to us. Between multiple libraries in different languages with countless parameters, there exists a discouraging learning curve. We frequently meet people with brilliant ideas to solve real-life problems with the power of machine learning, but few have the technical background to do so. This is why we built ML4dummies.
## What it does
ML4dummies is a responsive, intuitive, and autonomous web-application that allows anyone to upload their own dataset in order to "train" a model for analysis, model fitting, and prediction.
Here are just a few of the **autonomous** features ML4dummies offers:
* Comprehensive analysis of your data
* Interactive and insightful data visualizations
* Hyperparameter selection
* Model configuration file download for prediction and analysis
All ML4dummies asks of our users is to select the features and target on which to fit your machine learning model!
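To make the "pick features and a target, get a fitted model" loop concrete, here is a toy sketch in plain Python: a one-feature least-squares fit over rows of user data. The real back end would use a proper ML library; the function name and row format are our own assumptions.

```python
# Toy sketch of the core ML4dummies loop: the user names a feature column
# and a target column, and the back end fits a model on those columns.
# Here: ordinary least squares for a single feature, in plain Python.
def fit_line(rows, feature, target):
    xs = [r[feature] for r in rows]
    ys = [r[target] for r in rows]
    n = len(rows)
    mx, my = sum(xs) / n, sum(ys) / n
    # Closed-form slope/intercept for y = slope * x + intercept.
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept
```

Everything else the app offers (data analysis, visualizations, hyperparameter selection) wraps around this fit-then-predict core.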
## How we built it
Needless to say with love... our application's back-end, responsible for training the models and computing predictions, runs on Flask, a lightweight framework that allows our machine learning system to communicate (through efficient HTTP requests) with our beautiful front-end, responsible for generating insightful visuals of the responses from the back-end.
## Challenges we ran into
As tech savvy students who only recently came to learn to harness the power of machine learning, we needed to reach a consensus on the easiest and most powerful way one can use machine learning for their unique data sets. The biggest challenge facing the technical side of the puzzle was creating the communication system through which the back-end and front-end could effectively transfer data regarding models and visualization data.
## Accomplishments that we're proud of
In under 36 hours, Team ML4dummies built a general-purpose back-end server capable of analyzing arbitrary data sets, controllable via a responsive and easy-to-use webpage. It would be one thing for this to be done by only 4 individuals, but the fact that we were putting our trust in people who began as complete strangers is really something else.
## What we learned
While thinking of notable use cases, we were astonished to consider how many different domains machine learning could play an overwhelming role. These include just about anything medical or business related, educational psychology, industrial psychology, churn prediction, and march madness brackets, just to name a few. The potential was perhaps what inspired us to pursue ML4dummies. Finally, we learned to work efficiently in a team to communicate our ideas, delegate responsibilities, and eventually integrate our components to complete the project.
## What's next for ML4dummies
After PennApps, we plan to implement features to further simplify machine learning and optimize its effectiveness. In the long run, we plan to refactor the project so that the open-source community can contribute to improve the application. Our hope is that ML4dummies will make machine learning accessible to the masses, and thus enable more problem solvers, doing what they do best- solve problems (duh?), data set by data set.
Here's what's in the works for ML4dummies:
* More ML algorithms
* Autonomous hyperparameter optimization
* Batch predictions
* And much more... or at least that's what I *predict*
|
## Inspiration
We wanted to build a technical app that is actually useful. Scott Forestall's talk at the opening ceremony really spoke to each of us, and we decided then to create something that would not only show off our technological skill but also actually be useful. Going to the doctor is inconvenient and rarely immediate, and a lot of the time it ends up being a false alarm. We wanted to remove this inefficiency to make everyone's lives easier and healthy living more convenient. We did a lot of research on health-related data sets and found a lot of data on different skin diseases, which made it easy for us to choose to build a model using this data that would allow users to self-diagnose skin problems.
## What it does
Our ML model has been trained on hundreds of samples of diseased skin to identify a wide variety of malignant and benign skin diseases. Our mobile app lets you take a picture of a patch of skin that concerns you, runs it through our model, and tells you how the model classified your picture. Finally, the picture also gets sent to a doctor with our model's results, and the doctor can override that decision. This new classification is then rerun through our model to reinforce correct outputs and penalize wrong ones, i.e., adding a reinforcement learning component to our model as well.
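The doctor-override loop above can be sketched as a simple data-flow: every reviewed image goes back into the training pool with its final label, marked as overridden when the doctor disagreed. This is our own simplified illustration; the names and record format are assumptions, not AEye's actual code.

```python
# Simplified sketch of AEye's feedback loop: doctor-corrected examples are
# appended to the training pool so the next retraining run reinforces the
# right answers and penalizes the wrong ones.
def apply_doctor_feedback(training_pool, image_id, model_label, doctor_label):
    """Record the final label; doctor_label=None means the doctor agreed."""
    final = doctor_label if doctor_label is not None else model_label
    training_pool.append({
        "image": image_id,
        "label": final,
        "overridden": final != model_label,  # flags disagreements for retraining
    })
    return final
```

The fraction of `overridden` records also gives a rough running estimate of the model's real-world error rate.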
## How we built it
We built the ML model in IBM Watson from public skin disease data from ISIC (International Skin Imaging Collaboration). We have a platform-independent mobile app built in React Native using Expo that interacts with our ML model through IBM Watson's API. Additionally, we store all of our data in Google Firebase's cloud, where doctors have access to them to correct the model's output if needed.
## Challenges we ran into
Watson had a lot of limitations in terms of data loading and training, so it had to be done in extremely small batches, and it prevented us from utilizing all the data we had available. Additionally, all of us were new to React Native, so there was a steep learning curve in implementing our mobile app.
## Accomplishments that we're proud of
Each of us learned a new skill at this hackathon, which is the most important thing for us to take away from any event like this. Additionally, we came in wanting to implement an ML model, and we implemented one that is far more complex than we initially expected by using Watson.
## What we learned
Web frameworks are extremely complex with very similar frameworks being unable to talk to each other. Additionally, while REST APIs are extremely convenient and platform independent, they can be much harder to use than platform-specific SDKs.
## What's next for AEye
Our product is really a proof of concept right now. If possible, we would like to polish both the mobile and web interfaces and come up with a complete product for the general user. Additionally, as more users adopt our platform, our model will get more and more accurate through our reinforcement learning framework.
See a follow-up interview about the project/hackathon here! <https://blog.codingitforward.com/aeye-an-ai-model-to-detect-skin-diseases-252747c09679>
|
## Context on the issue
As a Ukrainian, from a very young age I was told narratives, to be taken as truth, about people that didn't look like me: people who weren't white and "European" and didn't speak my language, or so I had to assume. Such narratives, told here and there, were reinforced over my school years, silencing the histories of marginalized groups such as ethnic Roma, Hungarian, and Jewish citizens. The history presented to me was built upon prior textbooks, driven by sociopolitical forces, which in turn perpetuated racist views and othering, contributing to larger issues of segregation and economic inequality that persist today.
Coming to TreeHacks, I wanted to challenge the views we have formed through our educational curriculum in history, geography, and related fields. History is not a mere collection of facts and is open to interpretation by historians, though it is often taken to be objective and unchallenged. The problem is that most high school textbooks are backed by other textbooks from the 17th to 19th centuries, formed as a series of the same narratives, while missing pieces of evidence remained stored, or hidden, all over the country. For instance, there is no mention of the majority of the Roma population vanishing in the concentration camps of 1942, yet environmental racism persists in the cities where Roma reside, with Roma having the least access to healthcare and education and the highest unemployment rates. While such narratives exist, they are by no means supported with data and evidence, and cannot be justified.
## What it does
With observational data now available through history museums, digital archives, private and public collections, and urban and environmental datasets, and accessible to all, the solution I present is History Now: Public, Applied, Analyzed, an open-source tool for analyzing historical observational data, aimed at historians and researchers in the field, including students and professors.
History Now equips researchers with statistical tools for causal inference, implemented in the website's backend in R, to establish the causes of societal changes, identify trends over time, and critically analyze data when proposing improvements to the history curriculum. The methods implement the Rubin Causal Model and include: (1) analyzing observational data as randomized experiments through statistical matching with a genetic algorithm (genetic matching), which lets researchers compare how, for instance, a policy affected the general population versus marginalized groups; (2) examining sensitivity to hidden bias (endogeneity), which lets researchers test the robustness of their models by seeing how a given level of bias would alter the strength of a causal inference; (3) forming synthetic controls to evaluate the effect of an intervention in historical comparative case studies; and (4) regression discontinuity, linear regression and classification, and quantile regression.
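To make the matching idea concrete, here is a toy nearest-neighbor matching estimator in Python. The tool itself implements genetic matching and the other methods in R; the covariates and outcomes below are invented for illustration:

```python
# Toy covariate-matching estimator: pair each treated unit with the closest
# control unit and average the outcome differences (a simplified stand-in
# for genetic matching, which instead searches for optimal covariate weights).

def matched_effect(treated, control):
    """Estimate the average treatment effect on the treated.

    treated, control: lists of (covariate, outcome) pairs.
    """
    diffs = []
    for x_t, y_t in treated:
        # nearest control unit by covariate distance
        x_c, y_c = min(control, key=lambda c: abs(c[0] - x_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

treated = [(1.0, 5.0), (2.0, 7.0)]               # (covariate, outcome)
control = [(1.1, 4.0), (2.2, 5.0), (9.0, 0.0)]   # the distant unit never matches
print(matched_effect(treated, control))          # (5-4 + 7-5) / 2 = 1.5
```

Genetic matching generalizes this by evolving a weight matrix over covariates to maximize balance between the matched groups.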
Given these tools, researchers and other users can explore and analyze data on the website individually or collaboratively with a team. They can take a certified dataset from the available archives, load it into the tool, and choose variables of interest and causal inference methods. They can test hypotheses, mostly historical and historiographical arguments, to produce statistical research papers. A peer-review feature for study designs will let researchers refine their hypotheses, test results, and inferences.
The final, peer-reviewed deliverable can support governmental petitions for changing the history curriculum materials used in schools. Recently analyzed historical data can also drive policy proposals on allocating resources and improving access to basic needs and rights, like healthcare, housing, and education, for marginalized minority groups in Ukraine.
## How we built it
I wrote the code for all statistical methods in RStudio; the prototype of the project's website is being created in Figma.
## Challenges we ran into
We had no web development specialist, so I coded part of the statistical backend myself and prototyped the interface in Figma.
## Accomplishments that we're proud of
Analyzing the problem and selecting the statistical methods best suited to the context of historical scholarship.
## What we learned
Context is crucial, and you have to put yourself in a historian's or researcher's shoes to effectively analyze the problem and judge the suitability of each element of your solution.
## What's next for History Now: Public, Applied, Analyzed
The best is yet to come: finding a team and doing more historical and statistical research on the topic. The founder (Andriy, <https://www.linkedin.com/in/andriy-kashyrskyy/>) is actively seeking summer opportunities to gain experience in data analytics, which will let him apply that knowledge to a side project like this.
|
partial
|
## Inspiration
Taking micro-loans is often a **painfully slow** and **costly process** due to administrative procedures at financial institutions and conversion fees for international lenders.
In addition, many Canadians today don't have a solid understanding of the implications of borrowing from banks and other financial institutions. **Concepts like debt and interest accumulation are extremely important** when managing one's finances, and **it's often too late** by the time Canadians understand how interest can accumulate.
## What it does
*(Not So)* Lone Shark is a peer-to-peer micro-loaning platform that uses Bitcoin, both for experimenting with interest accumulation and for making passive income. Users can request loans of up to $100 CAD and can also lend money to other users, collecting interest compounded hourly after the loan is made. This offers an incentive for lending to strangers: the ability to generate passive income with very little risk.
The benefit of *(Not So)* Lone Shark is that its use of a decentralized currency allows for zero conversion fees (other than a fixed ~5-cent transfer fee), and transfers can be made instantly online.
In addition, whenever a user requests a loan, the app shows a timeline of how interest will accumulate per hour, giving the user first-hand experience with interest accumulation, loans, and time budgeting.
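A minimal sketch of the hourly compounding behind that timeline; the 0.01%-per-hour rate is an assumption for illustration, not the platform's actual rate:

```python
# Hourly compound interest on a micro-loan (illustrative rate).

def hourly_balance(principal, hourly_rate, hours):
    """Balance owed after compounding interest every hour."""
    return principal * (1 + hourly_rate) ** hours

loan = 100.00    # maximum loan size in CAD
rate = 0.0001    # 0.01% per hour, assumed for illustration
for h in (0, 24, 24 * 7):
    print(f"after {h:>3} hours: ${hourly_balance(loan, rate, h):.2f}")
```

Even a tiny hourly rate compounds visibly over a week, which is exactly the intuition the timeline is meant to build.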
## How we built it
*(Not So)* Lone Shark is built using JavaScript across the stack. React powers the front-end, and Meteor/Node.js handles all back-end traffic and calculations. SSL encryption ensures that all data is secure, which is important when managing personal finances.
The project is also hosted on Meteor's Galaxy Platform for fast computation speeds and reliability.
## Challenges we ran into
Blockchain technology is quite complex; transactions and users' wallets ultimately remained external to our application, which compromises the original ideal of completeness that we sought. We had looked into different implementations of Ethereum, but they were not much help.
## Accomplishments that we're proud of
Within a timespan of 36 hours, we were able to create a project that tackles advanced concepts like blockchain and micro-financing. It was an ambitious project, and we take pride in having been able to undertake it.
## What we learned
Meteor and React have different methodologies that led to conflicts in implementation; as a result, building our application was a rough road, which could have been avoided had we used BlazeJS instead of ReactJS.
## What's next for Not So Lone Shark
Implementing blockchain technology into the application itself, whether by creating our very own cryptocurrency or by utilizing an existing platform like Ethereum/Bitcoin, and fully incorporating those transactions into our program to allow for better transaction security.
|
## Inspiration
Average student debt in Canada is $30k. We want to give new grads the cheapest credit possible.
## What it does
A two-sided web app that first lets new grads automate all future monthly student loan payments. Banks then use a dashboard that allows them to offer the new grad a cheaper loan. Once a bank makes an offer, the new grad is alerted and the offer is pushed to their dashboard in real time. Winston also visualizes the cost savings they will receive if they take the bank's loan.
## How we built it
We used the latest and greatest web technologies, such as MeteorJS and ReactJS.
## Challenges we ran into
A major pivot forced us to abandon our initial idea.
## Accomplishments that we're proud of
Getting a fully functional product done in less than 15 hours.
## What we learned
We learned that students can finance their government-issued debt through cheaper bank credit; it's an easy way to save on interest costs. I also learned about product management and accessing bank data.
## What's next for Winston
We plan on launching the product and onboarding a loan officer at a national bank as a pilot.
|
#### Inspiration
The Division of Sleep Medicine at Harvard Medical School states that 250,000 drivers fall asleep at the wheel every day in the United States, and the CDC attributes roughly 6,000 fatal crashes per year to drowsy drivers. We members of the LifeLine team understand the situation: there is no way we can stop these commuters from driving home after a long day's work. So let us help them stay alert and awake!
#### What is LifeLine?
You're probably thinking "lifeline," like the calls to dad or mom they give out on "Who Wants to Be a Millionaire?" Or maybe you're thinking more literally: "a rope or line used for life-saving." In both cases, you are correct! The wearable LifeLine system connects with an Android phone and keeps the user safe and awake on the road by connecting them to a friend.
#### Technologies
Our prototype consists of an Arduino with an accelerometer built into a headset, monitoring the driver for that well-known head dip of fatigue. The headset communicates with a Go server, which provides the user's Android application with the accelerometer data over an HTTP connection. The Android app then processes the x/y tilt data to monitor the driver.
#### What it does
The user sets an emergency contact upon entry. Once in "drive" mode, the app displays the x and y tilt of the driver's head, mapping it to an animated head that tilts to match. Upon sensing the first few head nods, the LifeLine app provides auditory feedback beeps to keep the driver alert. If the driver's condition does not improve, it sends a text to the pre-entered contact suggesting that the user is drowsy driving and that they should reach out. If the driver's state worsens, it summons the LifeLine and calls the emergency contact.
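The escalation above can be sketched as a simple threshold mapping; the nod counts used here are assumed values, not the app's tuned thresholds:

```python
# Illustrative escalation ladder: beep, then text, then call.
# The nod-count thresholds are assumptions for the sketch.

def respond_to_nods(nod_count):
    """Map the number of detected head nods to an intervention level."""
    if nod_count >= 6:
        return "call_contact"
    if nod_count >= 4:
        return "text_contact"
    if nod_count >= 2:
        return "beep"
    return "monitor"

for nods in (0, 2, 4, 6):
    print(nods, "->", respond_to_nods(nods))
```

In the real system the nod count would come from the accelerometer's x/y tilt stream rather than a plain integer.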
#### Why call a friend?
Studies find conversation to be a great stimulus of attentiveness. Given that a large number of drivers are alone on the road, the resulting phone connections could save lives.
#### Challenges we ran into
Hardware is not always fun for software engineers...
#### What's next for LifeLine
* Wireless capabilities
* A more stylish, comfortable design
* Saving data for user review
* GPS feedback on where the driver is when dozing off (partly completed already)
**Thanks for reading. Hope to see you on the demo floor!** - Liam
|
partial
|
## Inspiration
We were inspired by the genetic-algorithm-based Super Mario AI known as MarI/O made by SethBling. MarI/O uses genetic algorithms to teach a neural net to beat levels of Super Mario by maximizing an objective function. Inspired by this, we wanted to create a game that maximizes an objective function using genetic algorithms in order to present the player with a challenge.
## What it does
A game designed to take EEG input, parse brain-wave and stress-related data, and produce a machine-learned environment for the user to interact with. The game generates obstacles in the form of hurdles and walls, and the user controls their speed and position using real-time data streamed from the Muse headband.
## How we built it
We utilized the Muse API and research tools to construct a script in Java that connected to the Muse port via TCP.
We then wrote our graphics and movement interfaces in Java and developed a machine learning neural network that constructs each game stage, generating obstacle models based on previous iterations to increase the difficulty level.
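The generation-over-generation tuning can be pictured as a tiny genetic algorithm: keep the obstacle parameters that score best and mutate them. Everything here (the fitness function, mutation scale, and population) is an invented stand-in for our actual model, sketched in Python rather than Java:

```python
import random

# Toy genetic algorithm: survivors of each generation spawn mutated children.

def evolve(population, fitness, generations=100, sigma=0.3, seed=0):
    """Evolve a list of scalar parameters toward higher fitness."""
    rng = random.Random(seed)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]   # keep the better half
        children = [x + rng.gauss(0, sigma) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Fitness peaks at x = 3; imagine x as an "ideal difficulty" parameter.
best = evolve([0.0, 1.0, 5.0, 6.0], lambda x: -(x - 3) ** 2)
print(round(best, 2))
```

Because the best individual always survives, the score is monotone over generations, which is why this family of methods works well for level tuning.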
## Challenges we ran into
Parsing and converting the data feed from the Muse headband into a usable format was definitely a challenge that took us several hours to overcome. In addition, adjusting the parameters of our progressive machine learning to produce a non-repetitive but feasible set of obstacles was another major challenge.
## Accomplishments that we're proud of
Just having the game environment we produced, and being able to run through and interact with it, is rewarding on its own. Given that this game has the potential to relieve stress and produce positive user feedback and impact, we all feel tremendously proud of what we have built.
## What we learned
We learned a lot about breeding neural networks and about the novel and unique ways different forms of data can be utilized.
## What's next for iamhappy
We definitely want to up our game on the UI and design side. We could allow more user-adjusted parameters and settings to fine-tune individual preferences. In addition, we want to improve the visuals and design of each level and environment. With a more aesthetically appealing background, we can reach a higher mark in our objective of reducing user stress.
|
## Inspiration:
The inspiration for this project was finding a way to incentivize healthy activity. While the watch shows people data like steps taken and calories burned, that alone doesn't encourage many people to exercise. By making this app, we hope to turn exercise into a game that people look forward to rather than something they dread.
## What it does
Zepptchi is an app that gives the user their own virtual pet to take care of, similar to a Tamagotchi. The watch tracks the user's steps and rewards them with points depending on how much they walk. With these points, the user can buy food to nourish their pet, which incentivizes exercise. They can also spend points to customize their pet's appearance, further promoting healthy habits.
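A minimal sketch of the step-to-points economy; the exchange rate and food price are illustrative assumptions, not the app's actual values:

```python
# Illustrative step-to-points reward loop (rates are assumptions).

STEPS_PER_POINT = 100   # assumed exchange rate
FOOD_COST = 5           # assumed price of one food item in points

def points_earned(steps):
    """Points awarded for a day's step count."""
    return steps // STEPS_PER_POINT

def feed_pet(points, hunger):
    """Spend points on food; each item restores one hunger level."""
    meals = min(points // FOOD_COST, hunger)
    return points - meals * FOOD_COST, hunger - meals

points = points_earned(2350)       # 23 points
print(feed_pet(points, hunger=3))  # buys 3 meals -> (8, 0)
```

Tuning these two constants is what sets the difficulty of keeping the pet fed, and hence how much walking the game encourages.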
## How we built it
To build this project, we started by setting up the environment in the Huami OS simulator on a MacBook. This allowed us to test our code on a virtual watch before running it on a physical one. We used Visual Studio Code to write all of our code.
## Challenges we ran into
One of the main challenges we faced was setting up the environment to test the watch's capabilities. Out of the 4 of us, only one could successfully install it, which was a huge setback since we could only write code on one device. This was worsened by unreliable internet that kept us from collaborating through other means.
## Accomplishments that we're proud of
Our group was most proud of solving the issue where we couldn't get an image to display on the watch. We had been trying for a couple of hours to no avail but we finally found out that it was due to the size of the image. We are proud of this because fixing it showed that our work hadn't been for naught and we got to see our creation working right in front of us on a mobile device. On top of this, this is the first hackathon any of us ever attended so we are extremely proud of coming together and creating something potentially life-changing in such a short time.
## What we learned
One thing we learned is how to collaborate on projects with other people, especially when we couldn't all code simultaneously. We learned how to communicate with whoever *was* coding by asking questions and making observations to get to the right solution. This was much different from what we were used to, since school assignments typically have one person writing code for the entire project. We also became fairly well-acquainted with JavaScript, as none of us knew it (at least not that well) coming into the hackathon.
## What's next for Zepptchi
The next step for Zepptchi is to include a variety of animals and creatures for the user to have as pets, along with any customization that might go with them. This is crucial for the longevity of the game, since people may no longer feel incentivized to exercise once they obtain the complete collection. Additionally, we can include challenges (such as burning x calories in 3 days) that give specific rewards, staving off the repetitive loop of walking steps and buying items. With this app, we aim to gamify a person's well-being so that their future can be one of happiness and health.
|
## 💭 Inspiration
Throughout our Zoom university journey, our team noticed that we often forget to unmute our mics when we talk, or forget to mute it when we don't want others to listen in. To combat this problem, we created speakCV, a desktop client that automatically mutes and unmutes your mic for you using computer vision to understand when you are talking.
## 💻 What it does
speakCV automatically unmutes a user when they are about to speak and mutes them when they have not spoken for a while. The user does not have to interact with the mute/unmute button, creating a more natural and fluid experience.
## 🔧 How we built it
The application was written in Python: scipy and dlib for the machine learning, pyvirtualcam to access live Zoom video, and Tkinter for the GUI. OBS was used to provide the program access to a live Zoom call through virtual video, and the webpage for the application was built using Bootstrap.
## ⚙️ Challenges we ran into
A large challenge we ran into was fine-tuning the mouth aspect ratio threshold for the model, which determined the model's sensitivity to mouth shape. A low aspect ratio made the application unable to detect when a person started speaking, while a high aspect ratio made the application too sensitive to small movements. We found an acceptable value through trial and error.
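For context, a mouth aspect ratio of the kind we thresholded can be computed like this. The landmark coordinates and the 0.5 threshold below are illustrative, not our tuned values or dlib's actual 68-point indices:

```python
import math

# Sketch of a mouth-aspect-ratio (MAR) speaking check.
# Landmarks and threshold are illustrative assumptions.

def mouth_aspect_ratio(top, bottom, left, right):
    """Ratio of vertical mouth opening to horizontal mouth width."""
    vertical = math.dist(top, bottom)
    horizontal = math.dist(left, right)
    return vertical / horizontal

def is_speaking(mar, threshold=0.5):
    return mar > threshold

closed = mouth_aspect_ratio((0, 1), (0, 2), (-3, 1.5), (3, 1.5))  # ~0.17
open_ = mouth_aspect_ratio((0, 0), (0, 4), (-3, 2), (3, 2))       # ~0.67
print(is_speaking(closed), is_speaking(open_))  # False True
```

Raising the threshold demands a wider mouth opening before a frame counts as speech, which is the sensitivity trade-off described above.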
Another problem we encountered was lag: the application was unable to handle both the Tkinter event loop and the mouth shape analysis at the same time. We removed the lag by isolating each process into a separate thread.
## ⭐️ Accomplishments that we're proud of
We were proud to solve a problem involving a technology we use frequently in our daily lives. Coming up with a problem and finding a way to solve it was rewarding as well, especially integrating the different machine learning models, virtual video, and application together.
## 🧠 What we learned
* How to setup and use virtual environments in Anaconda to ensure the program can run locally without issues.
* Working with virtual video/audio to access the streams from our own program.
* GUI creation for Python applications with Tkinter.
## ❤️ What's next for speakCV.
* Improve the precision of the shape recognition model, by further adjusting the mouth aspect ratio or by tweaking the contour spots used in the algorithm for determining a user's mouth shape.
* Moving the application to the Zoom app marketplace by making the application with the Zoom SDK, which requires migrating the application to C++.
* Another option is to use the Zoom API and move the application onto the web.
|
partial
|
## Inspiration
We attended an AR workshop and thought it would be interesting to do an AR project with Unity. Initially, we hoped to make an educational tool for physics classrooms, but we later found that idea hard to implement, so we pivoted to making an AR game.
## What it does
LIDAR Marble Duel is a 1v1 game where players take turns using tilt controls to move a marble from a spawn position to a goal position. But watch out! The game takes the terrain of the physical world into account, so a physical obstacle in real life has a collision volume! After each round, players can place various kinds of virtual obstacles (for example, bumpers, spikes, and fans) to increase the difficulty. If the marble touches spikes three times, the player is out. The first player who fails to move the marble to the goal loses, and the opponent wins.
## How we built it
LIDAR Marble Duel is built with Unity.
## Challenges we ran into
We had difficulty manipulating quaternions while implementing tilt control. Also, we had only one iOS device with LIDAR, which made testing difficult.
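For the curious, the core quaternion operation we wrestled with, rotating a vector as q·v·q⁻¹ instead of composing Euler angles, looks like this in a standalone sketch. Our actual implementation used Unity's `Quaternion` type; this Python version is for illustration only:

```python
import math

# Standalone sketch of quaternion rotation (Hamilton convention, (w, x, y, z)).

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle radians via q * v * q^-1."""
    half = angle / 2
    q = (math.cos(half),) + tuple(math.sin(half) * c for c in axis)
    q_conj = (q[0], -q[1], -q[2], -q[3])  # inverse of a unit quaternion
    _, x, y, z = quat_mul(quat_mul(q, (0,) + tuple(v)), q_conj)
    return (x, y, z)

# Rotating +x by 90 degrees about +z yields +y, with no gimbal lock to worry about.
result = rotate((1, 0, 0), (0, 0, 1), math.pi / 2)
print(tuple(round(c, 6) for c in result))
```

Composing two rotations is just `quat_mul(q2, q1)`, which is what makes quaternions so much safer than stacking Euler angles.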
## Accomplishments that we're proud of
It's our first time dealing with AR, and we made a functional (and fun!) game!
## What we learned
Never trust Euler angles.
Aside from that, we learned the basics of AR development using Unity and how to deploy a build to an iOS device.
## What's next for LIDAR Marble Duel
We plan to add more features (e.g., more virtual obstacles) and improve the game's UI before submitting it to the App Store.
|
## Inspiration
We wanted to build a shooter that many friends could play together. We didn't want to settle for something that was just functional, so we added the craziest game mechanic we could think of to maximize the number of problems we would run into: a map that has no up or down, only forward. The aesthetic of the game is based on Minecraft (a game I admit I have never played).
## What it does
The game can host up to 5 players on a local network. Using the keyboard and the mouse on your computer, you can walk around an environment shaped like a giant cube covered in forest, and shoot bolts of energy at your friends. When you reach the threshold of the next plane of the cube, a simple command re-orients your character such that your gravity vector is perpendicular to the next plane, and you can move onwards. The last player standing wins.
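The re-orientation rule can be sketched in a few lines: when the player crosses onto an adjacent face of the cube, gravity simply becomes the negative of that face's outward normal. The face names below are invented labels for illustration; the game itself does this with Unity transforms:

```python
# Illustrative gravity re-orientation for a cube-shaped map.

FACE_NORMALS = {
    "top":    (0, 1, 0),  "bottom": (0, -1, 0),
    "north":  (0, 0, 1),  "south":  (0, 0, -1),
    "east":   (1, 0, 0),  "west":   (-1, 0, 0),
}

def gravity_for(face):
    """Gravity points into the face, i.e. opposite its outward normal."""
    nx, ny, nz = FACE_NORMALS[face]
    return (-nx, -ny, -nz)

print(gravity_for("top"), gravity_for("east"))  # (0, -1, 0) (-1, 0, 0)
```

The hard part in-engine is not picking the new vector but smoothly rotating the character to the new "up" without flailing, which is what the simpler code structure mentioned below solved.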
## How we built it
First we spent a few (many) hours learning the necessary skills. My teammate familiarized themself with a plethora of Unity functions in order to code the game mechanics we wanted. I'm a pretty decent 3D modeler, but I had never used Maya before and had never animated a bipedal character, so I spent a long while adjusting to Maya and learning how Unity's Mecanim animation system works. Once we had the basics, we worked on our respective elements: my teammate on the gravity transitions and the networking, and I on the character model and animations. Later we combined our work, built up the 3D environment, and kept adding features and debugging until the game was playable.
## Challenges we ran into
The gravity transitions were especially challenging. Among a panoply of other bugs that individually took hours to work through or around, the gravity transitions were not fully functional until more than a day into the project. We took a break from work, brainstormed, and came up with a simpler code structure to make the transition work. We were delighted when we walked all the way up and around the inside of our cube map for the first time without our character flailing and falling wildly.
## Accomplishments that we're proud of
Besides the motion capture for the animations and the textures for the model, we built a fully functional, multiplayer shooter with a complex, one-of-a-kind gameplay mechanic. It took 36 hours, and we are proud of going from start to finish without giving up.
## What we learned
Besides the myriad of new skills we picked up, we learned how valuable a hackathon can be. It is an educational experience nothing like a classroom. Nobody chooses what we are going to learn; we choose what we want to learn by chasing what we want to accomplish. By chasing something ambitious, we inevitably run into problems that force us to develop new methodologies and techniques. We realized that a hackathon is special because it's a constant cycle of progress, obstacles, learning, and progress. Progress stacks asymptotically towards a goal until time is up and it's time to show our stuff.
## What's next for Gravity First
The next feature we are dying to add is randomized terrain. We built the environment using prefabricated components that I made in Maya, arranged in what we thought was an interesting and challenging layout for gameplay. Next, we want every game to have a different, unpredictable six-sided map by randomly laying out the prefabs according to certain parameters.
|
Our team wanted to create a product that would address an area of social good and education. Specifically, with the 2020 presidential election coming up, we were inspired to create a solution to today's problematic political climate. It is no secret that young voters do not take advantage of their voting rights (especially in critical elections), and we wanted to devise a solution to the recurring dilemma of low voter engagement. And what better way to appeal to the youth than to make an app?
In terms of the UI/UX, our team was inspired by ...Tinder. We wanted to emulate its quality of user engagement in our own project to appeal to a younger demographic. In our web app, users swipe right or left on candidates' policy positions depending on whether they agree or disagree. In the end, users are "matched" with the candidate that best aligns with their beliefs. To keep the swiping free of bias, we hide the candidates' names and pictures.
We built this web app using HTML and CSS for the front-end and JavaScript for the back-end. While the basic structure and aesthetic design were created with HTML and CSS, the swiping feature and the matching process are implemented in JavaScript.
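The matching step can be sketched as counting right-swipes per candidate; the candidates and swipes below are invented, and the real implementation is in JavaScript rather than Python:

```python
from collections import Counter

# Toy matching step: tally right-swipes per candidate's policy cards.

def match_candidate(swipes):
    """swipes: list of (candidate, liked) pairs from anonymized policy cards."""
    scores = Counter()
    for candidate, liked in swipes:
        scores[candidate] += 1 if liked else 0
    return scores.most_common(1)[0][0]

swipes = [("A", True), ("B", False), ("A", True), ("B", True), ("C", False)]
print(match_candidate(swipes))  # "A", with 2 right-swipes
```

Because candidates are anonymized during swiping, the tally reflects policy agreement alone, which is the point of the design.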
We faced several challenges throughout the process. First, not everyone on our team was familiar with JavaScript, so it was difficult to work through issues in our code. We also had trouble with some of the more tedious styling components of our web app. Despite being pushed out of our comfort zones, however, we were able to create a viable prototype.
We learned how to collaborate as a team and to see failure as another vital part of the process. Our failures showed us what we like and dislike about our project, so we can build upon it in the future. We intend to make Know Your Vote 2.0 a mobile app for both iOS and Android devices. We even have ideas for incorporating hardware by implementing a joystick to perform the swiping motions.
|
losing
|
## Inspiration
The Amazon Echo is awesome: Amazon's AI, Alexa, provides an intuitive user experience with voice as the only input. We love working with new technologies, and this was a great way to learn more about Alexa's capabilities.
## What it does
The Golden Quest is a voice-only, interactive Choose Your Own Adventure game. You play as the main character, a hackathon member embarking on a desperate search. Alexa reads out the story and choices to the player. Its intuitiveness and ease of use means anyone can play -- all you need to do is say "Alexa, open the Golden Quest".
## How we built it
We used TypeScript and Amazon's alexa-sdk package for Node.js. To represent the branching paths of the story, we created a custom text format and wrote a parser to convert the text into story objects. The parser was later extended to auto-generate the schema files for Alexa.
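Our real parser is written in TypeScript and its format differs, but a hypothetical mini-format in the same spirit could be parsed like this (the layout and syntax below are invented for illustration, not our actual format):

```python
# Toy branching-story parser for an invented one-line-per-node format:
#   node_id | narration | choice->target, choice->target

def parse_story(text):
    """Convert the text format into a dict of story-node objects."""
    story = {}
    for line in text.strip().splitlines():
        node_id, narration, choices = (part.strip() for part in line.split("|"))
        options = {}
        for choice in choices.split(","):
            if "->" in choice:
                label, target = choice.split("->")
                options[label.strip()] = target.strip()
        story[node_id] = {"say": narration, "choices": options}
    return story

story = parse_story("""
start | You wake up at a hackathon. | code->win, sleep->lose
win   | You ship the Golden Quest!  |
lose  | You missed the deadline.    |
""")
print(story["start"]["choices"])  # {'code': 'win', 'sleep': 'lose'}
```

A structure like this also makes it easy to walk every node and emit the utterance/intent schema automatically, which is what our real parser ended up doing for Alexa.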
## Challenges we ran into
Once we deployed the game on the Echo, some story lines did not sound natural when spoken by Alexa, so we used the Amazon Alexa voice simulator to repeat the lines and edit them. Through feedback from students and mentors who play-tested, we refined the user experience and gameplay options. Improvements based on feedback included: wording choices that allowed for more evenly distributed game paths, a more generalizable character story, and expanded user input beyond basic gameplay, such as "Cancel", "Repeat", and "Start Over".
We were unable to find even basic documentation on using the Alexa SDK with Java, so we switched to Node.js for the game and studied Alexa Skill samples with SampleUtterances and IntentSchema files.
## Accomplishments that we're proud of
None of us had ever worked with the Amazon Echo, the Alexa AI, or Amazon Web Services (Lambda) prior to this hackathon. Getting a game up and running on hardware in less than 24 hours has been an amazing learning experience for us. Using feedback from the beta testers to refine the game, we were able to go beyond a minimum viable product and create a polished, debugged game.
## What we learned
We learned how to create, test and deploy an Alexa Skill with no prior experience. We learned how to work with the Alexa SDK, AWS lambda functions, and Alexa Developer to create a game. We gained experience with functional programming modelling, TypeScript and Node.js.
## What's next for The Golden Quest
The Golden Quest is a self-contained game, but we believe its potential is much greater. Alexa is an AI that learns. Our game creates a personal experience for the user, with voice as the only input; with machine learning, the game could become unique to each user. Sentiment-based player choices could inform the AI about preferred game paths and customize the game using existing information.
The Golden Quest can also be expanded to contain multiple storylines with educational opportunities. It can be simplified for younger players or made more complex for language learning and retention.
|
## Inspiration
We tried to figure out what kept us connected during the pandemic, other than the never-ending Zoom meetings or the occasional time spent together in class. Fundamentally, it all came down to our ability to simply speak, and once we started thinking about that, we couldn't stop.
## What it does
We created a web app that displays a sentence for the user to read; using Assembly AI's real-time word detection API, we stream what the user is reading while providing feedback on their accuracy. Using a comprehensive, profanity-free dictionary, we randomize the words shown to the user so that each sentence is challenging in a different way.
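The sentence randomization can be sketched like this; the word list is a tiny stand-in for the profanity-free dictionary, and the real app does this in JavaScript:

```python
import random

# Toy practice-sentence generator over a profanity-free word list.

WORDS = ["river", "bright", "gather", "window", "pebble", "quiet", "marble"]

def practice_sentence(n_words, seed=None):
    """Build a sentence of n random dictionary words."""
    rng = random.Random(seed)
    words = [rng.choice(WORDS) for _ in range(n_words)]
    return " ".join(words).capitalize() + "."

print(practice_sentence(5, seed=7))
```

Each generated sentence is then compared word-by-word against the stream of words the speech API detects.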
## How we built it
In our design process, we started with the idea. After settling on it, we researched the best ways to implement the features we wanted, and once we realized we had access to Assembly AI, we knew it was a match made in heaven. We then designed basic functionality and created flowcharts to identify possible points of difficulty. After the design phase, we developed the project using HTML, CSS, Node.js, jQuery, and Assembly AI.
## Challenges we ran into
We initially hoped to use Python as our main language; however, learning Django while also finding ways to provide accurate feedback proved too difficult within the time frame, which led us to build the project in JavaScript. Learning Node.js and Assembly AI was also significantly difficult given the time constraints.
## Accomplishments that we're proud of
Having run into countless problems with Django and Python in the beginning, we decided to switch to a JavaScript base. With only half the time remaining, we were forced to be creative and work diligently to finish before the deadline. Ultimately, the end product was better than we could have hoped for and incorporated many concepts that were completely new to us. It is this ability to problem-solve and learn quickly that we are both very proud of.
## What we learned
Along the way to finishing our project, some of (far from all) the things we learnt about were: web device interfaces for recording audio, networking and websockets to help communicate with external APIs, audio streams with machine learning, running javascript as a backend, using NodeJS modules, hosting client and server side platforms, and in general, user experience optimization as a whole.
## What's next for TSPeach
One feature we hoped for but were unable to include was richer user feedback based on pronunciation. We initially wanted to analyze and compare each user's pronunciation to a text-to-speech engine, but it was too hard to do in the time frame, so this is another feature we would love to add. Optimizing our interface with AssemblyAI would be our next major goal: currently, the asynchronous handling of responses from AssemblyAI uses a single async thread, and having multiple threads collaborating would be the ultimate goal.
|
## Inspiration
Over this past semester, Alp and I were in the same data science class together, and we were really interested in how data can be applied through various statistical methods. Wanting to utilize this knowledge in a real-world application, we decided to create a prediction model using machine learning. This would allow us to apply the concepts that we learned in class, as well as to learn more about various algorithms and methods that are used to create better and more accurate predictions.
## What it does
This project consists of taking a dataset containing over 280,000 real-life credit card transactions made by European cardholders over a two-day period in September 2013, with a variable determining whether the transaction was fraudulent, also known as the ground truth. After conducting exploratory data analysis, we separated the dataset into training and testing data, before training the classification algorithms on the training data. After that, we observed how accurately each algorithm performed on the testing data to determine the best-performing algorithm.
## How we built it
We built it in Python using Jupyter notebooks, where we imported all the necessary libraries for plotting, visualizing, and modeling the dataset. From there, we began exploratory data analysis to understand the imbalances in the data and the different variables. We discovered that several variables were unknown to us due to customer confidentiality, so we first applied principal component analysis (PCA) to reduce the dimensionality of the dataset by removing the unknown variables and analyzing the data using the only two variables known to us: the amount and time of each transaction. Thereafter, we balanced the dataset using the SMOTE technique, since the majority of transactions were not fraudulent; to detect fraud, we had to ensure the training data had an equal proportion of fraudulent and non-fraudulent values in order to return accurate predictions. We then applied six different classification algorithms to the training data: Naive Bayes, Decision Tree, Random Forest, K-Nearest Neighbors, Logistic Regression, and XGBoost. After training, we applied these algorithms to the testing data and observed how accurately each one predicted fraudulent transactions. We cross-validated each algorithm by applying it to every subset of the dataset to reduce overfitting. Finally, we used evaluation metrics such as accuracy, precision, recall, and F1 score to compare which algorithm performed best at predicting fraudulent transactions.
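As an illustration of the evaluation step, the metrics we compared can all be derived from confusion-matrix counts. This is a small Python sketch of those formulas (not our notebook code), where recall matters most for a rare class like fraud:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # fraction of real fraud caught
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

On an imbalanced dataset a model can score high accuracy while missing most fraud, which is why we compared precision, recall, and F1 as well.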
## Challenges we ran into
The biggest challenge was the sheer amount of research and trial and error required to build this model. As this was our first time building a prediction model, we had to do a lot of reading to understand the various steps and concepts needed to clean and explore the dataset, as well as the theory and mathematical concepts behind the classification algorithms in order to model the data and check for accuracy.
## Accomplishments that we're proud of
We are very proud that we are able to create a working model that is able to predict fraudulent transactions with very high accuracy, especially since this was our first major ML model that we have made.
## What we learned
We learned a lot about the process of building a machine learning application, such as cleaning data, conducting exploratory data analysis, creating a balanced sample, and modeling the dataset using various classification strategies to find the model with the highest accuracy.
## What's next for Credit Card Fraud Detection
We want to do more research into the theory and concepts behind the modeling process, especially the classification strategies, as we work towards fine-tuning this model and building more machine learning projects in the future.
|
## **CoLab** makes exercise fun.
In August 2020, **53%** of US adults reported that their mental health has been negatively impacted due to worry and stress over coronavirus. This is **significantly higher** than the 32% reported in March 2020.
That being said, there is no doubt that Coronavirus has heavily impacted our everyday lives. Quarantine has us stuck inside, unable to workout at our gyms, practice with our teams, and socialize in classes.
Doctors have suggested we exercise throughout lockdown to maintain our health and release endorphins.
But it can be **hard to stay motivated**, especially when we’re stuck inside and don’t know the next time we can see our friends.
Our inspiration comes from this, and we plan to solve these problems with **CoLab.**
## What it does
CoLab enables you to workout with others, following a synced YouTube video or creating a custom workout plan that can be fully dynamic and customizable.
## How we built it
Our technologies include: Twilio Programmable Video API, Node.js, and React.
## Challenges we ran into
At first, we found it difficult to resize the Video References for local and remote participants. Luckily, we were able to resize and set the correct ratios using Flexbox and Bootstrap's grid system.
We also needed a way to mute audio and disable video, as these are core functionalities in any video-sharing application. We were lucky enough to find that someone else had the same issue on [stack overflow](https://stackoverflow.com/questions/41128817/twilio-video-mute-participant), which helped us build our solution.
## Accomplishments that we're proud of
When the hackathon began, our team started brainstorming a ton of goals like real-time video, customizable workouts, etc. It was really inspiring and motivating to see us tackle these problems and accomplish most of our planned goals one by one.
## What we learned
This sounds cliché, but we learned how important it is to have strong chemistry within a team. One of the many reasons our team was able to complete most of our goals is that we were all very communicative, helpful, and efficient. We knew we joined to have a good time, but also because we wanted to develop our skills as developers. It helped us grow as individuals, and we are now more competent in using new technologies like Twilio's Programmable Video API!
## What's next for CoLab
Our team will continue developing the CoLab platform and polishing it until we deem it acceptable for publishing. We really believe in the idea of CoLab and want to pursue the idea further. We hope you share that vision and our team would like to thank you for reading this verbose project story!
|
## Inspiration
With the increasing use of **digital** communication being integrated into our day to day lives, there comes an ergonomic risk from factors such as poor posture, eye strain, and poor physical health.
As our group was bouncing one idea after another off each other in a video call, we realized how much the pandemic has impacted the world on a digital level. In fact, throughout the hackathon, many of us are guilty of taking little to no breaks, grinding out our vision to every detail. Additionally, prolonged exposure to digital devices may lead to burnout in video software applications and has been one of the negative environmental factors many of us had to adapt to in light of COVID-19. Our team set out to come up with an innovative solution that prompts users to take breaks while using video software; solutions that involve more than just willpower. Ultimately, this led to the creation of *Flock*.
## What it does
Flock is a real-world implementation of a video platform inspired by the Pomodoro technique that is used for more effective studying and work habits. We have programmed the web application to accommodate each group’s preference for both the work and break duration. For instance, if you set work to 25 minutes and break to 5 minutes, every 25 minutes of work would be met by a 5-minute break prompted by the video platform to do other activities that could involve meditation, exercise, and fun! By setting a mandatory break in relation to the time you work, you are balancing your psychological and physical wellbeing with online work and studying.
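The alternating work/break cycle described above can be sketched as a small piece of timer logic. This is an illustrative Python sketch of the idea (Flock itself is a JavaScript web app, and the function name here is hypothetical):

```python
def session_phase(elapsed_minutes, work=25, rest=5):
    """Return the current phase of the work/break cycle and minutes remaining.

    Defaults follow the classic 25/5 Pomodoro split; groups can set their own.
    """
    cycle = work + rest
    position = elapsed_minutes % cycle  # where we are inside the current cycle
    if position < work:
        return ("work", work - position)
    return ("break", cycle - position)
```

The platform would call this on a shared clock so every participant is prompted into the break at the same moment.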
## How we built it
Flock was built primarily with React.js as frontend, and Node/ Express as back-end. Video streaming was achieved using Twilio Programmed Video API, with Firebase handling realtime status and emoji updates.
## Challenges we ran into
In the beginning, there were a lot of errors with the npm and getting the files to run through the terminal git. It was quite difficult to set up a Chrome extension due to the involvement of multiple languages and having the need to constantly update it manually every time the code is changed. The presence of background and content scripts also adds a layer of complexity as some functions cannot be executed in certain scripts. Overall, it was a fun journey, and we hope to further develop Flock's features following the hackathon.
|
## Inspiration
As more and more blockchains transition to using Proof of Stake as their primary consensus mechanism, the importance of validators becomes more apparent. The security of entire digital economies, people's assets, and global currencies rely on the security of the chain, which at its core is guaranteed by the number of tokens that are staked by validators. These staked tokens not only come from validators but also from everyday users of the network. In the current system there is very little distinguishing between validators other than the APY that each provides and their name (a.k.a. their brand). We aim to solve this issue with Ptolemy by creating a reputation score that is tied to a validator's DID using data found both on and off chain.
This pain point was discovered as our club, being validators on many chains such as Evmos, wanted a way to earn more delegations through putting in more effort into pushing the community forward. After talking with other university blockchain clubs, we discovered that the space was seriously lacking the UI and data aggregation processes to correlate delegations with engagement and involvement in a community.
We confirmed this issue by reflecting on our shared experiences as users of these protocols: when deciding which validators to delegate our tokens to on Osmosis, we really had no way of choosing between validators other than judging based on APY and looking them up on Twitter to see what they did for the community.
## What it does
Ptolemy calculates a reputation score based on a number of factors and ties this score to validators on chain using Sonr's DID module. These factors include both on-chain and off-chain metrics. We fetch on-chain validator data from Cosmoscan and assign each validator a reputation score based on the number of blocks proposed, governance votes, number of delegators, and voting power, evaluating each validator with a mathematical formula over the normalized data that yields a score between 0 and 5. Our project includes not only the equation behind this score but also a web app showcasing what a delegation UI would look like with the reputation score included. We also include mock data tying in social media platforms such as Reddit, Twitter, and Discord to highlight a validator's engagement with the community, although this carries less weight than the other factors.
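The scoring idea can be sketched as min-max normalization of each on-chain metric across validators, followed by a weighted average scaled to 0-5. The weights and metric names below are illustrative assumptions, not our exact production formula:

```python
def reputation_score(metrics, weights):
    """Normalize each metric across validators, then combine into a 0-5 score.

    metrics: {validator_name: {metric_name: raw_value}}
    weights: {metric_name: relative_weight}
    """
    names = list(weights)
    lo = {n: min(v[n] for v in metrics.values()) for n in names}
    hi = {n: max(v[n] for v in metrics.values()) for n in names}
    scores = {}
    for validator, vals in metrics.items():
        total = 0.0
        for n in names:
            span = hi[n] - lo[n]
            norm = (vals[n] - lo[n]) / span if span else 1.0  # min-max to [0, 1]
            total += weights[n] * norm
        scores[validator] = round(5 * total / sum(weights.values()), 2)
    return scores
```

Off-chain engagement signals would enter the same weighted sum, just with smaller weights than the on-chain metrics.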
## How we built it
First, we started with a design doc, laying out all the features. Next, we built out the design in Figma, looking at different Defi protocols for inspiration. Then we started coding.
We built it using Sonr as our management system for DIDs, React, and Chakra for the front end, and the backend in GoLang.
## Challenges we ran into
Integrating the Sonr API was quite difficult; we had to hop on a call with an engineer from the team to work through the bug. We ended up having to use the GoLang API instead of the Flutter SDK. During the ideation phase, we also had to figure out what off-chain data would be useful for choosing between validators.
## Accomplishments that we're proud of
We are proud of learning a new technology stack from the ground up in the form of the Sonr DID system and integrating it into a much-needed application in the blockchain space. We are also proud of the fact that we focused on deeply understanding the validator reputation issue so that our solution would be comprehensive in its coverage.
## What we learned
We learned how to bring together diverse areas of software to build a product that requires so many different moving components. We also learned how to look through many sets of documentation and learn the minimum we needed to hack out what we wanted to build within the time frame. Lastly, we learned to efficiently bring these different components together in one final product that does justice to each of their individual complexities.
## What's next for Ptolemy
Ptolemy is named in honor of the eponymous 2nd Century scientist who generated a system to chart the world in the form of longitude/latitude which illuminated the geography world. In a similar way, we hope to bring more light to the decision making process of directing delegations. Beyond this hackathon, we want to include more important metrics such as validator downtime, jail time, slashing history, and history of APY over a certain time period. Given more time, we could have fetched this data from an indexing service similar to The Graph. We also want to flesh out the onboarding process for validators to include signing into different social media platforms so we can fetch data to determine their engagement with communities, rather than using mock data. A huge feature for the app that we didn't have time to build out was staking directly on our platform, which would have involved an integration with Keplr wallet and the staking contracts on each of the appchains that we chose.
Besides these staking related features, we also had many ideas to make the reputation score a bigger component of everyone's on chain identity. The idea of a reputation score has huge network effects in the sense that as more users and protocols use it, the more significance it holds. Imagine a future where lending protocols, DEXes, liquidity mining programs, etc. all take into account your on-chain reputation score to further align incentives by rewarding good actors and slashing malicious ones. As more protocols integrate it, the more power it holds and the more seriously users will manage their reputation score. Beyond this, we want to build out an API that also allows developers to integrate our score into their own decentralized apps.
All this is to work towards a future where Ptolemy will fully encapsulate the power of DID’s in order to create a more transparent world for users that are delegating their tokens.
Before launch, we need to stream in data from Twitter, Reddit, and Discord rather than using mock data. We will also allow users to stake directly on our platform. Then we need to integrate with different lending platforms to generate each validator's reputation score on-chain, and launch on testnet. Right now we cover the top 20 validators; moving forward we will add more. We also want to query jail time and slashing of validators in order to create a more comprehensive reputation score. Off-chain, we want to aggregate Discord, Reddit, Twitter, and community forum posts to see validators' contributions to the chain they are validating on. Finally, we want to create an API that allows developers to use this aggregated data on their own platforms.
|
## About Climate Connect
Nest makes your home adaptable and energy efficient, but Climate Connect allows you to truly customize and
tune your environment to your body.
Using data such as your sleeping pattern, Climate Connect is able to automatically control
heating and cooling of your environment so it is the most comfortable for you.
## Technologies
The Climate Connect backend is a Django server that links your body to the Nest API.
The wearable technology is a Pebble Time app that collects data on you and lets your Nest know when your
climate needs an adjustment.
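The adjustment decision can be sketched as a mapping from the wearer's current sleep stage to a temperature setpoint. This Python sketch is purely illustrative (the stage names, offsets, and function name are assumptions, and the actual Nest API call is omitted):

```python
def target_temperature(sleep_stage, baseline=21.0):
    """Pick a thermostat setpoint in Celsius from the wearer's sleep stage.

    Deeper sleep is generally more comfortable at slightly cooler temperatures,
    so each stage applies an offset to the user's baseline preference.
    """
    offsets = {"awake": 0.0, "light": -1.0, "deep": -2.0, "rem": -0.5}
    return baseline + offsets.get(sleep_stage, 0.0)  # unknown stage: no change
```

The Pebble app would report the detected stage, and the Django server would push the resulting setpoint to the Nest.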
|
## About the Project
**ClimaGuard** is born from a profound concern for our planet's future. Despite the growing awareness of climate change, many people are unsure how to contribute effectively. ClimaGuard aims to bridge this gap by transforming everyday actions into impactful climate solutions. By leveraging advanced technology, we empower individuals and communities to actively participate in climate protection, making it easy and rewarding to contribute to a healthier planet.
## Inspiration
The inspiration for **ClimaGuard** came from the realization that while everyone desires a sustainable future, the path to achieving it isn't always clear. Remote or overlooked areas often suffer from inadequate climate action and waste management, leading to significant environmental damage. We envisioned ClimaGuard as a solution that democratizes climate action, making it accessible to everyone, everywhere.
## What We Learned
Through this project, we gained valuable insights into:
* The complexities of integrating various technologies for a unified purpose.
* The importance of user-centered design in encouraging active participation.
* The challenges of handling real-time data and privacy concerns responsibly.
* Effective project management and the agile adaptation of plans to overcome unforeseen difficulties.
## How We Built It
We combined cutting-edge technology with user-friendly design:
* **Frontend**: Built with React to ensure a dynamic and responsive user interface.
* **Backend**: Utilized Django for high performance and efficient handling of large volumes of requests.
* **AI Integration**: Leveraged Amazon Bedrock Claude 3 to analyze waste images, determine waste types, urgency, and environmental impact.
* **User Authentication**: Implemented Propel Auth for secure and seamless access with options for passwordless entry or Google login.
* **Chatbot Integration**: Integrated with You.com chatbot to provide users with real-time assistance and support, enhancing user experience and engagement.
## Challenges We Faced
Our journey was filled with challenges, including:
* Difficulties in integrating Amazon Bedrock Claude 3 for accurate waste recognition.
* Issues with Propel Auth during user authentication.
* Hurdles in fine-tuning the AI for precise waste classification.
## Accomplishments We're Proud Of
We are proud of:
* Our AI integration that accurately classifies types of waste.
* The seamless user interface that encourages user engagement.
* Effective collaboration with local authorities to enhance cleanup efforts.
* Developing a reward system that motivates continuous user participation.
* Implementing a leaderboard to foster a sense of community and friendly competition among users.
* Rating users based on carbon emission reductions achieved through their actions, providing tangible impact metrics.
## What's Next for ClimaGuard
Looking ahead, we plan to:
* Expand the AI's capabilities and increase our geographic coverage.
* Integrate with smart city projects to enhance our impact.
* Support community clean-up events and develop partnerships with local businesses for a more impactful reward system.
* Enhance the leaderboard feature to include more metrics and rewards, driving greater user engagement and community involvement.
* Continue rating users on their carbon emission reductions to encourage sustainable behaviors and measure collective impact.
|
## Inspiration
Pianos are usually heavy and expensive, so a keyboard made from cheap and portable materials like paper can be very useful. That's why we came up with the paper piano idea. With a paper piano, it's natural to connect it to a computer and achieve multiple purposes like composing. While most composers rely on midi file export to view and make adjustment to their compositions, we wish to realize real-time visual representation of the music being played so people can view their music notes in real time.
## What it does
Pianeer is a highly innovative and convenient music composing software-plus-hardware system. Multiple modes are available for different musical functions. It includes a paper piano which provides the real experience of playing a keyboard but can be easily rolled up and carried. The composing mode of the software provides a real-time visual representation of the music being played and exports a MIDI file. The play mode provides sheet music and keyboard highlights for practice purposes. The practice mode allows users to practice their compositions casually without being recorded. Pianeer is extremely multi-functional and portable, and can be used by both beginners and mature composers.
## How we built it
We put electric paint on a hard sheet of paper and connected it to Arduinos via wires. Using the capacitive sensing library, we are able to register input signals by simply placing a finger on the paper. We export the input signals to our program to generate sound and drive the other functions.
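For the sound-generation step, each key press has to be mapped to a pitch. A standard way to do this (a sketch of the general technique, not necessarily the exact mapping our program uses) is the equal-temperament formula over MIDI note numbers:

```python
def key_frequency(midi_note):
    """Equal-temperament frequency in Hz for a MIDI note number.

    A4 is MIDI note 69 at 440 Hz; each semitone multiplies the
    frequency by the twelfth root of two.
    """
    return 440.0 * 2 ** ((midi_note - 69) / 12)
```

Because the mapping is a pure function of the key index, it also makes exporting the played notes to a MIDI file straightforward.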
## Challenges we ran into
* It's very hard to accurately draw so many keys on the paper using electric paint since each key has to be clearly separated.
* Adjusting delay and accuracy of the paper piano is very challenging.
* Converting music input signals to a visual representation.
* User interface design.
## Accomplishments that we're proud of
* Lovely user interface that corresponds to the superhero theme.
* Pretty accurate sensing and sound generation of the paper piano.
* Real time visual representation of the music
* Various functions and modes we've accomplished.
## What we learned
Pygame, arduino, multi-threading
## What's next for Pianeer
Build a more refined prototype of the paper piano and realize the player mode we did not have time to finish this time.
|
# Privileged
This app was made during nwHacks 2018.
## Proposal
A web application for conducting a ‘privilege walk’. The questions are tailored towards the tech community in North America.
## Social Issue
The Privilege of Not Understanding Privilege.
## Links
The demo app can be found here: [Privileged](http://www.privileged.tech)
* [Devpost](https://devpost.com/software/privileged)
* [Github](https://github.com/FlyteWizard/whatthetech)
---
### Resources
* <https://edge.psu.edu/workshops/mc/power/privilegewalk.shtml>
* <https://hackernoon.com/tech-your-privilege-at-the-door-5d8da0c41c6b>
* <https://www.psychologytoday.com/blog/feeling-our-way/201702/the-privilege-not-understanding-privilege>
### Contributors
* [Amy Hanvoravongchai](https://github.com/amyhanv)
* [Dominique Charlebois](https://github.com/FlyteWizard)
* [Macguire Rintoul](https://github.com/mrintoul)
* [Sophia Chan](https://github.com/schan27)
|
## Inspiration
The inspiration for this project was both personal experience and the presentation from Ample Labs during the opening ceremony. Last summer, Ryan was preparing to run a summer computer camp and was taking registrations and payments on a website. A mother reached out to ask if we had any discounts available for low-income families. We have offered some in the past, but don't advertise them for fear of misuse by average- or high-income families. We also wanted a way to verify this person's income; if we had had WeProsper, verification would have been easy. In addition to the issues associated with income verification, it is likely that there are many programs out there (like the computer camp discounts) that low-income families aren't aware of. Ample Labs' presentation inspired us with the power of connecting people with services they should be able to access but aren't aware of. WeProsper would help low-income families become aware of the services available to them at a discount (transit passes, for another example) and verify their income easily in one place, so they can access the services they need without bundles of income-verification paperwork. As such, WeProsper gives low-income families a chance to prosper and improve their financial stability. By doing this, WeProsper would increase social mobility in our communities long-term.
## What it does
WeProsper provides a login system which allows users to verify their income by uploading a PDF of their notice of assessment or income proof documents from the CRA and visit service providers posted on the service with a unique code the service provider can verify with us to purchase the service. Unfortunately, not all of this functionality is implemented just yet. The login system works with Auth0, but the app mainly runs with dummy data otherwise.
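The unique-code flow described above could be implemented as a keyed hash that only our server can produce, so a service provider can check a code with us without ever seeing the family's documents. This is a hedged Python sketch under assumed names (the secret, code format, and functions are hypothetical; this is not the current dummy-data implementation):

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical; would be kept private by WeProsper

def issue_code(user_id, service_id):
    """Create a short code tied to a verified user and a specific service."""
    msg = f"{user_id}:{service_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:8]

def verify_code(user_id, service_id, code):
    """Service providers call this endpoint-style check to confirm a code."""
    return hmac.compare_digest(issue_code(user_id, service_id), code)
```

Because the code is derived from the server secret, a provider cannot forge one, and a code for one service cannot be reused for another.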
## How We built it
We used Auth0, react, and UiPath to read the PDF doing our on-site demo. UiPath would need to be replaced in the future with a file upload on the site. The site is made with standard web technologies HTML, CSS and Javascript.
## Challenges We ran into
The team was working with technologies that are new to us, so a lot of the hackathon was spent learning these technologies. These technologies include UiPath and React.
## Accomplishments that we're proud of
We believe WeProsper has a great value proposition for both organizations and low-income families and isn't easily replicated by other solutions. We're excited about the ability to share a proof-of-concept that could have massive social impact. Personally, we are also excited that every team member improved skills (technical and non-technical) that will be useful to them in the future.
## What we learned
The team learned a lot about React, and even just HTML/CSS. The team also learned a lot about how to share knowledge between team members with different backgrounds and experiences in order to develop the project.
## What's next for WeProsper
WeProsper would like to use AI to detect anomalies in the future when verifying income.
|
## Inspiration
Neuro-Matter is an integrated social platform designed to combat not one, but 3 major issues facing our world today: Inequality, Neurological Disorders, and lack of information/news.
We started Neuro-Matter with the aim of helping people facing inequality at different levels of society. Though inequality is often assumed to lead only to physical violence, its impacts at the neurological/mental level are left neglected.
Upon seeing the disastrous effects, we have realized the need of this hour and have come up with Neuro-Matter to effectively combat these issues in addition to the most pressing issue our world faces today: mental health!
## What it does
1. "Promotes Equality" and provides people the opportunity to get out of mental trauma.
2. Provides a hate-free social environment.
3. Helps People cure the neurological disorder
4. Provide individual guidance to support people with the help of our volunteers.
5. Provides reliable news/information.
6. Have an AI smart chatbot to assist you 24\*7.
## How we built it
Overall, we used HTML, CSS, React.js, Google Cloud, Dialogflow, Google Maps, and Twilio's APIs. We used Google Firebase's Realtime Database to store, organize, and secure our user data, which is used for login and signup. The service's backend is built with Node.js, which serves the webpages and enables many useful functions. We have multiple pages as well, such as the home page, profile page, signup/login pages, and a news/information/thought-sharing page.
## Challenges we ran into
We had a couple of issues with databasing, as the password authentication would only work sometimes. Moreover, since we used Visual Studio's collaboration features for the first time, we faced many VS Code issues (not code related). Since we were all working in the same time zone, it was not difficult for us to work together, but it was hard to get everything done on time and keep to a rigid working model.
## Accomplishments that we're proud of
Overall, we are proud to create a working social platform like this and are hopeful to take it to the next steps in the future as well. Specifically, each of our members is proud of their amazing contributions.
We believe in the module we have developed and are determined to take this forward even beyond the hackathon to help people in real life.
## What we learned
We learned a lot, to say the least!! Overall, we learned a lot about databasing and were able to strengthen our React.js, Machine Learning, HTML, and CSS skills as well. We successfully incorporated Twilio's APIs and were able to pivot and send messages. We have developed a smart bot that is capable of both text and voice-based interaction. Overall, this was an extremely new experience for all of us and we greatly enjoyed learning new things. This was a great project to learn more about platform development.
## What's next for Neuro-Matter
This was an exciting new experience for all of us and we're all super passionate about this platform and can't wait to hopefully unveil it to the general public to help people everywhere by solving the issue of Inequality.
|
## Inspiration
In a lot of mass shootings, there is a significant delay from the time at which police arrive at the scene, and the time at which the police engage the shooter. They often have difficulty determining the number of shooters and their location. ViGCam fixes this problem.
## What it does
ViGCam spots and tracks weapons as they move through buildings. It uses existing camera infrastructure, location tags and Google Vision to recognize weapons. The information is displayed on an app which alerts users to threat location.
Our system could also be used to identify wounded people after an emergency incident, such as an earthquake.
## How we built it
We used Raspberry Pi and Pi Cameras to simulate an existing camera infrastructure. Each individual Pi runs a Python script where all images taken from the cameras are then sent to our Django server. Then, the images are sent directly to Google Vision API and return a list of classifications. All the data collected from the Raspberry Pis can be visualized on our React app.
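The per-frame upload from each Pi can be sketched as packaging the camera image into a JSON payload for the Django endpoint before it is forwarded to Google Vision. This Python sketch shows only the payload shape (the field names and endpoint are illustrative assumptions, and the network call itself is omitted):

```python
import base64
import json

def build_upload(camera_id, image_bytes):
    """Package a Pi camera frame as a JSON payload for the server.

    Images are base64-encoded so the raw bytes survive JSON transport.
    """
    return json.dumps({
        "camera": camera_id,  # location tag identifying this camera
        "image": base64.b64encode(image_bytes).decode("ascii"),
    })

def decode_upload(payload):
    """Server side: recover the camera tag and the original image bytes."""
    data = json.loads(payload)
    return data["camera"], base64.b64decode(data["image"])
```

Tagging each frame with its camera's location is what lets the React app place a detected threat on the building map.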
## Challenges we ran into
SSH connections do not work on the HackMIT network, so our current setup involves turning one camera on before activating the second. In a real-world deployment, we would be using an existing camera network, not our Raspberry Pi cameras, to collect video data.
We also had a difficult time getting consistent identification of our objects as weapons. This is largely because, for obvious reasons, we could not bring in actual weapons. Up close, however, we do get consistent identification of our team members' belongings.
With our current server setup, we consistently hit server-overload errors, so we added an extended delay between image sends. Given time, we would implement an actual camera network and modify our system to perform object recognition on video rather than individual pictures, which would improve our accuracy. WebSockets could be used to display the collected data in real time.
## Accomplishments that we’re proud of
1) It works!!! (We successfully completed our project in 24 hours.)
2) We learned to use Google Cloud API.
3) We also learned how to use raspberry pi. Prior to this, none on our team had any hardware experience.
## What we learned
1) We learned about coding in a real world environment
2) We learned about working on a team.
## What's next for ViGCam
We are planning on working through our kinks and adding video analysis. We could add sound detection for gunshots to detect emergent situations more accurately. We could also use more machine learning models to predict where the threat is going and distinguish between threats and police officers. The system can be made more robust by causing the app to update in real time. Finally, we would add the ability to use law enforcement emergency alert infrastructure to alert people in the area of shooter location in real time. If we are successful in these aspects, we are hoping to either start a company, or sell our idea.
|
## Inspiration
The inspiration for InstaPresent came from our frustration with constantly having to create presentations for class and being inspired by the 'in-game advertising' episode on Silicon Valley.
## What it does
InstaPresent is a tool that uses your computer's microphone to generate a presentation in real-time. It can retrieve images and graphs and summarize your words into bullet points.
## How we built it
We used Google's Speech-to-Text API to transcribe audio from the laptop's microphone. Speech is captured while the user talks, and when they stop speaking, the aggregated text is sent to the server via WebSockets to be processed.
## Challenges we ran into
Summarizing text into bullet points was a particularly difficult challenge as there are not many resources available for this task. We ended up developing our own pipeline for bullet-point generation based on part-of-speech and dependency analysis. We also had plans to create an Android app for InstaPresent, but were unable to do so due to limited team members and time constraints. Despite these challenges, we enjoyed the opportunity to work on this project.
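As a toy illustration of the bullet-point idea (far simpler than the part-of-speech and dependency pipeline described above), one can compress each sentence by dropping function words. The stopword list here is an arbitrary assumption for demonstration:

```python
# Toy bullet-point generator: split the transcript into sentences and
# compress each one by dropping common function words. Not the team's
# actual pipeline, which used POS and dependency analysis.
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "to",
             "and", "that", "in", "it", "we", "our", "this", "very"}

def to_bullets(text, max_words=6):
    bullets = []
    for sentence in re.split(r"[.!?]+", text):
        words = [w for w in sentence.split() if w.lower() not in STOPWORDS]
        if words:
            bullets.append(" ".join(words[:max_words]))
    return bullets

print(to_bullets("Our revenue grew by forty percent. We expanded to Europe."))
# ['revenue grew by forty percent', 'expanded Europe']
```

A real pipeline would keep grammatical heads (subject, verb, object) rather than a flat stopword filter, but the input/output shape is the same.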
## Accomplishments that we're proud of
We are proud of creating a web application that utilizes a variety of machine learning and non-machine learning techniques. We also enjoyed the challenge of working on an unsolved machine learning problem (sentence simplification) and being able to perform real-time text analysis to determine new elements.
## What's next for InstaPresent
In the future, we hope to improve InstaPresent by predicting what the user intends to say next and improving the text summarization with word reordering.
|
## Inspiration
We wanted to create something that helped other people. We had so many ideas, yet couldn't stick to one. Luckily, we ended up talking to Phoebe(?) from Hardware, who talked about how great textiles would be in a project. Something clicked, and we started brainstorming ideas. We ended up with this project, which could help a lot of people in need, including friends and family close to us.
## What it does
Senses the orientation of your hand, and outputs either a key press, mouse move, or a mouse press. What it outputs is completely up to the user.
## How we built it
Sewed a glove, attached a gyroscopic sensor, wired it to an Arduino Uno, and programmed it in C# and C++.
## Challenges we ran into
Limited resources (certain hardware components were out of stock), time management (because of all the fun events!), and Arduino communication through the serial port.
## Accomplishments that we're proud of
We all learned new skills, like sewing, coding in C++, and programming with the Arduino to communicate with other languages, like C#. We're also proud of the fact that we actually fully completed our project, even though it's our first hackathon.
## What we learned
~~how 2 not sleep lolz~~
Sewing, coding, how to wire gyroscopes, sponsors, DisguisedToast winning Hack the North.
## What's next for this project
We didn't get to add all the features we wanted, due both to hardware limitations and time limitations. Some features we would like to add are the ability to save and load configs, automatic input setup, making it wireless, and adding a touch sensor to the glove.
|
winning
|
## Inspiration
On our way to PennApps, our team was hungrily waiting in line at Shake Shack while trying to think of the best hack idea to bring. Unfortunately, rather than being able to sit comfortably and pull out our laptops to research, we were forced to stand in a long line to reach the cashier, only to be handed a clunky buzzer that countless other greasy-fingered customers had laid their hands on. We decided that there had to be a better way: a way to simply walk into a restaurant, spend more time with friends, and stand in line as little as possible. So we made it.
## What it does
Q'd (pronounced "queued") digitizes the process of waiting in line by allowing restaurants and events to host a line through the mobile app and letting users line up digitally through their phones. It also gives users a sense of the different opportunities around them by letting them search for nearby queues. Once in a queue, the user "takes a ticket" whose position decrements until they are first in line. In the meantime, they are free to do whatever they want and are not limited to the 2-D pathway of a line for the next few minutes (or even hours).
When the user is soon to be the first in line, they are sent a push notification and requested to appear at the entrance where the host of the queue can check them off, let them in, and remove them from the queue. In addition to removing the hassle of line waiting, hosts of queues can access their Q'd Analytics to learn how many people were in their queues at what times and learn key metrics about the visitors to their queues.
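The ticket flow above can be sketched as a small in-memory queue. The real app stores this state in Hasura and sends push notifications; the class below is just an illustration of the logic, and the `notify_at` threshold is an assumption:

```python
# Minimal sketch of Q'd's ticket logic: take a ticket, track position,
# notify near the front, and let the host check people off.
class Queue:
    def __init__(self, name, notify_at=2):
        self.name = name
        self.tickets = []          # user ids, front of line first
        self.notify_at = notify_at

    def take_ticket(self, user):
        self.tickets.append(user)
        return self.position(user)

    def position(self, user):
        return self.tickets.index(user)  # 0 == first in line

    def check_off(self):
        """Host admits the first person and removes them from the queue."""
        return self.tickets.pop(0)

    def should_notify(self, user):
        return self.position(user) < self.notify_at

q = Queue("Shake Shack")
for u in ["ana", "ben", "cal"]:
    q.take_ticket(u)
q.check_off()                  # "ana" is admitted
print(q.position("cal"))       # 1 -> cal is next after ben
print(q.should_notify("cal"))  # True (within the notify window)
```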
## How we built it
Q'd comes in three parts; the native mobile app, the web app client, and the Hasura server.
1. The mobile iOS application is built with Apache Cordova, which allows the native app to be written in pure HTML and JavaScript. This framework lets the application run on Android, iOS, and the web, and makes it incredibly responsive.
2. The web application is built with good ol' HTML, CSS, and JavaScript. Using the Materialize CSS framework gives the application a professional feel as well as resources such as AmChart that provide the user a clear understanding of their queue metrics.
3. Our beast of a server was constructed with the Hasura application which allowed us to build our own data structure as well as to use the API calls for the data across all of our platforms. Therefore, every method dealing with queues or queue analytics deals with our Hasura server through API calls and database use.
## Challenges we ran into
A key challenge we discovered was the implementation of Cordova and its associated plugins. Having been primarily Android developers, the native environment of the iOS application challenged our skills and gave us a lot to learn before we were ready to implement it properly.
Next, although less of a challenge, the Hasura application had a learning curve before we were able to really use it successfully. In particular, we had issues with relationships between different objects within the database. Nevertheless, we persevered and got it working really well, which allowed for an easier time building the front end.
## Accomplishments that we're proud of
Overall, we're extremely proud of coming in with little knowledge about Cordova, iOS development, and only learning about Hasura at the hackathon, then being able to develop a fully responsive app using all of these technologies relatively well. While we considered making what we are comfortable with (particularly web apps), we wanted to push our limits to take the challenge to learn about mobile development and cloud databases.
Another accomplishment we're proud of is making it through our first hackathon longer than 24 hours :)
## What we learned
During our time developing Q'd, we were exposed to and became proficient in various technologies ranging from Cordova to Hasura. However, besides technology, we learned important lessons about taking the time to properly flesh out our ideas before jumping in headfirst. We devoted the first two hours of the hackathon to really understand what we wanted to accomplish with Q'd, so in the end, we can be truly satisfied with what we have been able to create.
## What's next for Q'd
In the future, we're looking towards enabling hosts of queues to include premium options for users to take advantage to skip lines of be part of more exclusive lines. Furthermore, we want to expand the data analytics that the hosts can take advantage of in order to improve their own revenue and to make a better experience for their visitors and customers.
|
## Inspiration 🔥
While on the way to CalHacks, we drove past a fire in Oakland Hills that had started just a few hours prior, meters away from I-580. Over the weekend, the fire quickly spread and ended up burning an area of 15 acres, damaging 2 homes and prompting 500 households to evacuate. This served as a harsh reminder that wildfires can and will start anywhere as long as few environmental conditions are met, and can have devastating effects on lives, property, and the environment.
*The following statistics are from the year 2020[1].*
**People:** Wildfires killed over 30 people in our home state of California. The pollution is set to shave off a year of life expectancy of CA residents in our most polluted counties if the trend continues.
**Property:** We sustained $19b in economic losses due to property damage.
**Environment:** Wildfires have made a significant impact on climate change. It was estimated that the smoke from CA wildfires made up 30% of the state’s greenhouse gas emissions. UChicago also found that “a single year of wildfire emissions is close to double emissions reductions achieved over 16 years.”
Right now (as of 10/20, 9:00AM): According to Cal Fire, there are 7 active wildfires that have scorched a total of approx. 120,000 acres.
[[1] - news.chicago.edu](https://news.uchicago.edu/story/wildfires-are-erasing-californias-climate-gains-research-shows)
## Our Solution: Canary 🐦🚨
Canary is an early wildfire detection system powered by an extensible, low-power, low-cost, low-maintenance sensor network solution. Each sensor in the network is placed in strategic locations in remote forest areas and records environmental data such as temperature and air quality, both of which can be used to detect fires. This data is forwarded through a WiFi link to a centrally-located satellite gateway computer. The gateway computer leverages a Monogoto Satellite NTN (graciously provided by Skylo) and receives all of the incoming sensor data from its local network, which is then relayed to a geostationary satellite. Back on Earth, we have a ground station dashboard that would be used by forest rangers and fire departments that receives the real-time sensor feed. Based on the locations and density of the sensors, we can effectively detect and localize a fire before it gets out of control.
## What Sets Canary Apart 💡
Current satellite-based solutions include Google’s FireSat and NASA’s GOES satellite network. These systems rely on high-quality **imagery** to localize the fires, quite literally a ‘top-down’ approach. Google claims it can detect a fire the size of a classroom and notify emergency services in 20 minutes on average, while GOES reports a latency of 3 hours or more. We believe these existing solutions are not effective enough to prevent the disasters that constantly disrupt the lives of California residents as the fires get too big or the latency is too high before we are able to do anything about it. To address these concerns, we propose our ‘bottom-up’ approach, where we can deploy sensor networks on a single forest or area level and then extend them with more sensors and gateway computers as needed.
## Technology Details 🖥️
Each node in the network is equipped with an Arduino 101 that reads from a Grove temperature sensor. This is wired to an ESP8266 that has a WiFi module to forward the sensor data to the central gateway computer wirelessly. The gateway computer, using the Monogoto board, relays all of the sensor data to the geostationary satellite. On the ground, we have a UDP server running in Google Cloud that receives packets from the satellite and is hooked up to a Streamlit dashboard for data visualization.
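The ground-station side can be sketched as a packet parser plus an alert rule. The `"node_id,temp_c,aqi"` payload format and the thresholds below are assumptions for illustration, not the team's actual wire format:

```python
# Hypothetical ground-station logic: parse a sensor packet relayed via
# satellite and decide whether to raise a fire alert for that node.
def parse_packet(payload: bytes):
    node_id, temp, aqi = payload.decode().split(",")
    return {"node": node_id, "temp_c": float(temp), "aqi": float(aqi)}

def fire_alert(reading, temp_threshold=60.0, aqi_threshold=150.0):
    return (reading["temp_c"] >= temp_threshold
            or reading["aqi"] >= aqi_threshold)

reading = parse_packet(b"node-07,72.5,310.0")
print(fire_alert(reading))  # True: both temperature and AQI are elevated
```

In the real deployment this would sit inside the UDP server, with each alert pushed to the Streamlit dashboard along with the node's known location.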
## Challenges and Lessons 🗻
There were two main challenges to this project.
**Hardware limitations:** Our team as a whole is not very experienced with hardware, and setting everything up and getting the different components to talk to each other was difficult. We went through 3 Raspberry Pis, a couple Arduinos, different types of sensors, and even had to fashion our own voltage divider before arriving at the final product. Although it was disheartening at times to deal with these constant failures, knowing that we persevered and stepped out of our comfort zones is fulfilling.
**Satellite communications:** The communication proved tricky due to inconsistent timing between sending and receiving packets. We went through various socket IDs and ports to see if there were any patterns to the delays. Through thorough documentation of the steps taken, we were eventually able to recognize a pattern in when the packets were being sent and modify our code accordingly.
## What’s Next for Canary 🛰️
As we get access to better sensors and gain more experience working with hardware components (especially PCB design), the reliability of our systems will improve. We ran into a fair amount of obstacles with the Monogoto board in particular, but as it was announced as a development kit only a week ago, we have full faith that it will only get better in the future. Our vision is to see Canary used by park services and fire departments in the most remote areas of our beautiful forest landscapes in which our satellite-powered sensor network can overcome the limitations of cellular communication and existing fire detection solutions.
|
## Inspiration
Our inspiration comes from the recent forest fires devastating Australia and destroying the habitats of many animals. Through our app, we want to raise awareness of the effects that climate change can have on ecosystems. Fauna Fund is an interactive ecosystem: the user draws their favourite animals in real life and watches them come to life in the virtual world.
## What it does
Fauna Fund takes a picture of a drawing on a piece of paper and renders game objects in a virtual environment that the user can then interact with and learn about.
## How I built it
We built this app using Unity and C#.
## Challenges I ran into
Throughout the development of this project, all four group members ran into different challenges. Some include, 3D movement and animations with prefabs, linking android camera to Unity, and optimizing the search and classification algorithm.
## Accomplishments that I'm proud of
In general, we are proud that we were able to bring this idea to life. For most of us, it was our first time using Unity's systems to build an app, and exploring the world of game dev was an exciting experience for us all. We hope that our application can help bring awareness to some of the most beautiful ecosystems on Earth and drive conservation efforts to keep them around for years to come.
## What's next for Fauna Fund
Further improvements would give the user more options to interact with their virtual ecosystem. Options like feeding, information bios, and links to conservation efforts would all improve the user experience and promote our cause. Additionally, we would like to further improve our searching and classifying algorithm to accept a wider range of designs with greater precision.
|
winning
|
## Inspiration
We are inspired by the stories of people who have experienced sexual harassment in their lives: people who decided to bear the pain inside and stay silent because they didn't want to lose their jobs, face consequences, or risk further assault.
## What it does
Victims of assault can report the incident online or by phone without going to the police. The system then analyses the report (input data) and matches it against existing records in the database. If any similar record is found, we inform all the victims and connect them with an attorney for consultation and further action. This way, victims can have allies and, as a result, a higher chance of winning the case in the courtroom. Our goal is to collect data about sexual harassment, perform analysis, and eventually provide training to improve society. The platform displays assault-rate analysis on a heat map, which allows companies, organizations, and business owners to provide training that improves their environments and society overall.
## How I built it
We developed a web application using cross-platform technologies such as Gonative.io, Google Places APIs, Bootstrap, and jQuery. We used Machine Learning models and NLP to match the profiles, and computer vision to identify images of the perpetrators. We also used Houndify and Twilio to enable voice operation over the phone, so individuals who don't have access to the internet or smartphones can report a case. Oracle Cloud is used as the server.
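The profile-matching idea can be illustrated with a bare-bones keyword-overlap similarity. The team's actual system used NLP models; this Jaccard-style stand-in, with its crude tokenizer and threshold, is purely an assumption to show the shape of the matching step:

```python
# Illustrative report matcher: score two free-text reports by the overlap
# of their longer words (Jaccard similarity). Not the team's actual model.
def keywords(report: str) -> set:
    return {w.strip(".,").lower() for w in report.split() if len(w) > 3}

def similarity(a: str, b: str) -> float:
    ka, kb = keywords(a), keywords(b)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

r1 = "Tall manager with glasses, marketing department, evening shift"
r2 = "Manager in marketing, tall, wears glasses"
print(similarity(r1, r2) > 0.3)  # True: reports likely describe the same person
```

Reports scoring above a threshold would be clustered, and everyone in the cluster notified and connected with an attorney.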
## Challenges I ran into
Oracle Cloud setup and Houndify setup were challenges for us, as it was our first time using these technologies, but it was a good learning experience and we successfully managed to solve all the problems with the help of the respective teams.
## Accomplishments that I'm proud of
We are proud of achieving a workable system with great features in really a short time.
## What I learned
We learned how to use voice APIs.
## What's next for MeeToo
We would like to enhance this system with more Machine Learning algorithms to increase the accuracy of profile matching, and perform data analytics to better understand the health of societies, especially in workplaces, and give recommendations on how to train the population in order to build a better future with more educated individuals.
**This is an ethically engaged hack because:**
"The most important ethical consideration while developing this technology was to remain neutral and unbiased. Although statistics show that women are the most sexually harassed gender in workplaces, our technology is gender neutral. We had to ensure that the machine learning models we developed are not biased towards a specific gender, which usually happens when you have biased data. For example, given the set of input features (hair color, eye color, age, height, weight), it should not match people of a specific ethnicity more than others. Currently, many machine learning models face bias issues; for example, one model developed by researchers discriminated against Black people in crime prediction more than people of other ethnicities. This is very important in our project because profile matching is our core technology, and it must be ethically sound."
|
## Inspiration
Most of the team are students from UBC and in recent months, it has come to light that UBC has mishandled several reports of sexual harassment and assault. This is not an isolated situation. Across Canada, sexual assault on campus is vastly underreported, both by survivors and by schools.
## What it does
The web app is intended to advocate transparency in regards to sexual violence on Canadian campuses. It gives a platform for student evaluation of schools based on their handling of sexual assault and harassment cases and provides a medium for survivors to anonymously report and share reports of sexual violence.
## How we built it
Through hacking a Ruby on Rails tutorial
## Challenges we ran into
* Installing the actual Ruby on Rails software was a massive challenge; our team met on the day of nwHacks and came together to build this web app. However, installation was 90% debugging and 10% actual hacking.
* Understanding the code beyond the tutorial
## Accomplishments that we're proud of
Installing Ruby on Rails, gaining familiarity with debugging repeating errors
Forming a team!
Completing our first hackathon (4/5 members)
Creating a web app that contributes to social justice and issues rampant in educational institutions
## What we learned
* How to use command line! (one of our members is completely new to back-end hacking)
* How to use and hack in Ruby on Rails (and kinda in Ruby)
* Data mining on actual reports of sexual assaults in campuses
## What's next for Hypo-fish
* expand its functionality by including rating, rankings, and comments
* advocate for transparency by publishing statistical data
* guarantee anonymity and privacy to users who submit reports
|
## Inspiration
**Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms to help us automate tasks is extreme. **Fairness** is becoming one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness.
## Problem
Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data from which past candidates received interviews. However, what if in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in.
Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting.
## What is fairness?
There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) between unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicate that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group.
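The metric can be computed directly from each group's confusion-matrix rates. A minimal sketch (the sign convention here, privileged minus unprivileged, follows the framing above; fairness libraries such as AIF360 may use the opposite sign):

```python
# Average odds difference: mean of the FPR gap and the TPR gap between
# the privileged and unprivileged groups.
def rates(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

def average_odds_difference(priv_true, priv_pred, unpriv_true, unpriv_pred):
    tpr_p, fpr_p = rates(priv_true, priv_pred)
    tpr_u, fpr_u = rates(unpriv_true, unpriv_pred)
    return 0.5 * ((fpr_p - fpr_u) + (tpr_p - tpr_u))

aod = average_odds_difference(
    priv_true=[1, 1, 0, 0], priv_pred=[1, 1, 0, 0],      # men: perfect decisions
    unpriv_true=[1, 1, 0, 0], unpriv_pred=[1, 0, 0, 0])  # women: one qualified rejection
print(aod)  # 0.25
```

A value of 0 means both groups see the same error rates; the 0.25 here reflects the qualified woman being wrongly rejected.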
## What our app does
**jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provide interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness.
### Reweighing Algorithm
If the data is unbiased, we would think that the probability of being accepted and the probability of being a woman would be independent (so the product of the two probabilities). By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being a woman and being accepted, then set the weight (for the woman + accepted category) as expected/actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1). Otherwise, they are given a lower weight. This formula is applied for the other 3 out of 4 combinations of gender x acceptance. Then the reweighed sample is used for training.
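The weight computation described above can be sketched in a few lines: each (group, label) cell gets weight = expected probability / observed probability, where "expected" assumes group and label are independent. The toy data below is an assumption for illustration:

```python
# Sketch of the reweighing preprocessing step (Kamiran and Calders 2012):
# upweight under-represented (group, label) combinations so that group
# and acceptance look statistically independent in the training data.
from collections import Counter

def reweigh(groups, labels):
    """groups, labels: parallel lists, e.g. 'F'/'M' and 1/0 (accepted/rejected)."""
    n = len(labels)
    p_group = Counter(groups)               # counts per group
    p_label = Counter(labels)               # counts per label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) cell
    weights = []
    for g, y in zip(groups, labels):
        expected = (p_group[g] / n) * (p_label[y] / n)  # independence assumption
        observed = p_joint[(g, y)] / n
        weights.append(expected / observed)
    return weights

# Biased toy data: women (F) are accepted less often than men (M).
groups = ["F", "F", "F", "M", "M", "M"]
labels = [1, 0, 0, 1, 1, 0]
w = reweigh(groups, labels)
print(round(w[0], 2))  # 1.5 -> the accepted woman is upweighted
```

The resulting weights are passed as sample weights to the classifier (e.g. `sample_weight` in sklearn's `fit`), so no labels ever need to be changed.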
## How we built it
We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without. We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem on prediction time, classified, and then the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier.
## Challenges we ran into
Training and choosing models with appropriate fairness constraints. After reading relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric.
## Accomplishments that we're proud of
We are proud that we saw tangible differences in the fairness metrics of the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example of when the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her.
## What we learned
Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts.
## What's next for jobFAIR
Hopefully we can make the machine learning more transparent to those without a technical background, such as showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
|
losing
|
## Inspiration
We thought it would be nice if, for example, while working in the Computer Science building, you could send out a little post asking for help from people around you.
It would also enable greater interconnectivity between people at an event without needing to subscribe to anything.
## What it does
Users create posts that are then attached to their location, complete with a picture and a description. Other people can then view it two ways.
**1)** On a map, with markers indicating the location of the post that can be tapped on for more detail.
**2)** As a live feed, with the details of all the posts that are in your current location.
The posts don't last long, however, and only posts within a certain radius are visible to you.
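The radius filter can be sketched with a haversine great-circle distance. The coordinates and the 1 km radius below are assumptions for illustration:

```python
# Sketch of the visibility rule: only posts within `radius_km` of the
# viewer's location are shown in the feed.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def visible_posts(posts, user_lat, user_lon, radius_km=1.0):
    return [p for p in posts
            if haversine_km(p["lat"], p["lon"], user_lat, user_lon) <= radius_km]

posts = [{"id": 1, "lat": 43.4723, "lon": -80.5449},   # nearby (example coords)
         {"id": 2, "lat": 43.6532, "lon": -79.3832}]   # ~100 km away
print([p["id"] for p in visible_posts(posts, 43.4716, -80.5448)])  # [1]
```

In production this filter would run as a database query (e.g. a bounding box plus distance check) rather than in application code.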
## How we built it
Individual pages were built using HTML, CSS, and JS, and interact with a server built using Node.js and Express.js. The database, which uses CockroachDB, was hosted on Amazon Web Services. We used PHP to upload images onto a separate private server. Finally, the app was packaged using Apache Cordova to run on phones. Heroku allowed us to deploy the Node.js portion to the cloud.
## Challenges we ran into
Setting up and using CockroachDB was difficult because we were unfamiliar with it. We were also not used to using so many technologies at once.
|
## Inspiration
In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol.
## What it does
Our app allows for users to search a “hub” using a Google Map API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating.
## How I built it
We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and the Firebase API.
## Challenges I ran into
Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working with 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of!
## Accomplishments that I'm proud of
We are proud of how well we collaborated through adversity, and having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity.
## What I learned
Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved on our Java and android development fluency. From a team perspective, we improved on our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane.
## What's next for SafeHubs
Our next steps for SafeHubs include personalizing user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
|
## Inspiration
All of us have friends or know of someone who has ADHD. ADHD is characterized by being easily distracted, especially by sounds in our everyday life such as a cough or car honk. From this idea of noise distractions, we decided to try creating noise-cancelling headphones for anyone just wanting to focus (because anyone can be distracted at times).
## What it does
Focus is a web application that controls your headphones, cancelling noise and adjusting the volume of certain sounds depending on the environment or activity you want to focus in. For example, Focus allows you to hear traffic while you're jogging, but not the conversations of your neighbors.
## How we built it
We used pyAudioAnalysis to isolate sounds. Through machine learning, we were able to recognize the sounds of voices, music, alarms, clapping, and more. We used HTML, CSS, and JS for the frontend web app, and spent time prototyping with Photoshop and PowerPoint.
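Once a sound is classified, an environment profile decides whether it passes through or gets cancelled. The profiles and label names below are illustrative assumptions, not the app's actual configuration:

```python
# Hypothetical environment profiles: for each activity, which classified
# sound labels pass through the headphones and which are cancelled.
PROFILES = {
    "jogging":  {"pass": {"traffic", "alarm"}, "cancel": {"voices", "music"}},
    "studying": {"pass": {"alarm"},            "cancel": {"voices", "music", "clapping"}},
}

def should_cancel(label: str, profile: str) -> bool:
    p = PROFILES[profile]
    if label in p["pass"]:          # safety-critical sounds always pass
        return False
    return label in p["cancel"]

print(should_cancel("voices", "jogging"))   # True: neighbours' chatter is cancelled
print(should_cancel("traffic", "jogging"))  # False: traffic passes for safety
```

Labels that appear in neither set are left untouched, which keeps unknown sounds audible by default.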
## Challenges we ran into
Real-time noise cancellation is tricky to implement because we need to first listen to and identify a sound before being able to cancel it. We tried using a Raspberry Pi but had problems with installation and ultimately did not get it to work in the end. :(
## Accomplishments that we're proud of
We are proud that we figured out how to use pyAudioAnalysis and got the CSS looking nice. We also met each other during team-building and are proud of how well we collaborated to pull this off.
## What we learned
We learned how to use pyAudioAnalysis and work with audio in our hardware hack. This was Karisa's very first hardware hack, and she learned more about the Raspberry Pi even though it did not work in the end. We all learned more CSS using W3Schools. Md, Suparit, and Yunqi learned more about Material Design and UX practices.
## What's next for Focus
Improving realtime noise-cancellation through Bluetooth and Raspberry Pi. We would also like to give users more customizability over their audio environments, and do market research into which high value-added features the general and ADHD populations would like most.
|
partial
|
## Inspiration
We were all interested in emotional analysis, and we felt that this was a really cool and effective tool for anyone to use. This tool can be used by anyone who is trying to make a better presentation, be that a student, public speaker, or actor.
## What it does
We have built two separate but related tools that can work together to help people make the most compelling presentation possible. For someone who is trying to write a document of some sort, we have the Live Sentiment Analysis tool. This is a web app where someone can edit their work and hone in on a targeted emotional impact.
The Sentiment Analysis tool uses the Watson NLP API to get document-level and sentence/clause-level analysis of the emotional content of the text. We provide regular feedback and updates on the overall and more specific emotional content of a document, as well as how your edits are changing that emotional content. The second tool is used to help people master the audio portion of the presentation. Anyone who wants to use the tool can record an audio file and upload it. We use the Google Voice API to extract the text from this recording. Then, we send this text to the Watson API to perform sentiment analysis on each clause in the presentation. We also analyze the audio data from the mp3 file with the DeepAffects model, which recognizes the emotional content of speech without using any information about the words being spoken. Then we compare the clause-level emotional tags from the text and the audio data to see whether the speaker is really able to match their voice to their words and captivate their audience.
## How we built it
## Challenges we ran into
We had some challenges integrating APIs and integrating the frontend and backend. Other big issues were making the product as effective as possible. For example, we worked on the scoring function so that it would provide good results. The function combined data from the text analysis and the audio analysis, and we had to determine a way of combining this data that reasonably represented the quality of someone's speech.
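As a rough illustration of the kind of combined score described above, here is a toy function in Python. The parallel clause-label inputs, the 70/30 weighting, and the variety bonus are all assumptions for the sketch, not the team's actual algorithm:

```python
def speech_score(text_emotions, audio_emotions, match_weight=0.7):
    """Combine clause-level emotion tags from the transcript and the
    audio track into a single delivery score in [0, 1].

    text_emotions / audio_emotions: parallel lists with one dominant
    emotion label per clause (hypothetical data shapes).
    """
    if not text_emotions or len(text_emotions) != len(audio_emotions):
        raise ValueError("expected parallel, non-empty clause lists")
    # fraction of clauses where the voice matched the words
    matches = sum(t == a for t, a in zip(text_emotions, audio_emotions))
    agreement = matches / len(text_emotions)
    # reward emotional variety so a monotone delivery scores lower
    variety = len(set(audio_emotions)) / len(audio_emotions)
    return match_weight * agreement + (1 - match_weight) * variety
```

A speaker whose voice matches half their clauses but varies their tone would score `0.65` under these illustrative weights.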
Another minor challenge was choosing an effective way to get the clauses. For the real-time text analysis, we had to get clauses that had enough words to represent meaningful emotional data, but were small enough that the user could get frequent feedback.
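That clause-splitting trade-off can be sketched in a few lines of Python. This is a simplified stand-in for whatever tokenization the tool actually uses; `min_words` is an illustrative knob:

```python
import re

def chunk_clauses(text, min_words=5):
    """Split text on clause boundaries, merging fragments shorter than
    min_words into the next chunk so each clause carries enough
    emotional signal while staying small enough for frequent feedback."""
    parts = [p.strip() for p in re.split(r"[,.;:!?]+", text) if p.strip()]
    clauses, buffer = [], []
    for part in parts:
        buffer.append(part)
        if sum(len(p.split()) for p in buffer) >= min_words:
            clauses.append(", ".join(buffer))
            buffer = []
    if buffer:  # attach any trailing fragment to the last clause
        if clauses:
            clauses[-1] += ", " + ", ".join(buffer)
        else:
            clauses.append(", ".join(buffer))
    return clauses
```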
## Accomplishments that we're proud of
We ran audio analysis using a deep neural network. The training data is really hard to find, and we built upon previous models like DeepAffects. Extending the classic neutral, positive and negative tagging, we are proud of the emotions and scores based on a complex algorithm that our application predicts from your voice and maps it to the actual sentiment using IBM Watson.
## What we learned
We learned that there is not much publicly available data for audio training, hence it was important to build upon previously trained models and use RESTful APIs wherever possible. At the same time, we learned about microservices, communication between client and server, and using jQuery and JavaScript (as back-end engineers).
## What's next for Hearo
We want to keep building the product we started at CalHacks 6.0. Future features could include live emotion tagging from the microphone, reduced latency, and improved accuracy of the models used.
|
## Inspiration
Computer Science, Software Engineering, and Information Systems are international qualifications, enabling people to work globally and in a very broad variety of roles. Through this hackathon, we wanted to bridge the gap between humans and machines in a creative way, so we decided to create a project that can make computers understand not only the literal meanings of our language but also the hidden emotions underneath them.
## What it does
Our project takes in a recording and detects the emotion of the speaker. This is done by analysis of both the audio and the language used. Thus, our app can detect the language tone and the emotional tone. Furthermore, a text input can also be analyzed for language tone on its own.
## How we built it
We utilized Bootstrap to create the website and Google Charts to represent data. For the backend, we used IBM Watson for language tone analysis and speech to text and Vokaturi to analyze emotional tone. The Node.js server built on the express framework is used to call the IBM Watson APIs and a Python script is used to interface with Vokaturi.
## Challenges we ran into
We had to learn CSS and JavaScript to overcome some technical difficulties while working on the front end, and we also ran into problems deploying our Node.js server on the backend. Time constraints limited the amount and type of graphs we could show; otherwise, we would have wanted more informative charts to represent our output.
## Accomplishments that we're proud of
Exploring Vokaturi and IBM Watson was very rewarding as we got to implement the basic features of both while also getting a glimpse of their full potential.
Although using Bootstrap and Javascript were frustrating, we are proud of how much we learned and the final product we were able to create using them.
## What we learned
We learned how to use Vokaturi, IBM Watson and Bootstrap. Beyond these, we have also learned to embrace our challenges with passion, stamina, and willingness to learn and improve ourselves.
## What's next?
We envision expanding our app to respond to real-world applications. To do this, we need to improve our use of the analysis APIs. IBM Watson's speech to text can differentiate between different speakers, and Vokaturi can be used to analyze smaller portions of the entire audio file. Including these features would allow us to detect the emotion of each speaker at a specific time. To represent these improvements, we will also need to make our data visualization more robust by using more informative graphs.
|
## Inspiration
Our biggest inspiration came from our grandparents, who often felt lonely and struggled to find help. Specifically, one of us has a grandpa with dementia. He lives alone and finds it hard to receive help, since most of his relatives live far away and he has reduced motor skills. Knowing this, we were determined to create a product -- and a friend -- that would be able to help the elderly with their health while also being fun to be around! Ted makes this dream a reality, transforming lives and promoting better welfare.
## What it does
Ted is able to...
* be a little cutie pie
* chat with the speaker, with reactive movements based on the conversation (waves at you when greeting, idle bobbing)
* read heart rate, determine health levels, and provide help accordingly
* drive towards a person in need using the RC car, utilizing object detection and speech recognition
* dance to Michael Jackson
## How we built it
* popsicle sticks, cardboards, tons of hot glue etc.
* sacrifice of my fingers
* Play.HT and Claude 3.5 Sonnet
* YOLOv8
* AssemblyAI
* Selenium
* Arduino, servos, and many sound sensors to determine the direction of the speaker
## Challenges we ran into
One challenge we ran into during development was making sure every part was secure. With limited materials, we found that parts would often shift or move out of place after a few test runs, which was frustrating to keep fixing. However, instead of retrying the same techniques, we persevered by trying new methods of attachment, which eventually led to a successful solution!
Having two speech-to-text models open at the same time caused some issues (which I still haven't fixed...). Creating reactive movements was difficult too, but we achieved it through the use of keywords and a long list of preset moves.
## Accomplishments that we're proud of
* Fluid head and arm movements of Ted
* Very pretty design on the car, poster board
* Very snappy response times with realistic voice
## What we learned
* power of friendship
* don't be afraid to try new things!
## What's next for Ted
* integrating more features to enhance Ted's ability to aid peoples' needs --> ex. ability to measure blood pressure
|
losing
|
## Inspiration
When I was in my second year of college, I met with an accident and unfortunately broke my leg. I had a lot of difficulty walking, and I was sent to a rehabilitation center where I saw how people without legs cope with this situation. Seeing the situation of these amputees, I felt that as an engineering student I could contribute something to this cause. During the hackathon, I thought of making a powered prosthetic leg to help amputees around the world so they feel like they have a natural limb, using machine learning techniques to improve the efficiency of the model and make it more adaptive.
## What it does
We thought of using machine learning to learn the walking pattern of an individual.
We first used myoelectric sensors on the leg to get datasets.
We then used Random Forest ML algorithm to train our Arduino microcontroller.
We further used various adaptive techniques to make the system more efficient.
We also designed the prosthetic leg and stimulated its working.
## How we built it
We used sensor inputs from IMUs, piezo and EMG sensors, and fed this input to a machine learning algorithm from a microcontroller. The ML model classifies the current phase of the walking cycle (the gait phase) and initiates control of the actuator for the next gait phase. We are also using adaptive algorithms to make the model adapt to changes in walking speed, changes in motion (walking, climbing, running), and sudden falls or jerks during motion. We created a design for the leg and simulated it with real-time data, implementing all of the aforementioned algorithms on the data we acquired from sensors.
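To illustrate the classification step only (this is not the team's actual Random Forest), here is a toy nearest-centroid gait-phase classifier in Python; the feature names and centroid values are invented for the sketch:

```python
# Toy stand-in for the Random Forest: a nearest-centroid classifier over
# (EMG activation, IMU pitch, heel pressure) feature vectors.
# Centroid values are illustrative, not real training data.
GAIT_CENTROIDS = {
    "heel_strike": (0.8, -5.0, 0.9),
    "mid_stance":  (0.4,  0.0, 0.6),
    "toe_off":     (0.9, 12.0, 0.1),
    "swing":       (0.2,  8.0, 0.0),
}

def classify_gait_phase(features):
    """Return the gait phase whose centroid is closest in Euclidean
    distance to the incoming sensor feature vector."""
    def dist2(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(GAIT_CENTROIDS, key=lambda phase: dist2(GAIT_CENTROIDS[phase]))
```

In the real system, the predicted phase would drive the actuator command for the next gait phase.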
## Challenges we ran into
The prosthetic leg was:
* Unable to adapt to changes in speed
* Unable to adjust to sudden jerks or movements
* Unable to distinguish between different types of movement, like walking and climbing
* Unable to adjust to the individual's movement pattern; in fact, it is the other way round
## Accomplishments that we're proud of
We got real-time walking data for the model from a hackMIT volunteer's leg. (We thank Jessica for helping us out)
We have successfully implemented the machine learning and adaptive algorithms. We also simulated a model of the leg for inputs we acquired from sensors.
## What we learned
We learned a lot about machine learning and its application in prosthetic legs
## What's next for Bionic Leg
* Manufacturing the leg
* Integrating sensors
* Initial prototyping
* Testing with pilots
* Pilot-specific optimisation
* Final prototype
* Product
* Market launch
|
## Inspiration
We wanted to create something that helped other people. We had so many ideas, yet couldn't stick to one. Luckily, we ended up talking to Phoebe(?) from Hardware, who talked about how using textiles would be great in a project. Something clicked, and we started brainstorming ideas. We ended up with this project, which could help a lot of people in need, including friends and family close to us.
## What it does
Senses the orientation of your hand and outputs either a key press, a mouse movement, or a mouse press. What it outputs is completely up to the user.
## How we built it
Sewed a glove, attached a gyroscopic sensor, wired it to an Arduino Uno, and programmed it in C# and C++.
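The project itself was written in C# and C++, but the orientation-to-input mapping can be sketched in Python. The bindings, dead zone, and threshold values here are hypothetical, standing in for whatever the user configures:

```python
# Hypothetical orientation bindings -- the actual glove lets the user
# map each orientation to any key press, mouse move, or mouse press.
BINDINGS = {
    "tilt_forward": "W",
    "tilt_back":    "S",
    "tilt_left":    "A",
    "tilt_right":   "D",
}

def classify_orientation(pitch_deg, roll_deg, dead_zone=15.0):
    """Map gyro pitch/roll (degrees) to a binding name, or None inside
    the dead zone so small hand tremors don't fire spurious inputs."""
    if abs(pitch_deg) >= abs(roll_deg):
        if pitch_deg > dead_zone:
            return "tilt_forward"
        if pitch_deg < -dead_zone:
            return "tilt_back"
    else:
        if roll_deg > dead_zone:
            return "tilt_right"
        if roll_deg < -dead_zone:
            return "tilt_left"
    return None
```

Each classified orientation would then be looked up in `BINDINGS` and sent over the serial port as the corresponding input event.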
## Challenges we ran into
Limited resources because certain hardware components were out of stock, time management (because of all the fun events!), Arduino communication through the serial port
## Accomplishments that we're proud of
We all learned new skills, like sewing, coding in C++, and programming with the Arduino to communicate with other languages, like C#. We're also proud of the fact that we actually fully completed our project, even though it's our first hackathon.
## What we learned
~~how 2 not sleep lolz~~
Sewing, coding, how to wire gyroscopes, sponsors, DisguisedToast winning Hack the North.
## What's next for this project
We didn't get to add all the features we wanted, due to both hardware limitations and time limitations. Some features we would like to add are the ability to save and load configs, automatic input setup, making the glove wireless, and adding a touch sensor to it.
|
## Inspiration
GeoGuesser is a fun game which went viral in the middle of the pandemic, but after we had played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations, in addition to exciting trivia like movies and monuments, for that extra hit of dopamine when you get the right answers!
## What it does
The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided a picture from which you have to guess the location and the bit of trivia associated with it, like the name of the movie from which we selected the location. You get points for how close you are to the location and whether you got the trivia correct.
## How we built it
We used the *discord.py* library for actually coding the bot and interfacing it with discord. We stored our playlist data in external *excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* python libraries for accessing the google maps streetview APIs.
## Challenges we ran into
For initially storing the data, we thought to use a playlist class while storing the playlist data as an array of playlist objects, but instead used excel for easier storage and updating. We also had some problems with the Google Maps Static Streetview API in the beginning, but they were mostly syntax and understanding issues which were overcome soon.
## Accomplishments that we're proud of
Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the Haversine Formula for Distances on Spheres was also an accomplishment we're proud of.
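The Haversine-based scoring can be sketched in Python; the point values and decay constant below are illustrative, not the bot's real tuning:

```python
from math import radians, sin, cos, asin, sqrt, exp

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def guess_points(guess, answer, max_points=5000, scale_km=2000.0):
    """Exponential decay: full points for a perfect guess, falling off
    smoothly with distance. (Constants are illustrative.)"""
    d = haversine_km(*guess, *answer)
    return round(max_points * exp(-d / scale_km))
```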
## What we learned
We learned better syntax and practices for writing Python code. We learnt how to use the Google Cloud Platform and Streetview API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about Human Computer Interaction as designing an interface for gameplay was rather interesting on Discord.
## What's next for Geodude?
Possibly adding more topics, and refining the loading of streetview images to better reflect the actual location.
|
losing
|
## Inspiration
We've found that when we are developing an app at lightning speed, it can be a struggle to keep team members and beta testers on the newest build. Existing solutions entail a lot of friction, requiring manual steps to install updated builds of apps and hours or days of approval time for new builds. When testing on a large fleet of devices, a solution that works instantly and completely invisibly in the background would be a huge improvement. That's what we built with Slipstream.
## What it does
Slipstream continually delivers updated app builds to enrolled iOS and Android devices. As soon as a developer pushes a new commit to a designated git branch, the app is built automatically on an Azure cloud server. A push notification is sent to all devices, which instantly install the new build of the app without any user interaction required.
## How we built it
Slipstream uses git and Jenkins to receive new commits and fire off builds for Android and iOS. Once the app is built and all tests have passed, it uses Parse to send a push notification to the Slipstream updater on all enrolled iOS and Android devices. The Slipstream updater seamlessly installs the new version of app within seconds, completely invisible to the user.
## Challenges we ran into
iOS is naturally a very locked-down environment. Hours after the Android updater was working end-to-end, we were still struggling to get the iOS updater to achieve our stated goals. Thanks to the help of fellow PennApps hacker and iOS expert Conrad Kramer, we were able to reverse-engineer a private, hidden API on iOS that did exactly what we wanted!
## Accomplishments that we are proud of
We believe this is the first truly automatic solution for continually updating apps on iOS outside of the App Store. By leveraging private APIs - information that isn't available anywhere on the web - we can do what existing solutions can't.
## What we learned
Many behind-the-scenes tricks on iOS to make it do things it was never intended to do :)
## What's next for Slipstream
We want to make Slipstream available to any team to eliminate the friction of updating apps on development and test devices. This will involve writing clear and complete documentation and building out our web service to allow anyone to Slipstream their app.
|
# Aqueduct
 [](https://opensource.org/licenses/MIT)
[](http://makeapullrequest.com)
## Inspiration
The convenience of the internet has become essential to us in recent years. Despite this, billions of people still do not have access to the internet on the go. Our SMS client allows these users to access current news, weather, and stocks, as well as encyclopedia knowledge, and even perform Google searches, all through text messages.
## What it does
Allows users without internet access to retrieve concise information on current news, weather, stocks, encyclopedia knowledge and more through text messages.
## How we built it
The SMS client is fundamentally built upon Twilio, Node.js, Express, RiveScript, and MongoDB. This allowed us to set up a webhook that Twilio would interact with while maintaining a dynamic chat using user sessions, letting us expand beyond a simple command interface and allow for conversations with the bot.
## Challenges we ran into
Our biggest hurdles were slow host server speed when receiving and sending text messages, as well as some team members' unfamiliarity with the language and environment. Despite the learning curve, we worked hard to adapt to new and unknown challenges under a time constraint. In the end, we managed to learn how to deploy the client onto Google Cloud to speed up the server, and also gained more in-depth knowledge of JavaScript listeners.
## Accomplishments that we're proud of
All team members were very quick to adapt to problems and hurdles, and learned and applied new material very quickly. Additionally, communication was very efficient and concise, resulting in no conflicts during teamwork.
## What we learned
* Gained a more in-depth understanding of JavaScript listeners
* Learned to deploy Google Cloud servers
* Gained knowledge of the streamlining process when working in a group on git
|
## Inspiration
Companies lack insight into their users, audiences, and marketing funnel.
This is an issue I've run into on many separate occasions. Specifically,
* while doing cold marketing outbound, I need better insight into the key variables of successful outreach
* while writing a blog, I have no idea who reads it
* while triaging inbound, I don't know which users to prioritize
Given a list of user emails, Cognito scrapes the internet finding public information about users and the companies they work at. With this corpus of unstructured data, Cognito allows you to extract any relevant piece of information across users. An unordered collection of text and images becomes structured data relevant to you.
## A Few Example Use Cases
* Startups going to market need to identify where their power users are and their defining attributes. We allow them to ask questions about their users, helping them define their niche and better focus outbound marketing.
* SaaS platforms such as Modal have trouble with abuse. They want to ensure people joining are not going to abuse it. We provide more data points to make better judgments such as taking into account how senior of a developer a user is and the types of companies they used to work at.
* VCs such as YC have emails from a bunch of prospective founders and highly talented individuals. Cognito would allow them to ask key questions such as what companies are people flocking to work at and who are the highest potential people in my network.
* Content creators such as authors on Substack looking to monetize their work have a much more compelling case when coming to advertisers with a good grasp on who their audience is.
## What it does
Given a list of user emails, we crawl the web, gather a corpus of relevant text data, and allow companies/creators/influencers/marketers to ask any question about their users/audience.
We store these data points and allow for advanced querying in natural language.
[video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0)
## How we built it
We orchestrated 3 ML models across 7 different tasks in 30 hours:
* search results person info extraction
* custom field generation from scraped data
* company website details extraction
* facial recognition for age and gender
* NoSQL query generation from natural language
* crunchbase company summary extraction
* email extraction
This culminated in a full-stack web app with batch processing via async pubsub messaging. Deployed on GCP using Cloud Run, Cloud Functions, Cloud Storage, PubSub, Programmable Search, and Cloud Build.
## What we learned
* how to be really creative about scraping
* batch processing paradigms
* prompt engineering techniques
## What's next for Cognito
1. predictive modeling and classification using scraped data points
2. scrape more data
3. more advanced queries
4. proactive alerts
[video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0)
|
losing
|
## Inspiration ⚡️
Given the ongoing effects of COVID-19, we know lots of people don't want to spend more time than necessary in a hospital. We wanted to be able to skip a large portion of the waiting process and fill out the forms ahead of time from the comfort of our home so we came up with the solution of HopiBot.
## What it does 📜
HopiBot is an accessible, easy-to-use chatbot designed to make the process of admitting patients more efficient — transforming basic in-person processes into digital ones, saving not only your time but that of the doctors and nurses as well. A patient uses the bot to fill out their personal information, and once they submit, the bot uses the inputted mobile phone number to send a text message with the current wait time until check-in at the nearest hospital. As pandemic measures begin to ease, HopiBot will allow hospitals to socially distance non-emergency patients, significantly reducing exposure and time spent around others, as people can enter the hospital at or close to their check-in time. In addition, this would reduce the potential exposure (to COVID-19 and other transmissible airborne illnesses) of other hospital patients who could be immunocompromised or more vulnerable.
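A back-of-envelope version of such a wait-time estimate can be sketched in Python. The model and its inputs are purely illustrative; a production estimate would also account for travel time, appointment type, and patient volume:

```python
def estimate_wait_minutes(patients_ahead, avg_appointment_min, active_staff):
    """Naive estimate: total service demand ahead of the patient,
    divided across the staff members currently seeing patients."""
    if active_staff < 1:
        raise ValueError("need at least one active staff member")
    return patients_ahead * avg_appointment_min / active_staff
```

The resulting number of minutes would then be dropped into the text message sent to the patient's phone.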
## How we built it 🛠
We built our project using HTML, CSS, JS, Flask, Bootstrap, Twilio API, Google Maps API (Geocoding and Google Places), and SQLAlchemy. HTML, CSS/Bootstrap, and JS were used to create the main interface. Flask was used to create the form functions and SQL database. The Twilio API was used to send messages to the patient after submitting the form. The Google Maps API was used to send a Google Maps link within the text message designating the nearest hospital.
## Challenges we ran into ⛈
* Trying to understand and use Flask for the first time
* How to submit a form and validate at each step without refreshing the page
* Using new APIs
* Understanding how to use an SQL database from Flask
* Breaking down a complex project and building it piece by piece
## Accomplishments that we're proud of 🏅
* Getting the form to work after much deliberation of its execution
* Being able to store and retrieve data from an SQL database for the first time
* Expanding our hackathon portfolio with a completely different project theme
* Finishing the project within a tight time frame
* Using Flask, the Twilio SMS API, and the Google Maps API for the first time
## What we learned 🧠
Through this project, we were able to learn how to break a larger-scale project down into manageable tasks that could be done in a shorter time frame. We also learned how to use Flask, the Twilio API, and the Google Maps API for the first time, considering that it was very new to all of us and this was the first time we used them at all. Finally, we learned a lot about SQL databases made in Flask and how we could store and retrieve data, and even try to present it so that it could be easily read and understood.
## What's next for HopiBot ⏰
* Since we have created the user side, we would like to create a hospital side to the program that can take information from the database and present all the patients to them visually.
* We would like to have a stronger validation system for the form to prevent crashes.
* We would like to implement an algorithm that can more accurately predict a person’s waiting time by accounting for the time it would take to get to the hospital and the time a patient would spend waiting before their turn.
* We would like to create an AI that is able to analyze a patient database and able to predict wait times based on patient volume and appointment type.
* Along with a hospital side, we would like to send update messages that warns patients when they are approaching the time of their check-in.
|
## Inspiration
Many hackers cast their vision forward, looking for futuristic solutions for problems in the present. Instead, we cast our eyes backwards in time, looking to find our change in restoration and recreation. We were drawn to the ancient Athenian Agora -- a marketplace; not one where merchants sold goods, but one where thinkers and orators debated, discussed, and deliberated pressing socio-political ideas and concerns. The foundation of community engagement in its era, the premise of the Agora survived in one form or another over the years in the various public spaces that have been focal points for communities to come together -- from churches to community centers.
In recent years, however, local community engagement has dwindled with the rise in power of modern technology and the Internet. When you're talking to a friend on the other side of the world, you're not talking a friend on the other side of the street. When you're organising with activists across countries, you're not organising with activists in your neighbourhood. The Internet has been a powerful force internationally, but Agora aims to restore some of the important ideas and institutions that it has left behind -- to make it just as powerful a force locally.
## What it does
Agora uses users' mobile phone's GPS location to determine the neighbourhood or city district they're currently in. With that information, they may enter a chat group specific to that small area. Having logged-on via Facebook, they're identified by their first name and thumbnail. Users can then chat and communicate with one another -- making it easy to plan neighbourhood events and stay involved in your local community.
## How we built it
Agora coordinates a variety of public tools and services. The application was developed using Android Studio (Java, XML). We began with the Facebook login API, which we used to identify and provide some basic information about our users. That led directly into the Google Maps Android API, which was a crucial component of our application. We drew polygons onto the map corresponding to various local neighbourhoods near the user. For the detailed and precise neighbourhood boundary data, we relied on StatsCan's census tracts, exporting the data as .gml and then parsing it via Python. With this completed, we had almost 200 polygons -- easily covering Hamilton and the surrounding areas -- and a total of over 50,000 individual vertices. Upon pressing the map within the borders of any neighbourhood, the user will join that area's respective chat group.
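The core "which neighbourhood polygon contains the user?" test is a standard ray-casting point-in-polygon check. The app is written in Java, but the idea can be sketched in Python:

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: does (lat, lon) fall inside a neighbourhood
    polygon given as a list of (lat, lon) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        # does a horizontal ray from the point cross this edge?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside
```

Each map tap is tested against the candidate neighbourhood polygons; the first containing polygon determines the chat group to join.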
## Challenges we ran into
The chat server was our greatest challenge; in particular, large amounts of structural work needed to be implemented on both the client and the server in order to set it up. Unfortunately, the other challenges we faced while developing the Android application diverted attention and delayed progress on it. The design of the chat component was also closely tied to our other components, such as receiving the channel ID from the map's polygons and retrieving Facebook-login results to display user identification.
A further challenge, and one generally unexpected, came in synchronizing our work as we each tackled various aspects of a complex project. With little prior experience in Git or Android development, we found ourselves quickly in a sink-or-swim environment; learning about both best practices and dangerous pitfalls. It was demanding, and often-frustrating early on, but paid off immensely as the hack came together and the night went on.
## Accomplishments that we're proud of
1) Building a functioning Android app that incorporated a number of challenging elements.
2) Being able to make something that is really unique and really important. This is an issue that isn't going away and that is at the heart of a lot of social deterioration. Fixing it is key to effective positive social change -- and hopefully this is one step in that direction.
## What we learned
1) Get Git to Get Good. It's incredible how much of a weight off our shoulders it was to not have to worry about file versions or maintenance, given the sprawling size of an Android app. Git handled it all, and I don't think any of us will be working on a project without it again.
## What's next for Agora
First and foremost, the chat service will be fully expanded and polished. The next most obvious next step is towards expansion, which could be easily done via incorporating further census data. StatsCan has data for all of Canada that could be easily extracted, and we could rely on similar data sets from the U.S. Census Bureau to move international. Beyond simply expanding our scope, however, we would also like to add various other methods of engaging with the local community. One example would be temporary chat groups that form around given events -- from arts festivals to protests -- which would be similarly narrow in scope but not constrained to pre-existing neighbourhood definitions.
|
## Inspiration
We were motivated by the business need of a meeting cost and analysis tool.
## What it does
Takes salary levels as input and computes cost of meetings for the company based on participants and length of time of the meeting. Aggregates info and displays visualizations by team or role.
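The core computation can be sketched in Python; the roles and hourly rates below are placeholders for whatever salary levels a company enters:

```python
from collections import defaultdict

# Illustrative $/hour rates derived from entered salary levels.
ROLE_RATES = {"engineer": 75.0, "manager": 90.0, "designer": 65.0}

def meeting_cost(attendees, minutes):
    """attendees: list of (name, role); returns the meeting's total
    cost in dollars, prorating each hourly rate over its length."""
    return sum(ROLE_RATES[role] for _, role in attendees) * minutes / 60.0

def cost_by_role(attendees, minutes):
    """Aggregate the same meeting's cost per role, ready to feed
    into a per-team or per-role visualization."""
    totals = defaultdict(float)
    for _, role in attendees:
        totals[role] += ROLE_RATES[role] * minutes / 60.0
    return dict(totals)
```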
## How we built it
Created a web UI integrating AWS services such as DynamoDB, Lambda, and S3, in conjunction with Python scripts
## Challenges we ran into
Occasional internet connectivity challenges
## Accomplishments that we're proud of
Minimum product achieved
## What we learned
We enjoyed learning about full stack web development
## What's next for Meeting Cost Analysis Tool
Customer adoption, implementation, and customization
|
winning
|
## Inspiration
I dreamed about the day we would use vaccine passports to travel long before the mRNA vaccines even reached clinical trials. I was just another individual, fortunate enough to experience stability during an unstable time, having a home to feel safe in during this scary time. It was only when I started to think deeper about the effects of travel, or rather the lack thereof, that I remembered the children I encountered in Thailand and Myanmar who relied on tourists to earn $1 USD a day from selling handmade bracelets. 1 in 10 jobs are supported by the tourism industry, providing livelihoods for many millions of people in both developing and developed economies. COVID has cost global tourism $1.2 trillion USD and this number will continue to rise the longer people are apprehensive about travelling due to safety concerns. Although this project is far from perfect, it attempts to tackle vaccine passports in a universal manner in hopes of buying us time to mitigate tragic repercussions caused by the pandemic.
## What it does
* You can log in with your email and generate a personalised interface with your own and your family's (or other travel companions') vaccine data
* A universal QR code generated after the input of information
* To do list prior to travel to increase comfort and organisation
* Travel itinerary and calendar synced onto the app
* Country-specific COVID related information (quarantine measures, mask mandates etc.) all consolidated in one destination
* Tourism section with activities to do in a city
## How we built it
Project was built using Google QR-code APIs and Glideapps.
## Challenges we ran into
I first proposed this idea to my first team, and it was very well received. I was excited for the project, however little did I know, many individuals would leave for a multitude of reasons. This was not the experience I envisioned when I signed up for my first hackathon as I had minimal knowledge of coding and volunteered to work mostly on the pitching aspect of the project. However, flying solo was incredibly rewarding and to visualise the final project containing all the features I wanted gave me lots of satisfaction. The list of challenges is long, ranging from time-crunching to figuring out how QR code APIs work but in the end, I learned an incredible amount with the help of Google.
## Accomplishments that we're proud of
I am proud of the app I produced using Glideapps. Although I was unable to include more intricate features to the app as I had hoped, I believe that the execution was solid and I’m proud of the purpose my application held and conveyed.
## What we learned
I learned that a trio of resilience, working hard and working smart will get you to places you never thought you could reach. Challenging yourself and continuing to put one foot in front of the other during the most adverse times will definitely show you what you’re made of and what you’re capable of achieving. This is definitely the first of many Hackathons I hope to attend and I’m thankful for all the technical as well as soft skills I have acquired from this experience.
## What's next for FlightBAE
Utilising GeoTab or other geographical software to create a logistical approach to solving the distribution of oxygen in India, as well as other pressing and unaddressed bottlenecks that exist within healthcare. I would also love to pursue a tech-related solution regarding vaccine inequity, as it is a current reality for too many.
|
## Inspiration
Recently, security has come to the forefront of media with the events surrounding Equifax. We took that fear and distrust and decided to make something to secure and protect data such that only those who should have access to it actually do.
## What it does
Our product encrypts QR codes such that, if scanned by someone who is not authorized to see them, they present an incomprehensible amalgamation of symbols. However, if scanned by someone with proper authority, they reveal the encrypted message inside.
## How we built it
This was built using cloud functions and Firebase as our back end and a Native-react front end. The encryption algorithm was RSA and the QR scanning was open sourced.
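To illustrate the core idea -- an RSA-encrypted payload that only an authorized scanner can decode -- here is a deliberately tiny, textbook sketch. The primes and exponents are toy values for illustration only; a real deployment would use a vetted library with proper key sizes and padding, not per-character encryption.

```python
# Toy illustration of the SeQR idea: encrypt the QR payload with RSA so an
# unauthorized scanner sees only gibberish. NOT the app's actual crypto.
def make_toy_keys():
    p, q = 61, 53              # small primes, illustration only
    n = p * q                  # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                     # public exponent, coprime with phi
    d = pow(e, -1, phi)        # private exponent (modular inverse)
    return (e, n), (d, n)

def encrypt(payload, pub):
    e, n = pub
    return [pow(ord(ch), e, n) for ch in payload]   # list encoded into the QR

def decrypt(cipher, priv):
    d, n = priv
    return "".join(chr(pow(c, d, n)) for c in cipher)
```

An authorized app holding the private key runs `decrypt` on the scanned numbers; anyone else just sees the raw ciphertext.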
## Challenges we ran into
One major challenge we ran into was writing the back end cloud functions. Despite how easy and intuitive Google has tried to make it, it still took a lot of man hours of effort to get it operating the way we wanted it to. Additionally, making react-native compile and run on our computers was a huge challenge as every step of the way it seemed to want to fight us.
## Accomplishments that we're proud of
We're really proud of introducing encryption and security into this previously untapped market. Nobody to our knowledge has tried to encrypt QR codes before, and being able to segment the data in this way is sure to change the way we look at QR.
## What we learned
We learned a lot about Firebase. Before this hackathon, only one of us had any experience with Firebase and even that was minimal, however, by the end of this hackathon, all the members had some experience with Firebase and appreciate it a lot more for the technology that it is. A similar story can be said about react-native as that was another piece of technology that only a couple of us really knew how to use. Getting both of these technologies off the ground and making them work together, while not a gargantuan task, was certainly worthy of a project in and of itself let alone rolling cryptography into the mix.
## What's next for SeQR Scanner and Generator
Next, if this gets some traction, is to try and sell this product on the marketplace. Particularly for corporations with, say, QR codes used for labelling boxes in a warehouse, such a technology would be really useful to prevent people from gaining unnecessary and possibly debilitating information.
|
## Inspiration
We have all lined up the perfect shot only to have our pictures ruined by an innocent passerby. Our team decided that our beautiful pictures should not be compromised by people in the frame.
## What it does
The application uses a recent deep-learning advancement called Stable Diffusion to recognize and remove people from pictures, naturally filling in the places in the image where the people were.
## How we built it
Python/Streamlit + Resnet-50 + StableDiffusion on GCP
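One intermediate step in a pipeline like this is turning detected person bounding boxes into a binary mask for the inpainting model to fill. The sketch below assumes `(x1, y1, x2, y2)` pixel boxes, as a ResNet-50-based detector might return them; it is not the app's actual code.

```python
# Build a binary inpainting mask (1 = region to regenerate) from a list of
# person bounding boxes. Pure-Python stand-in for the real array pipeline.
def boxes_to_mask(width, height, boxes):
    mask = [[0] * width for _ in range(height)]
    for x1, y1, x2, y2 in boxes:
        for y in range(max(0, y1), min(height, y2)):
            for x in range(max(0, x1), min(width, x2)):
                mask[y][x] = 1
    return mask
```

The mask and original photo are then handed to the diffusion model, which synthesizes plausible background in the masked region.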
## Challenges we ran into
* Python Streamlit integration with GCP database
* Storing data using GCS
* Google Cloud Engine Firewall issues to run our Streamlit app
* Domain name rerouting
* Streamlit stateful user control flow logic
## Accomplishments that we're proud of
* GCP Usage
## What we learned
* Streamlit
* Deep learning
* Cloud Computing
## What's next for Empty World
* Selective removal
* Object removal
|
partial
|
we built an interactive web application that allows users to learn about mail in ballots by scrolling using their hand movements!
|
## Inspiration
To set our goal, we were greatly inspired by the Swiss system, which has proven to be one of the most functional democracies in the world. In Switzerland, there is a free mobile application, VoteInfo, managed by a governmental institution but not linked to any political group, where information about votes and democratic events happening at the national, regional and communal scales is explained in plain language and promoted. The goal is to give the population a deep understanding of current political discussions and therefore to involve everyone in Swiss political life, where every citizen can vote roughly 3 times a year in national referendums to decide the future of their country. We also thought it would be interesting to expand that idea to enable elected representatives, elected staff and media to get a better sense of the needs and desires of a certain population.
Here is a [link](https://www.bfs.admin.ch/bfs/fr/home/statistiques/politique/votations/voteinfo.html) to the swiss application website (in french, german and italian only).
## What it does
We developed a mobile application where anyone over 18 can have an account. After creating their account and entering their information (which will NOT be sold for profit), they will have the ability to navigate through many "causes", on different scales. For example, a McGill student could join the "McGill" group and see many ideas proposed by members of the elected staff, or even by regular students. They could vote for or against those, or they could choose to give visibility to an idea that they believe is important. The elected staff of McGill could then use the data from the votes, plotted in the app in the form of histograms, to see how the McGill community feels about many different subjects. One could also join the "Montreal Nightlife" group. For instance, a non-profit organization with governmental partnerships like [mtl2424](https://www.mtl2424.ca/), which is currently investigating the possibility of extending the alcohol permit fixed to 3 a.m., could get a good understanding of how the Montreal population feels about this idea by looking at opinions broken down by the voters' age, their neighbourhood, or even both!
## How we built it
We used Figma for the graphic interface, and Python (using the Spyder IDE) for the data analysis and graph plotting, with the Matplotlib and NumPy libraries.
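The aggregation behind the histograms could be sketched as below. The vote record fields (`age`, `choice`) and the ten-year brackets are assumptions for illustration, not the app's actual schema.

```python
# Hypothetical tally of votes on a proposal by age bracket; the resulting
# counts would feed a Matplotlib bar chart / histogram.
from collections import Counter

def bracket(age):
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def tally_by_age(votes):
    counts = Counter()
    for v in votes:
        counts[(bracket(v["age"]), v["choice"])] += 1
    return counts
```

The same idea extends to grouping by neighbourhood or occupation, or by combinations of fields.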
## Challenges we ran into
We tried to build a dynamic interface where one could easily filter graphs and histograms by certain conditions, i.e. age, gender, occupation... However, implementing such deep features proved too complicated and time-consuming for our level of understanding of software design, so we abandoned that aspect.
Also, as neither of us had any real background in software design, building the app interface was very challenging.
## Accomplishments that we're proud of
We are really proud of the idea in itself, as we really and honestly believe that, especially in small communities like McGill, it could have a real positive impact. We put a lot of effort into building a realistic and useful tool that we, as students and members of different communities, would really like to have access to.
## What we learned
The thing we mainly learned was how to create a mobile app interface. As stipulated before, it was a real challenge, as neither of us had any experience in software development, so we had to learn while creating our interface.
As we were limited in time and knowledge, we also learned how to understand the priorities of our projects and to focus on them in the first place, and only afterward try to add some features.
## What's next for Kairos
The first step would be to implement our application's back-end and link it to the front-end.
In the future, we would really like to create a nice, dynamic and clean UI, to be attractive and easy to use for anyone, of any age, as the main problem with implementing technological tools for democracy is that the seniors are often under-represented.
We would also like to implement a lot of features, like a special registration menu for organizations to create groups, dynamic maps, discussion channels etc...
Probably the largest challenge in the upcoming implementations will be to find a good way to ensure each user has only one account, to prevent pollution in the sampling.
|
## Inspiration
Critical thinking skills are now more important than ever. To access all of the stored knowledge available to us, it is important that we learn to ask better questions.
Einstein is quoted as saying “If I had an hour to answer a question and my life depended on it, I'd spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”
The right question can unlock the right answer, sometimes very quickly, but you must first ask the right question.
## What it does
The software takes a user's question, sends it to IBM Watson's Developer Cloud, and returns a classification of the question, which then provides custom coaching for the user on how to improve their question.
## How we built it
Late nights and Red Bull. We explored different machine learning tools, both Ruby gems and tools from other vendors.
## Challenges we ran into
Developing a classifier depends on a good training set, so we had to create one rather hastily from scratch. Then we had to tweak the model until we could get consistent and reliable results.
We spent a lot of time learning different APIs, and we went down a couple of different rabbit holes that led nowhere.
## Accomplishments that we're proud of
## What we learned
The importance of a good training set. How to use the Watson API and IBM's Bluemix platform.
## What's next for Ask Better
Continue to refine the algorithm to add different features and classifiers to improve the coaching. We also want to add a question history so that users can see the ways that their questions improve over time.
|
partial
|
# Feta
***40% of all food in the United States goes to waste.*** … but you’re still randomly hungry for literally anything at 11pm. What if there was an app that would let anyone share their free food tips across campuses, boardrooms, and cities?
**Feta** lets you do this, providing a way for socially-verifiable, live information concerning leftover free food to be shared by clubs, shops, restaurants, and individuals to the general community. No longer will people rely on non-transparent, confusing, and out-of-date GroupMes or word-of-mouth for price-free sustenance.
## What it does
Users submit geo-located posts with images describing an instance of free food. ML classifiers automatically tag posts with relevant info. Other users see the most relevant and closest posts both on a live map and a sidebar list, and can collectively decide if they are real or not. Over time, this will create a mutually-beneficial ecosystem based on algorithmic trust and, of course, late-night hunger.
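The "closest posts" ranking above could be computed with a great-circle distance between the user and each post. The sketch below assumes posts carry `(lat, lon)` in degrees; this is an illustration, not Feta's actual schema.

```python
# Haversine distance in kilometres; posts would be sorted by this value
# relative to the user's location to populate the nearby-food list.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

Sorting posts by `haversine_km(user_lat, user_lon, post_lat, post_lon)` yields the sidebar ordering.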
## How we built it
Feta is currently a mobile-first web app using [Next.js](https://nextjs.org/) and [Chakra UI](https://chakra-ui.com/) on the frontend, and [Convex](https://www.convex.dev/) (a serverless app platform) on our backend. We use Auth0 and Google Sign-in to authenticate users without burdensome and unsecure manual registration. The web app is hosted on Netlify.
## Features
* Web notifications to notify users of nearby free food.
* Graphical ratings system to dynamically boost good free food and eliminate fraud on the platform.
* Easy user authentication and account creation with Auth0.
* Machine-learning classification models increase user ease-of-submission of new food items on the map.
## What's next for Feta
* SMS notifications to facilitate 24/7 connected snacking without having to check Feta regularly.
* More refined AI classification model using real-world collected data.
* Support for organizations (restaurants, food banks etc.) for recurring food events.
|
## Inspiration
Being frugal students, we all wanted to create an app that would tell us what kind of food we could find around us based on a budget that we set. And so that’s exactly what we made!
## What it does
You give us a price that you want to spend and the radius that you are willing to walk or drive to a restaurant; then voila! We give you suggestions based on what you can get for that price at different restaurants, providing all the menu items with price and calculated tax and tips! We keep the user history (the food items they chose), and by doing so we open the door to crowdsourcing massive amounts of user data, as well as the opportunity for machine learning, so that we can give better suggestions for the foods the user likes most!
But we are not gonna stop here! Our goal is to implement the following in the future for this app:
* We can connect the app to delivery systems to get the food for you!
* Inform you about the food deals, coupons, and discounts near you
## How we built it
### Back-end
We have both an iOS and Android app that authenticates users via Facebook OAuth and stores user eating history in the Firebase database. We also made a REST server that conducts API calls (using Docker, Python and nginx) to amalgamate data from our targeted APIs and refine them for front-end use.
### iOS
Authentication uses Facebook OAuth with Firebase. The UI is built with native iOS elements. API calls are sent to Soheil's backend server as JSON over HTTP. The Google Maps SDK displays geolocation information, and Firebase stores user data in the cloud with the capability of updating multiple devices in real time.
### Android
The Android application is implemented with a great deal of material design while utilizing Firebase for OAuth and database purposes. The application utilizes HTTP POST/GET requests to retrieve data from our in-house backend server, and uses the Google Maps API and SDK to display nearby restaurant information. The Android application also prompts the user for a rating of the visited stores based on how full they are; our goal was to build a system that would incentivize food places to produce the highest "food per dollar" rating possible.
## Challenges we ran into
### Back-end
* Finding APIs to get menu items is really hard at least for Canada.
* An unknown client kept continuously pinging our server and used up a lot of our bandwidth
### iOS
* First time using OAuth and Firebase
* Creating Tutorial page
### Android
* Implementing modern material design with deprecated/legacy Maps APIs and other various legacy code was a challenge
* Designing Firebase schema and generating structure for our API calls was very important
## Accomplishments that we're proud of
**A solid app for both Android and iOS that WORKS!**
### Back-end
* Dedicated server (VPS) on DigitalOcean!
### iOS
* Cool looking iOS animations and real time data update
* Nicely working location features
* Getting latest data from server
## What we learned
### Back-end
* How to use Docker
* How to setup VPS
* How to use nginx
### iOS
* How to use Firebase
* How OAuth works
### Android
* How to utilize modern Android layouts such as the Coordinator, Appbar, and Collapsible Toolbar Layout
* Learned how to optimize applications when communicating with several different servers at once
## What's next for How Much
* If we get a chance we all wanted to work on it and hopefully publish the app.
* We were thinking to make it open source so everyone can contribute to the app.
|
**Plates** is a web-based platform dedicated to enriching lives by bringing people together through the universal language of food. Our mission is to create meaningful social connections by facilitating shared dining experiences that cater to diverse tastes and preferences. By integrating with the Yelp API for restaurant selections and using Stripe for identity verification, we prioritize local dining options to support community businesses and promote healthier lifestyles. With every meal, Plates offers an opportunity to discover, connect, and savor, making it more than just a web app—it's a movement towards a more connected, healthful, and vibrant community, one plate at a time.
## Inspiration
The concept of Plates was inspired by a freshmen year tradition at Stanford, where Resident Assistants organized 'Plates' or platonic dates, pairing people within the dorm to foster friendships over coffee or meals. This experience highlighted the power of food in bringing people together, not for romance, but for genuine connections and shared experiences. In today's app-driven world, where the focus often leans towards romantic connections, we saw an opportunity to fill a gap—creating a space for individuals seeking companionship and community through the joy of dining together.
## What it does
Plates connects individuals looking to explore culinary delights with like-minded dining companions. Leveraging the Yelp API, users can discover local restaurants and arrange group meals, ensuring a variety of options to suit all tastes. The integration of Stripe ensures identity verification, allowing for peace of mind when attending events. Whether it's a casual brunch or a gourmet dinner, Plates makes organizing and joining dining experiences effortless, fostering a sense of community with every bite.
## How we built it
Plates was developed as a web application using **React** for the frontend to provide a dynamic and responsive user interface. The backend is powered by **Flask**, offering a lightweight yet powerful framework for server-side operations. User data and event information are managed through a **SQLite** database, chosen for its simplicity and reliability. The integration with **Yelp API** enriches the app with diverse dining options, while **Stripe API** facilitates secure ID verification and transactions, ensuring user safety and trust.
## Challenges we ran into
A significant challenge we faced was ensuring user safety, particularly when events involve personal interactions. To address this, we implemented ID certification through the Stripe API during registration, adding an extra layer of security and verification. Balancing the social aspect of the app with functionality, such as seamless restaurant bookings and event management, also posed challenges, requiring careful design and integration of third-party APIs.
## Accomplishments that we're proud of
We're proud of creating a platform that not only encourages culinary exploration but also emphasizes community building and safety.
## What we learned
Working on Plates has been a transformative experience, deepening our understanding of how technology can facilitate real-world connections and community building. The project taught us the intricacies of crafting engaging user experiences and the importance of thoughtful application design in fostering meaningful social interactions. It highlighted the value of collaboration, bringing together diverse perspectives to overcome challenges and innovate. Through Plates, we've seen firsthand the potential of digital platforms to not only connect people but also to enrich lives through shared culinary adventures, reinforcing our belief in the positive impact technology can have on society.
## What's next for Plates
Moving forward, we plan to enhance Plates with features like event rating systems and more robust community tools to share experiences and reviews. Expanding our restaurant database and exploring partnerships with local eateries for exclusive dining events are also on our roadmap. Our goal is to make Plates a global community that celebrates the art of dining and the joy of connection, one meal at a time.
|
partial
|
## Inspiration
As a team of backend engineers, we use ChatGPT a LOT in our frontend engineering. However, the process often feels tedious as we flip between our site render, VSCode, and ChatGPT -- especially if our changes aren't rendering properly. Hence, we wanted to make a more convenient, interactive tool that would allow us to simply click on different site components to directly prompt changes.
## What it does
We hope that this tool can help teams with lots of design savvy but limited engineering capacity. We imagine a future in which teams are able to click on site components and prompt Rendr to make desired updates. This interface is no-code and prompt-guided, which is perfect for tedious tasks like formatting grids/tables, and centering divs.
Developers can click/highlight a specific component in their local site preview. From there, a text input will appear to prompt any changes specific to that component. Another button popup gives AI suggestions for possible images (DALL-E generated) and replacement text.
## How we built it
There are three layers to the project:
**1) Backend (Python/Flask)**
* Python script with the OpenAI API (using the gpt-3.5-turbo model) for image generation and chat completion
* Modifies HTML/CSS data by splicing updated GPT-generated code snippets
* Can download updated HTML/CSS to save changes
**2) Live-rendered frontend (HTML/CSS)**
* Rendered inside ReactJS, represents source code
* Modified by the backend script
**3) Interactive Overlay (ReactJS)**
* Layer on top of the actual frontend (IFrame used to display HTML inside ReactJS wrapper)
* Overlays hover functionality on specific HTML elements; This can identify individual components as well as their children elements; Source code snippet can then be sent to backend for changes/suggestions
* Maintains state string of relevant HTML and can stitch together changes
## Challenges we ran into
We had an issue setting up PostgreSQL on our local systems. For some reason, if we had prior installations of it on our laptops, we had lots of issues. We were planning a semantic search of community-uploaded components (using PostgreSQL, Lantern, vector embeddings); however, due to these issues as well as time constraints, we were unable to implement this functionality. However, we hope to add this functionality once we can sort out these bugs.
## Accomplishments that we're proud of
* Being able to identify which part of the HTML tree contained a component, and stitching it back into the HTML in the DOM, was a huge concern that we eventually conquered
* Only spent 15 cents on OpenAI backend costs
* Slept in the Dome last night
## What we learned
Chatting with the mentors was extremely helpful and serendipitous. We originally planned a *much* more complicated approach for modifying the HTML, but after speaking with a mentor, we were able to come up with a much simpler approach for our demo.
## What's next for Rendr Dev
* Implement community-sharing of components. This can be turned into a marketplace in which users can pre-select popular designs (ex: well-designed buttons, advanced forms with backend functionality, etc.).
* Implement a few-shot Generative AI learning approach for data analysis of components. We currently use GPT to score the "quality" of a component, but we want to expand on this functionality. In the future, we can collect more examples to fine tune the model to have our own custom UI/UX analysis (accessibility, visibility, etc.).
|
## Inspiration
Our team came into the hackathon excited to build a cool full-stack app. However, we quickly realized that most of our time would be eaten up building the frontend, rather than implementing interesting features and functionality. We started working on a command line tool that would help us build frontend components.
We quickly realized that this project was incredibly powerful.
As we continued building and improving the functionality, our command line tool went from a simple helper for components to a powerful engine that could create and import files, import dependencies on its own, handle component creation/URL routing, find and fix its own syntax/compiler errors, and add styling, all with minimal input from the developer.
With Spectral, developers can instantly build an MVP for their ideas, and even fully develop their final React apps. Those with less experience can use Spectral to bring their ideas to life, while having access and control over the bug-free React codebase, unlike most no-code solutions.
## What it does
**Spectral takes your plain English prompts, generates React code, and compiles it. It can install dependencies, create components & files, add styling, and catch its own syntax/compilation errors, all without explicit user intervention.**
- Takes plain English prompts and parses through the codebase for context to generate an action plan
- Once the user confirms the action plan, our custom-built Python engine applies changes to the React codebase
- Our engine is capable of creating components & files, importing files and dependencies, refactoring code into files/components, and styling, all on its own without explicit user instruction
- Our error handling is able to parse through Prettier logs to catch syntax/formatting errors, and even read Vite error logs to discover compiler errors, then fix its own bugs
- Custom .priyanshu files can add context & provide explicit instructions for the code generation to follow (ex. Do all styling with Tailwind. Do not write CSS unless explicitly told to do so)
- Users can choose to use our web interface for easy access, or the command line utility
## How we built it
- Code generation using ChatGPT
- CLI interface and tools to edit React source code developed using Python and argparse
- *Custom functions to parse error logs from Vite and intelligently determine which code to delete/insert from the codebase based on the ChatGPT response*
- Web client developed using a Python Flask server
- Web UI built with React, JavaScript, and Vite
## Challenges we ran into
Source code modification was initially very buggy, because deleting and inserting lines of code would change the line numbers of the file and make subsequent insertions/deletions very difficult. We were able to fix this by tweaking our code editing logic to edit from the bottom of the file first.
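The bottom-up trick is a neat one, so here is a minimal sketch of the idea under assumed names (the edit tuple format is hypothetical, not Spectral's actual representation): applying edits in descending line order means earlier edits never shift the indices of later ones.

```python
# Apply line edits bottom-up so each edit's line index is still valid when
# it runs. Each edit is (line_index, delete_count, replacement_lines).
def apply_edits(lines, edits):
    for idx, delete_count, replacement in sorted(edits, reverse=True):
        lines[idx:idx + delete_count] = replacement
    return lines
```

With top-down application, inserting two lines at index 1 would invalidate a later edit targeting index 3; sorted descending, both land where intended.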
Building an accompanying web client after we had developed our CLI was also difficult and required refactoring/rewriting logic. We were eventually able to do this using Flask to preserve as much of the original Python code as possible.
## Accomplishments that we're proud of
We are incredibly proud that we were able to build a full code generation system in the span of 24 hours. When we first started, our CLI was very error prone, and although cool, not practical to use. Through extensive error handling and iterative performance improvements, we were able to get it to the point of being able to develop a full frontend mockup without writing any React code ourselves. We are excited that Spectral is now a tool with legitimate utility potential and is something we would even use ourselves to aid project development.
## What we learned
- Learned a lot about working with ports in Flask
- How to resolve CORS issues
## What's next for Spectral
- Adding speech-to-text functionality for increased accessibility
- Allowing larger prompts with multiple features to be submitted at once, and implementing some sort of prompt parsing to split large prompts into steps that Spectral can follow more easily
|
## Inspiration
We are a team of engineering science students with backgrounds in mathematics, physics and computer science. A common passion for the implementation of mathematical methods in innovative computing contexts and the application of these technologies to physical phenomena motivated us to create this project [Parallel Fourier Computing].
## What it does
Our project is a Discrete Fourier Transform [DFT] algorithm implemented in JavaScript for sinusoid spectral decomposition with explicit support for parallel computing task distribution. This algorithm is called by a web page front-end that allows a user to program the frequency/periodicity of a sum of three sinusoids, see this function on a graphical figure, and to calculate and display the resultant DFT for this sinusoid. The program successfully identifies the constituent fundamental frequencies of a sum of three sinusoids by use of this DFT.
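The heart of the algorithm can be sketched in a few lines, shown here in Python (the language of the team's prototype before the JavaScript port), with the complex arithmetic written out explicitly as their self-contained DCL work function required:

```python
# Minimal DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N), with the real and
# imaginary parts accumulated separately instead of using a complex type.
import math

def dft(signal):
    n_samples = len(signal)
    out = []
    for k in range(n_samples):
        re = im = 0.0
        for n, x in enumerate(signal):
            angle = -2.0 * math.pi * k * n / n_samples
            re += x * math.cos(angle)
            im += x * math.sin(angle)
        out.append((re, im))
    return out
```

Each output bin `k` is an independent partial computation, which is what makes the job easy to split across parallel workers.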
## How We built it
This project was built in parallel, with some team members working on DCL integration, web page front ends and algorithm writing. The DFT algorithm used was initially prototyped in Python before being ported over to JavaScript for integration with the DCL network. We tested the function of our algorithm from a wide range of frequencies and sampling rates within the human spectrum of hearing. All team members contributed to component integration towards the end of the project, ensuring compliance with the DCL method of task distribution.
## Challenges We ran into
Though our team has an educational background in Fourier analysis, we were unfamiliar with the workflows and utilities of parallel computing systems. We were principally concerned with (1) how we can fundamentally divide the job of computing a Discrete Fourier Transform into a set of sequentially uncoupled tasks for parallel processing, and (2) how we implement such an algorithm design in the JavaScript foundation that DCL relies on. Initially, our team struggled to define clearly independent computing tasks that we could offload to parallel processing units to speed up our algorithm. We overcame this challenge when we realized that we could produce analytic functions for any partial sum term in our series and pass these exact functions off for processing in parallel. One challenge we faced when adapting our code to the task distribution method of the DCL system was writing a work function that was entirely self-contained, without a dependence on external libraries or extraneously long procedural logic. To avoid library dependency, we wrote our own procedural logic to handle the complex number arithmetic that's needed for a Discrete Fourier Transform.
## Accomplishments that We're proud of
Our team successfully wrote a Discrete Fourier Transform algorithm designed for parallel computing uses. We encoded custom complex number arithmetic operations into a self-contained JavaScript function. We have integrated our algorithm with the DCL task scheduler and built a web page front end with interactive controls to program sinusoid functions and to graph these functions and their Discrete Fourier Transforms. Our algorithm can successfully decompose a sum of sinusoids into its constituent frequency components.
## What We learned
Our team learned about some of the constraints that task distribution in a parallel computing network can place on the procedural logic used in task definitions. Not having access to external JavaScript libraries, for example, required custom encoding of the complex number arithmetic operations needed to compute DFT terms. Our team also learned more about how DFTs can be used to decompose musical chords into their fundamental pitches.
## What's next for Parallel Fourier Computing
Next steps for our project in the back-end are to optimize the algorithm to decrease the computation time. On the front-end we would like to increase the utility of the application by allowing the user to play a note and have the algorithm determine the pitches used in making the note.
#### Domain.com submission
Our domain name is <http://parallelfouriercomputing.tech/>
#### Team Information
Team 3: Jordan Curnew, Benjamin Beggs, Philip Basaric
|
losing
|
## Inspiration
The phrase, "hey have you listened to this podcast", can be seen as the main source of inspiration for this project.
Andrew Huberman is a neuroscientist and tenured professor at the Stanford School of Medicine. As part of his desire to bring zero-cost-to-consumer information about health and health-related subjects, he hosts the Huberman Lab podcast. The podcast is posted weekly, covering numerous topics in an easy-to-understand format. With this, listeners are able to implement the systems he recommends, with results that are backed by the academic literature.
## What it does
With over 90 episodes, wouldn't it be great to search through them to gain a deeper understanding of a topic? What if you are interested in what was said about a particular subject but don't have time to watch the whole episode? Better yet, what if you could ask the doctor himself a question about something he has talked about (which people now can do, at a premium price of $100 a year)?
With this in mind, I have created an immersive experience to allow individuals to ask him health-related questions through a model generated based on his podcasts.
## What problem does it solve
It is tough to find trustworthy advice on virtually any topic due to the rise of SEO and people fueled by greed. With this pseudo-chat bot, you can guarantee that the advice provided is peer reviewed or at the very least grounded in truth.
Many people append "Reddit" to their google searches to make sure the results they receive are trustworthy. With this model, the advice it provides is short, and to the point while removing the need to scour the internet for the right answer.
## How I built it
Frontend: NextJS (React) + ChakraUI + Three.js + ReadyPlayer (model and texture generation) + Adobe Mixamo (model animation)
Backend: Flask (Python)
Data sci: Pinecone (vector storage) + Cohere (word embedding, text generation, text summarization), nltk (English language tokenizer)
40 podcast episodes (~35,000 sentences) were processed via the backend and stored on Pinecone to be called via Cohere. Once the backend receives a query request (a question/prompt provided by the user):
1. A text prompt is generated based on Cohere's models (breaking down what the user meant)
2. The prompt is fed through their embedding, to embed the newly generated prompt (create something we can search the Pinecone db with)
3. Pinecone is queried, returning the top 10 blocks (sentences). Any block with a similarity score over 0.45 is fed into the text summarization algorithm
4. Text is compiled, and summarized. The final summarized result is returned to the user.
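The filtering step in the pipeline above can be sketched as plain logic, leaving out the actual Pinecone and Cohere calls (the threshold and top-k values come from the description; the function name is ours):

```python
SCORE_THRESHOLD = 0.45  # the cutoff described in step 3
TOP_K = 10              # number of blocks returned by the vector query

def select_context(matches):
    """Given (score, sentence) pairs from a vector-store query, keep the
    top-10 matches whose score exceeds the threshold and compile them
    into one string for the summarization step."""
    ranked = sorted(matches, key=lambda m: m[0], reverse=True)[:TOP_K]
    kept = [text for score, text in ranked if score > SCORE_THRESHOLD]
    return " ".join(kept)
```

The compiled string would then be handed to the summarization model, whose output is returned to the user.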
## Challenges we ran into
tl;dr
* running the proper glb/gltf models while maintaining textures when porting over from fbx (three.js fun)
* rendering, and animating models properly (more three.js fun)
* creating a chat interface that I actually like lol
* communicating with Cohere's API via the provided client and running large batch jobs with Pinecone (timeouts, performance issues on my laptop)
* tweaking the parameters on the backend to receive a decent result
## Accomplishments that we're proud of
Firsts:
* creating a word-embedded model, or any AI model for that matter
* using Pinecone
* understanding how three.js works
* solo hack!
* making friends :.)
## What we learned
See above <3
## What's next for Ask Huberman
1. Allow users to generate text of X length or to click a button to learn more
2. Linking the text generated to the actual podcast episodes to watch the whole thing
3. Create a share feature to share useful/funny answers
4. Tweak the model output parameters
5. Embed amazon affiliate links based on what the answer is
## What would you do differently
* cut losses sooner with three.js
* go talk to the Cohere guys a little sooner (they are awesome)
* manage git better?
Huge thank you to the HW team - had a lot of fun.
|
## Inspiration
After learning about NLP and Cohere, we were inspired to explore their capabilities and decided to apply them to the medical field. We realized that people prefer the internet to tediously calling somebody and waiting through long hold times, so we designed an alternative to the 811 hotline. We believe this will not only help those with speech impediments but also aid the health industry.
## What it does
We designed a web application on which the user inputs how they are feeling (as a string). This is sent to our web server, which runs the Cohere Python application; from it we request specific data (the most probable illness predicted by the NLP model and its percentage certainty), which is returned to the web application as output.
## How we built it
We built the website itself using HTML, CSS, and JavaScript. We then imported 100 training examples regarding symptoms for the natural language processing model to learn from, exported the model as Python code, and deployed it as a Flask microframework on DigitalOcean's cloud service platform so that we could connect it to our website. This successfully connected our frontend and backend.
## Challenges we ran into
We ran into many challenges as we were all very inexperienced with Flask, Cohere's NLP models, professional web development and Wix (which we tried very hard to work with for the first half of the hackathon). This was because 3 of us were first and second years and half of our team hadn't been to a hackathon before. It was a very stressful 24 hours in which we worked very hard. We were also limited by Cohere's free limit of 100 training examples thus forcing our NLP model to not be as accurate as we wanted it to be.
## Accomplishments that we're proud of
We're very proud of the immense progress we made after giving up on hosting our website on Wix. Despite losing more than a third of our time, we still managed to not only create a nice web app but also successfully use Cohere's NLP model, and most notably, connect our frontend and backend using a Flask microframework and a cloud-based server. These were all things outside of our comfort zone and provided us with many learning opportunities.
## What we learned
We learned a tremendous amount during this hackathon. We became more skilled with flexbox to create a more professional website, and we learned how to use Flask to connect our Python application's data with our website domain.
## What's next for TXT811
We believe that the next step is to work on our web development skills to create an even more professional website and to train our NLP model to be more accurate in its diagnosis, as well as to expand what it can diagnose so that it can reach a wider audience of patients. Although we don't believe it can fully replace professional diagnosis, as that would be a dangerous concept to imply, it's definitely efficient software for pointing out warning signs and pushing the general public to reach out before their symptoms get worse.
|
## Inspiration
Our solution was named in remembrance of Mother Teresa.
## What it does
Robotic technology to assist nurses and doctors with medicine delivery and patient handling across the hospital, including ICUs. We are planning to build a low-code/no-code app that will help COVID patients scan themselves, as the mobile app is integrated with the CT scanner, saving doctors time and preventing human error. We trained a CNN model on CT scans of COVID patients and integrated it into our application. The datasets were collected from Kaggle and tested with an efficient algorithm reaching around 80% accuracy, and doctors can maintain each patient's record. The beneficiaries of the app are the patients.
## How we built it
Bots are potentially the most promising and advanced form of human-machine interaction. The designed bot can be controlled manually through an app, using Go and cloud technology with predefined databases of actions; further moves are manually controlled through the mobile application. Simultaneously, to reduce the workload of doctors, a customized feature is included to process X-ray images through the app using convolutional neural networks (CNNs) as part of the image processing system. CNNs are deep learning algorithms that are very powerful for image analysis, giving a quick and accurate classification of disease based on the information gained from digital X-ray images. To get better detection efficiency, we used an open-source Kaggle dataset.
## Challenges we ran into
The data source for the initial stage can be collected from Kaggle, but for the real-time implementation, the working model and the Flutter mobile application need datasets collected from nearby hospitals, which was the challenge.
## Accomplishments that we're proud of
* Counselling and Entertainment
* Diagnosing therapy using pose detection
* Regular checkup of vital parameters
* SOS to doctors with live telecast
* Supply of medicines and food
## What we learned
* CNN
* Machine Learning
* Mobile Application
* Cloud Technology
* Computer Vision
* Pi Cam Interaction
* Flutter for Mobile application
## Implementation
* The bot is designed to have the supply carrier at the top and the motor driver connected with 4 wheels at the bottom.
* The battery will be placed in the middle and a display is placed in the front, which will be used for selecting the options and displaying the therapy exercise.
* The image aside is a miniature prototype with some features
* The bot will be integrated with path planning and this is done with the help mission planner, where we will be configuring the controller and selecting the location as a node
* If an obstacle is present in the path, it will be detected with lidar placed at the top.
* In some scenarios, if need to buy some medicines, the bot is attached with the audio receiver and speaker, so that once the bot reached a certain spot with the mission planning, it says the medicines and it can be placed at the carrier.
* The bot will have a carrier at the top, where the items will be placed.
* This carrier will also have a sub-section.
* So If the bot is carrying food for the patients in the ward, Once it reaches a certain patient, the LED in the section containing the food for particular will be blinked.
|
losing
|
## Inspiration
The current overdose crisis in Philadelphia has led to a massive number of overdose victims, and a combination of other crime issues and the city's infrastructure has prevented emergency services from responding and assisting in a timely manner. However, there is a way for civilians to help with drug overdoses: Narcan. When administered, Narcan can increase an overdose victim's chance of survival significantly, and with pharmacies recently making it over-the-counter, more people have access to Narcan than ever. However, there still exists a major issue: ordinary civilians have a difficult time knowing when and where an overdose is occurring. Our application provides them with this information. With it, we aim to allow civilians and bystanders to contribute to a timely and effective response, saving many lives in the process.
## What it does
NarCompass allows victims to call for help with a single push of a button if they need it, notifying local hospitals, as well as notifying nearby Narcan carriers. If the carriers choose to help the victim, NarCompass creates a geographical route between the two, showing the most efficient route to the victim, based on the carrier's preferred method of transport. (driving, walking, and bicycling are all supported). Furthermore, the caller is continuously notified throughout the process, ensuring that they know where the help is, and how fast it's coming.
NarCompass also boasts incredible flexibility for the carrier, allowing them to change the range they prefer to respond within. Furthermore, potential callers can choose what data and information appears to the carrier, which is then stored in a cloud database. This optimises the user experience and ensures that callers receive the fastest and most capable response in the moment.
## How we built it
Using Flutter and Dart, we created an app that can run across all platforms. We first coded the most integral portion of the application: the map and route creation. Using the Google Maps API, we generated a map, placed points on it, and used the Routes feature of the API to connect those points and find the shortest possible path. Then we needed the ability to store user data, in order to know when a user was overdosing and had called for help, and who was in the area to help. To do this, we used Firebase, a feature of Google Cloud, which allowed us to easily store and pull data from a wide, easily accessible, and extremely stable database. Then we used Twilio to create an easy system of alerting the carrier and the victim as they approached each other, keeping the victim updated on where their help is. Finally, it was simply a matter of organisation, basic code optimisation, and using Flutter packages (such as settings\_ui) to create an interactive UI, allowing our users to make the most of their experience and maximising potential lives saved.
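The app itself uses the Google Maps Routes API for actual routing, but the carrier-matching idea, notifying only carriers whose preferred response range covers the victim, can be sketched with a plain great-circle distance check (the data layout and function names here are our assumptions, not the app's Dart code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def carriers_in_range(victim, carriers):
    """Return the carriers whose preferred response range covers the victim."""
    return [c for c in carriers
            if haversine_km(victim[0], victim[1], c["lat"], c["lon"]) <= c["range_km"]]
```

Carriers passing this filter would then be notified, and the Routes API would produce the actual walking, cycling, or driving path.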
## Challenges we ran into
As we started the competition, we realised that while we were competent, we were severely lacking in experience. Of the four people on our team, three had never used Flutter or Dart before, and not a single one of us had used Twilio or Firebase. This lack of experience meant we had to spend a lot more time learning the basics of these technologies, limiting the time we had to create our final product. Furthermore, this was the first time the four of us had worked as a team. Before this, all four members of the team had specialised in individual coding, and this marks one of the first big projects in which all four had to work together to produce a single product. Despite these challenges, however, we still managed to produce an end product we're proud of.
## Accomplishments that we're proud of
Everything in the app - the data storage, Google Maps implementation, Firebase implementation, Twilio implementation, and much, much more - are great accomplishments for us. We have never completed a project of this scale or depth as a team, and with this accomplishment comes a deep sense of pride and achievement. That being said, however, the Google Routes implementation serves as a particular spot of pride for us. The methods we used to create the path, as well as discover the shortest path, was a fairly unique approach and functioned extremely well, being incredibly accurate and a core functionality of different parts of our application.
## What we learned
With this Hackathon, we learned not only how to create an app with all of those components listed above, but also how to efficiently work as a team. We learned how to efficiently divide coding work, compartmentalise app creation, and communicate, as well as how to put those all together to create a brilliant final product.
## What's next for NarCompass
First, basic improvements. While the app certainly functions well and efficiently as-is, there is always room to do better. With more research and time we are certain we can create a sleeker, faster, more efficient product. Second, we want to market this product. We believe that this could certainly help people not only in Philadelphia, but worldwide as well, and we want to ensure that this app, in some shape or form, is available for those who need it.
|
## Inspiration
In light of the ongoing global conflicts in war-torn countries, many civilians face hardships. Recognizing these challenges, we were inspired to build LifeLine Aid to direct vulnerable groups to essential medical care, health services, shelter, food and water assistance, and other deprivation relief.
## What it does
LifeLine Aid provides multifunctional tools that enable users in developing countries to locate resources and identify dangers nearby. Utilizing the user's location, the app alerts them about the proximity of a situation and centers for help. It also facilitates communication, allowing users to share live videos and chat updates regarding ongoing issues. An upcoming feature will highlight available resources, like nearby medical centers, and notify users if these centers are running low on supplies.
## How we built it
Originally, the web backend was to be built with Django, a trusted framework in the industry. As we progressed, we realized that the effort required to work with Django was not sustainable, as we made no progress within the first day. Drawing on one team member's extensive research into asyncio, we switched to FastAPI, a framework trusted by companies such as Microsoft. Using it had both benefits and costs, but after Django proved to be a roadblock, the switch was ultimately worth it.
Our backend proudly uses CockroachDB, an unstoppable force to be reckoned with. CockroachDB allowed our code to scale and continue to serve those who suffer from the effects of war.
## Challenges we ran into
In order to pinpoint hazards and help, we needed to obtain, store, and reverse-engineer geospatial coordinate points, which we would then present to users in a map-centric manner. We initially struggled with converting the geospatial data from a degrees-minutes-seconds format to decimal degrees and storing the converted values as points on the map, which were then stored as unique 50-character SRID values. Luckily, one of our teammates had some experience with processing geospatial data, so drafting coordinates on a map wasn't our biggest hurdle to overcome. Another challenge we faced was certain edge cases in our initial Django backend that resulted in invalid data. Since some of those outputs would be relevant to our project, we had to make an executive decision to change backends midway through, and we went with FastAPI. Although FastAPI brought its own challenge of turning SQL results into usable data, it was our way of overcoming our Django situation. One last challenge we ran into was our overall source control. A mixture of slow, unbearable WiFi and tedious local git repositories not syncing correctly created some frustrating deadlocks and holdbacks. To combat this downtime, we resorted to physically drafting and planning out how each component of our code would work.
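The degrees-minutes-seconds to decimal-degrees conversion mentioned above is a small, mechanical step; a minimal sketch (the function name and hemisphere convention are our assumptions):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert a degrees/minutes/seconds coordinate to decimal degrees.
    Southern and Western hemisphere values come out negative."""
    dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -dd if hemisphere in ("S", "W") else dd
```

Decimal-degree values like these are what map layers and SRID-keyed point columns typically expect.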
## Accomplishments that we're proud of
Three out of the four in our team are attending their first hackathon. The experience of crafting an app and seeing the fruits of our labor is truly rewarding. The opportunity to acquire and apply new tools in our project has been exhilarating. Through this hackathon, our team members were all able to learn different aspects of creating an idea into a scalable application. From designing and learning UI/UX, implementing the React-Native framework, emulating iOS and Android devices to test and program compatibility, and creating communication between the frontend and backend/database.
## What we learned
This challenge aimed to dive into technologies that are used widely in our daily lives. Spearheading the competition with a framework trusted by huge companies such as Meta and Discord, we chose to explore the capabilities of React Native. Our team includes three students attending their first hackathon, and the opportunity to explore these technologies has left us with a skillset for a lifetime.
With the concept of the application, we researched and discovered that the only best way to represent our data is through the usage of Geospatial Data. CockroachDB’s extensive tooling and support allowed us to investigate the usage of geospatial data extensively; as our backend team traversed the complexity and the sheer awe of the scale of said technology. We are extremely grateful to have this opportunity to network and to use these tools that would be useful in the future.
## What's next for LifeLine Aid
There are a plethora of avenues for further developing the app, including enhanced verification, rate limiting, and improved hosting using Azure Kubernetes Service (AKS). We plan to maintain this hackathon project into the future as a project that others, whether new or experienced in the field, can collaborate on.
|
# Get the Flight Out helps you GTFO ASAP
## Inspiration
Constantly stuck in meetings, classes, exams, and work, with nowhere to go, we started to think. What if we could just press a button, and in a few hours, go somewhere awesome? It doesn't matter where, as long as it's not here. We'd need a plane ticket, a ride to the airport, and someplace to stay. So can I book a ticket?
Every online booking site asks for where to go, but we just want to go. What if we could just set a modest budget, and take advantage of last minute flight and hotel discounts, and have all the details taken care of for us?
## What it does
With a push of a button or the flick of an Apple Watch, we'll find you a hotel at a great location, tickets out of your preferred airport, and an Uber to the airport, and email you the details for reference.
|
partial
|
## Inspiration
I was inspired by the website that we all know and love, CoolMathGames.com. While it seems like there are multiple apps designed to elevate our brain and cognitive skills, I felt like going back to the roots- the origin- of it all.
## What it does
Through engaging mini-games, the player is able to accumulate points and master mathematical subjects with these math-centered challenges.
## How we built it
I was able to build a design of what the app would look like through Figma.
## Challenges we ran into
It was difficult to work by myself with so much to do.
## Accomplishments that we're proud of
I am proud of building the cute otter and learning how to use Figma because I typically use canva.
## What we learned
I learned that I should find a reliable team beforehand.
## What's next for Math Madness
Once this app goes live, I hope to increase its range by adding more levels, adding more academic subjects so that it is not only math, and including an explanation tab.
|
## Inspiration
Our team was inspired by the potential applications of AR technology.
## What it does
Monsters are generated with mathematics problems attached to them. When users know the answer to a math problem, they can tap on the associated monster to shoot a ball at them and eliminate them.
## How we built it
We built this project in Unity 3D with the help of EchoAR for asset management.
## Challenges we ran into
At the start of the project, none of us had ever used EchoAR, Unity, or C#. Over the course of the weekend, we worked to learn and effectively apply these three resources for the first time.
## Accomplishments that we're proud of
We were successfully able to create a working AR application with AR objects that move towards the user and can be interacted with.
## What we learned
We learned a new programming language, C#, as well as what AR app development entails.
## What's next for Monster Math
The future of Monster Math will involve improved UI, additional functionality, and a leader board element that employs databases.
|
# Inspiration
When we ride a cycle, we steer towards the direction we fall, and that's how we balance. The amazing part is that our brain computes a complex set of calculations to balance us, and we do it naturally. With some practice it gets better. The calculation that goes on in our mind highly inspired us. At first we thought of creating the cycle, but later, after watching "Handle" from Boston Dynamics, we thought of creating the self-balancing robot, "Istable". One more inspiration was mathematically modelling systems in our control systems labs, along with learning how to tune a PID-controlled system.
# What it does
Istable is a self-balancing robot running on two wheels; it balances itself and tries to stay vertical at all times. Istable can move in all directions, rotate, and be a good companion for you. It gives you a really good idea of robot dynamics and kinematics, and of how to mathematically model a system. It can also be used to teach college students the tuning of PIDs in a closed-loop system, a widely used and very robust control scheme. Methods from Ziegler-Nichols and Cohen-Coon to Kappa-Tau can also be implemented to teach how to obtain the coefficients in real time: from theoretical modelling to real-time implementation. Beyond that, it can be used in hospitals where medicines are kept at very low temperatures and absolute sterilization is required to enter; these robots can carry out operations there, and in the coming days we may see our future hospitals in a new form.
# How we built it
The mechanical body was built using scrap wood lying around. We first planned how the body would look and gave it the shape of an "I", which is where it gets its name. We made the frame and placed the batteries (2x 3.7 V li-ion), the heaviest component, along with two micro metal motors (600 RPM at 12 V). We cut two hollow circles from an old sandal to make tires (rubber gives a good grip, and trust me, it is really necessary), used a boost converter to stabilize and step up the voltage from 7.4 V to 12 V, a motor driver to drive the motors, an MPU6050 to measure the inclination angle, and a Bluetooth module to read the PID parameters from a mobile app for fine-tuning the PID loop. The brain is a microcontroller (LGT8F328P). Next we made a free-body diagram and located the center of gravity; it needs to be above the centre of mass, so we adjusted the weight distribution accordingly. Then we made a simple mathematical model to represent the system and find its transfer function, which we used to calculate the impulse and step response of the robot. That is crucial for tuning the PID parameters if you are taking a mathematical approach, and we did that here: no hit and trial, only application of engineering. The microcontroller runs a discrete (z-domain) PID controller to balance Istable.
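The discrete PID loop described above can be sketched in a few lines; this is the standard textbook form, not Istable's firmware, and the gains and time step below are illustrative:

```python
class DiscretePID:
    """Discrete PID: u[n] = Kp*e[n] + Ki*sum(e)*dt + Kd*(e[n] - e[n-1])/dt,
    run once per sample period dt (on the robot, per gyro/accel reading)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt          # accumulated (integral) term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

For a balancing robot, the setpoint is the upright angle (0 degrees) and the controller output drives the motors to lean the wheels back under the body.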
# Challenges we ran into
We had trouble balancing Istable at first (which is obvious ;) ), and we realized that was due to the placement of the gyro. We had placed it at the top; we corrected that by placing it at the bottom, which naturally eliminated the tangential component of the angle and greatly improved stabilization. Next came fine-tuning the PID loop. We did get the initial PID coefficients mathematically, but the fine-tuning took a hell of a lot of effort, and that was really challenging.
# Accomplishments we are proud of
Firstly, a simple change in the position of the gyro improved the stabilization, which gave us high hopes when we had been losing confidence. The mathematical model, the transfer function we found, agreed with the real-time outputs, and we were happy about that. Last but not least, the yay moment was when we tuned the robot correctly and it balanced for more than 30 seconds.
# What we learned
We learned a lot, a lot. From kinematics and mathematical modelling to control algorithms beyond PID, including adaptive control and numerical integration. We learned how to tune PID coefficients properly and mathematically, with no trial-and-error. We learned about digital filtering to filter out noisy data, along with complementary filters to fuse accelerometer and gyroscope data into accurate angles.
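The complementary filter mentioned above is one line of arithmetic: trust the integrated gyro rate in the short term and the accelerometer angle in the long term. A minimal sketch (the blend factor alpha is a typical illustrative value, not Istable's tuned one):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse sensors into one angle estimate: integrate the gyro rate over dt
    (low drift over short spans) and pull gently toward the accelerometer
    angle (noisy, but drift-free over long spans)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Called once per control cycle, its output feeds the PID controller as the measured inclination.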
# What's next for Istable
We will try to add an onboard display to change the coefficients directly on the robot, upgrade the algorithm so that it can auto-tune the coefficients itself, and add odometry for localized navigation. We also want to connect it to IoT to serve real-time operation with real-time updates.
|
losing
|
## Inspiration
Swap was inspired by COVID-19 having an impact on many individuals’ daily routines. Sleep schedules were shifted, more distractions were present due to working from home, and being away from friends and family members was difficult. Our team wanted to create a solution that would help others add excitement to their quarantine routines and also connect them with their friends and family members again.
## What it does
Swap is a mobile application that allows users to swap routines with their friends, family members, or even strangers to try something new! You can input daily activities and photos, add an optional mood tracker, add friends, initiate swaps instantly, pre-schedule swaps, and even randomize swaps.
## How we built it
For this project, we created a working prototype and wrote the backend code on how the swaps would be made. The prototype was created using Figma. For writing the backend code, we used python and applications such as Xcode, MySQL, and PyCharm.
## Challenges we ran into
A challenge we ran into was determining how we would write the backend code for the app and what applications to use. Additionally, we had to use up some time to select all the features we wanted Swap to have.
## Accomplishments that we're proud of
Accomplishments we’re proud of include the overall idea of making an app that swapped routines, our Figma prototype, and the backend coding.
## What we learned
We learned how to use Figma’s wireframing feature to create a working prototype and learned about applications (ex. MySQL) that allowed us to write backend code for our project.
## What's next for Swap
We want to finalize the development of the app and launch it in the app stores!
|
## Inspiration
Memes have become a cultural phenomenon and a huge recreation for many young adults, including ourselves. For this hackathon, we decided to combine the sociability of the popular site Twitter with a method of visualizing meme activity in various neighborhoods. We hope that through this application, we can create a multicultural collection of memes and expose memes trending in popular cities to a widespread community of memers.
## What it does
NWMeme is a data visualization of memes that are popular in different parts of the world. Entering the application, you are presented with a rich visual of a map with Pepe the frog markers that mark different cities on the map that has dank memes. Pepe markers are sized by their popularity score which is composed of retweets, likes, and replies. Clicking on Pepe markers will bring up an accordion that will display the top 5 memes in that city, pictures of each meme, and information about that meme. We also have a chatbot that is able to reply to simple queries about memes like "memes in Vancouver."
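A minimal sketch of how such a popularity score and marker sizing might look (the weights and pixel bounds are illustrative, not the exact values we used):

```python
import math

def popularity_score(retweets: int, likes: int, replies: int) -> float:
    """Composite engagement score used to size a city's Pepe marker."""
    # Hypothetical weights: a retweet spreads a meme further than a like does.
    return 2.0 * retweets + 1.0 * likes + 1.5 * replies

def marker_radius(score: float, min_px: float = 10, max_px: float = 60) -> float:
    """Map a score to a marker radius, log-scaled so huge cities don't dominate."""
    return min(max_px, min_px + 5 * math.log1p(score))

vancouver = popularity_score(retweets=120, likes=900, replies=45)
print(round(marker_radius(vancouver), 1))
```

The log scaling keeps a city with millions of engagements from visually drowning out smaller meme hubs on the map.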
## How we built it
We wanted to base our tech stack on the tools that the sponsors provided. This started from the bottom with CockroachDB as the database storing all the meme data that our Twitter web crawler scrapes. The web crawler was written in Python, which Google gave an advanced-level talk about. Our backend server was in Node.js, for which CockroachDB provided a wrapper, hosted on Azure. Calling the backend APIs was a vanilla JavaScript application which uses Mapbox for the maps API. Alongside the data visualization on the map, we also have a chatbot application using Microsoft's Bot Framework.
## Challenges we ran into
We had many ideas we wanted to implement, but for the most part we had no idea where to begin. A lot of the challenge came from figuring out how to implement these ideas; for example, finding how to link a chatbot to our map. At the same time, we had to think of ways to scrape the dankest memes from the internet. We ended up choosing twitter as our resource and tried to come up with the hypest hashtags for the project.
A big problem we ran into was that our database completely crashed an hour before the project was due. We had to redeploy our Azure VM and database from scratch.
## Accomplishments that we're proud of
We were proud that we were able to use as many of the sponsor tools as possible instead of the tools that we were comfortable with. We really enjoy the learning experience and that is the biggest accomplishment. Bringing all the pieces together and having a cohesive working application was another accomplishment. It required lots of technical skills, communication, and teamwork and we are proud of what came up.
## What we learned
We learned a lot about different tools and APIs that are available from the sponsors as well as gotten first hand mentoring with working with them. It's been a great technical learning experience. Asides from technical learning, we also learned a lot of communication skills and time boxing. The largest part of our success relied on that we all were working on parallel tasks that did not block one another, and ended up coming together for integration.
## What's next for NWMemes2017Web
We really want to work on improving interactivity for our users. For example, we could have chat for users to discuss meme trends. We also want more data visualization to show trends over time and other statistics. It would also be great to grab memes from different websites to make sure we cover as much of the online meme ecosystem.
|
## Inspiration
One of the major inspirations for this project was learning about the case of a team member who was earlier in a car accident. According to him, he didn't get proper guidance from the doctors, as different doctors gave him different solutions. There are many examples of people in our society who need the right minds working on their case. That's how we figured out what to build: a platform where a panel of doctors can work together on a patient's case to provide a better solution.
## What it does
RareCare is a web application designed to facilitate the management and research of rare diseases. It provides separate portals for patients, doctors, and researchers, each with specific functionalities:
* Patients can view their health data, search for doctors, and book appointments.
* Doctors can manage appointments, review patient cases, and conduct video consultations.
* Researchers can access publications, collaborate with other researchers, analyze data, and provide insights on case studies.
## How we built it
Next.js as the React framework
TypeScript for type-safe JavaScript
Tailwind CSS for styling
Custom components (like GlassCard) for consistent UI elements
React hooks for state management
Tabs component for organizing different sections within each portal
Some AI tools
## Challenges we ran into
Creating a unified yet role-specific interface for different user types
Implementing secure authentication and role-based access control
Designing an intuitive UI for complex medical and research data
## Accomplishments that we're proud of
Developing a comprehensive platform that serves patients, doctors, and researchers
Creating a visually appealing and modern UI with a consistent design language
Implementing features like video consultations and data analysis tools
## What we learned
How to structure a complex application with multiple user roles
Techniques for creating reusable components in React
Best practices for handling medical data and research information
## What's next for RareCare
Implementing real authentication and database integration
Adding more advanced features like AI-assisted diagnosis
Expanding the data analysis capabilities for researchers
Enhancing the collaboration tools for the research community
Implementing a mobile app version for better accessibility
|
partial
|
## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at greater risk. As the mental health epidemic surges and support systems reach capacity, we sought to build something that connects trained volunteer companions with people in distress in several convenient ways.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frames into React, using Acovode for back-end development.
## Challenges I ran into
Setting up Firebase and connecting it to the front-end React app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for mental health accessibility is essential yet still unmet despite all the recent efforts. We also learned to use Figma and Firebase, and tried out many open-source platforms for building apps.
## What's next for HearMeOut
We hope to increase chatbot’s support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
|
## Inspiration
While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that it only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
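The moderation decision can be sketched as follows; the tone labels and threshold are illustrative stand-ins for the confidence scores a tone-analysis service returns, not IBM Watson's exact schema:

```python
def should_publish(tones: dict[str, float], threshold: float = 0.75) -> bool:
    """
    Decide whether a post goes live, given per-tone confidence scores.
    Venting tones (sadness, fear) pass through; only confidently
    hostile tones are blocked.
    """
    hostile = ("anger", "disgust")
    return not any(tones.get(t, 0.0) >= threshold for t in hostile)

print(should_publish({"sadness": 0.9, "anger": 0.2}))  # venting → True
print(should_publish({"anger": 0.91}))                 # hostile → False
```

Keeping the threshold high is what lets frustrated-but-harmless venting posts through while still catching clearly mean-spirited content.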
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like I've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not very familiar with code (I only know Python), so watching the dev work and finding out what kinds of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call.
|
# MyHealth
## List of Definitions
CNT - carbon nanotube, a 3D structure made of carbon
Graphite - a 2D carbon structure
ECG (electrocardiogram) - a medical test that measures the heart's electrical activity to detect problems
Mobility sensor - a sensor that measures movement between two different locations
Sensor - a device that measures the characteristics of a particular tangible thing
Arduino - the team used an Arduino Uno R3 for prototyping
BLE - Bluetooth Low Energy
DMM - digital multimeter; measures quantities such as voltage, current, and resistance. Measured values are shown on a digital display, allowing them to be read easily and directly, even by first-time users.
## Introduction
With the recent surge in demand for fitness-tracking devices such as Whoop and Fitbit, approximately one in five Americans has a fitness tracker. The wearable market is projected to grow to $34 billion by 2022, but users are still not satisfied with current products. Our team's task was to identify issues and limitations of current designs and improve upon them. We are looking for innovative ways to design wearable devices that mitigate the issues of the current models.
## Problem Statement
There were over 722 million personal wearable monitoring devices sold worldwide in 2019. Although current wearable models provide approximations of cardiovascular data, their readings can differ by more than 10% from the actual values [1]. In addition, the average price of wearable devices is $218.00 CAD, which makes them inaccessible to users in low-income countries [2]. Also, in 2021, over 61 million Fitbit and Apple Watch users had their data disclosed; thus the current models pose serious security concerns due to data breaches and leaks [3].
The design team aims to create an accessible fitness-tracking product that can be retrofitted to existing equipment/apparel and ensures user privacy. The design team will focus on implementing only pulse sensing, oxygen saturation, and movement detection. The team will not develop additional features such as sleep tracking, music, or GPS data tracking. Furthermore, the device will not have an internet connection and will communicate through BLE, eliminating the risk of potential data leaks.
## Functions
The functions of a design outline its purpose and desired usages. The primary function is the major intended purpose of the design, whilst secondary functions are the other methods the design could be used for, and they stem from the primary function.
| Functional Basis | Primary Function |
| --- | --- |
| Sense information | Sense User’s Pulse, Detect User’s Motion |
| Access Information | Display Data/Information |
| Connect to Mass | Connect to User/User’s Clothing/User’s Sport Equipment |
Sense information (secondary functions)
* Calculate BPM
* Calculate daily average of BPM
* Sense irregular heart rate (as a safety function)
* Approximate daily ‘active minutes’ based on heart rate and blood pressure
Access Information (secondary functions)
* Display the user's health-related data (heart rate, blood pressure, number of steps, etc.)
* Display different settings/modes (language options, device information, etc.)
Connect to mass (secondary functions)
* Attaches to the user to track biological health data (ex: heart rate, sweat, etc.)
* Attaches to the user/user's clothing to detect physical movement
## Objectives
The following is a list of desired qualities of a solution.
* Inexpensive [4]
Should be significantly cheaper than current products due to the limited functionality of the device (refer to the problem statement).
* Durable
Should resist deterioration under: temperatures ranging from -53℃ to 50℃
Should resist a fall impact from a height of 1.22m [5]
* Does not obstruct user’s movements
Should not impede any movements of users.
## Sensor Fabrication
For the sensor fabrication process, we adapted recent strain-sensor research from the Institute of Microelectronics in Beijing [8].
First, we prepare a silicone mold about 2.7 inches wide, 1.18 inches long, and 2 mm high. Then, before it cures, we put a graphite solution on top to make the material more conductive. The silicone by itself has a resistivity of about 0.1 Ω·cm and a conductivity of about 1.56×10⁻³ S/m. Graphite, on the other hand, has a resistivity of 2.5×10⁻⁶ to 5.0×10⁻⁶ Ω·m and a conductivity of about 3.3×10² S/m. Graphite is a 2D material, so it also does not impede the flexibility of the silicone. Our team was planning to add a carbon nanotube (CNT) solution and reached out to Raunaq Bagchi, Professor Howe's PhD student. Unfortunately, we were not able to obtain CNTs due to time constraints, and decided to use silver paste instead. Both CNTs and silver paste are 3D structures and do not help with flexibility; however, applying them increases the conductivity of our sensor. Finally, we use copper tape to serve as connectors and help measure the resistance.
[](https://postimg.cc/tZx9V2gB)
As part of the iteration process, our team experimented with several mold configurations. We tried to cure the silicone-graphite solution under heat of about 600 ⁰C - 800 ⁰C in order to reduce graphite stains on skin. However, by the end of curing, a silicon carbide layer had formed, which conducts worse than graphite sitting on top of silicone. We therefore decided to cure the mold under normal room conditions.
[](https://postimg.cc/F1wJqBcH)
Potential improvements to the sensor could involve using a Digital Micromirror Device, a 3D-printing approach that is standard for microelectronics patterning. In addition, an Ecoflex substrate could be used to increase the flexibility of the device.
[](https://postimg.cc/06dgSjRJ)
## Hardware
The mobility sensor changes resistance as deformation alters its cross-sectional area A and its length. The heart sensor relies on electrical transduction: as ions flow in and out of heart cells, the resulting electrical polarization raises the signal measured by the heart sensor.
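This resistance change can be sketched numerically with R = ρL/A; in the following minimal Python illustration, the dimensions, resistivity, and Poisson ratio are illustrative values, not measurements from our sensor:

```python
def resistance(rho: float, length: float, area: float) -> float:
    """R = ρ·L/A for a uniform conductor."""
    return rho * length / area

def stretched_resistance(rho, L0, A0, strain, poisson=0.5):
    """
    Resistance after axial strain ε: length grows to L0·(1+ε) while each
    transverse dimension shrinks by (1 - ν·ε), so A ≈ A0·(1 - ν·ε)².
    (ν ≈ 0.5 for nearly incompressible elastomers such as silicone.)
    """
    L = L0 * (1 + strain)
    A = A0 * (1 - poisson * strain) ** 2
    return resistance(rho, L, A)

R0 = resistance(2.5e-6, 0.03, 4e-6)  # graphite-like film at rest
R1 = stretched_resistance(2.5e-6, 0.03, 4e-6, strain=0.10)
print(R1 > R0)  # stretching raises resistance → True
```

Longer and thinner means higher resistance, which is exactly the signal the Arduino reads to detect body motion.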
Here is the circuit that our team adapted from existing research on devices with different electrodes [7].
[](https://postimg.cc/7CtGwsCh)
High-pass and low-pass stages help us mitigate the effects of the electrodes' variable impedance and maintain signal integrity. A 60 Hz notch filter cuts off the 60 Hz mains signal that powers every building according to industry standards [6]. The gain stage in the circuit amplifies the output. To simplify the circuit, our team did not implement the diodes before the differential amplifier. The potential issue with the diode-less design is a current surge that might arise between electrode terminals, resulting in bad sensor measurements.
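The 60 Hz notch stage can also be sketched in software as a standard second-order (biquad) IIR notch; this minimal Python version illustrates the filtering idea rather than the analog circuit itself, and the sample rate and Q are assumed values:

```python
import math

def notch_filter(x, f0=60.0, fs=1000.0, q=10.0):
    """Apply a biquad notch at f0 Hz to a list of samples (direct form I)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1.0, -2 * math.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    y, xm, ym = [], [0.0, 0.0], [0.0, 0.0]  # output and delay lines
    for xn in x:
        yn = (b0 * xn + b1 * xm[0] + b2 * xm[1]
              - a1 * ym[0] - a2 * ym[1]) / a0
        xm, ym = [xn, xm[0]], [yn, ym[0]]
        y.append(yn)
    return y

# A pure 60 Hz hum is almost entirely removed once the transient settles.
fs = 1000.0
hum = [math.sin(2 * math.pi * 60 * n / fs) for n in range(2000)]
print(max(abs(v) for v in notch_filter(hum)[1000:]) < 0.05)  # → True
```

On the Arduino, the same difference equation runs per sample, which is why the single-op-amp variant we prototyped could push the filtering into software.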
[](https://postimg.cc/KKH37qDL)
Our team also iterated on different circuit designs. We prototyped a circuit consisting of a single op amp, with all the filters implemented digitally on the Arduino. Our team used Multisim to test the circuit.
[](https://postimg.cc/gw9fwdf6)
[](https://postimg.cc/Z0qD5CCM)
Our team was not able to obtain the heart sensor in time, so its input was simulated by the following Arduino sketch, which generates a sinusoidal wave:
```
const int N = 500;       // number of samples in one period
const float Fs = 500.0;  // sample rate used to build the lookup table
const float F = 1.0;     // simulated heartbeat frequency (Hz)
byte samples[N];
unsigned long sampling_interval;

void setup() {
  pinMode(10, OUTPUT);
  // Precompute one period of a sine wave, offset into the 0-255 PWM range.
  for (int n = 0; n < N; n++) {
    float t = (float) n / Fs;
    samples[n] = (byte)(127.0 * sin(2 * 3.14 * F * t) + 127.0);
  }
  sampling_interval = 1000000UL / (F * N);  // microseconds between samples
}

void loop() {
  // Replay the lookup table forever to emulate the heart-sensor signal.
  for (int j = 0; j < N; j++) {
    analogWrite(10, samples[j]);
    delayMicroseconds(sampling_interval);
  }
}
```
Potential improvements to the circuit could involve adding more bandpass filters, and experimenting with gain coefficients.
## iOS App Implementation
The iOS app uses Swift (4.0) as its main programming language. It also uses the HealthKit library, which lets users synchronize their data between several apps on their iPhone/Apple Watch. Our application is able to retrieve the following data from HealthKit:
* Age
* Sex
* Blood Type
* Weight
* Height
* BMI
[](https://postimg.cc/F1q9Nf99)
The user is able to create a workout (only jogging for now) and record its duration and start time. Afterwards, the user can review their heart-rate data as well as an activity index calculated from the sensor. However, the app currently runs on simulated data; the team did not connect the circuit to the phone due to time constraints.
[](https://postimg.cc/v40nPHtp)
To tackle data-breach concerns, we only store data locally or transmit it directly to Apple Health.
The next steps for the application would be adding more variability of the workouts and implementing immediate heart rate and motion monitoring.
## Testing
Our team also developed a testing framework. The following table summarizes the tests we developed:
| Test | Function Verified |
| --- | --- |
| 1. The sensor should not shock the user | Sense User's Pulse, Detect User's Motion |
| 2. The mobile application should run at 60 FPS and store all the data for a duration chosen by the user | Display Data/Information |
| 3. The sensor should not fall off the user after exercises are performed | Connect to User/User's Clothing/User's Sport Equipment |
Test 1 procedure:
1. Review the sensor circuit and design.
2. Ensure that our product follows the IEEE standards for biomedical devices [9].
Test 2 procedure:
1. Compile the application on an iOS device and run it in debug mode to show FPS.
2. Save an exercise for the desired time and check the Health app to ensure that the data is synchronized with our app.
Test 3 procedure:
1. Perform exercises that involve different muscle groups and ensure that the sensors stay on the user.
There are several other tests that could be implemented, such as checking that the sensor does not irritate the skin due to the chemical adhesive, and that data is not transmitted outside the mobile app and sensors.
## Conclusion
This project focuses on a fitness-tracking device that offers better privacy at a cheaper price than the models on the market. The design team focused on creating a device that only measures fitness data, so 'extra' features (such as messaging or listening to music) are not considered. Only design solutions that account for all the elements covered in this document will be considered in future stages. After receiving feedback from the MakeUofT mentors, our team would love to continue researching this project, following the improvements outlined in each section.
## Sources
[1] “The rising popularity of fitness trackers in athletics – The Aragon Outlook.” [https://aragonoutlook.org/2022/02/the-rising-popularity-of-fitness-trackers-in-athletics/#:~:text=Since%20the%20invention%20of%20wearable](https://aragonoutlook.org/2022/02/the-rising-popularity-of-fitness-trackers-in-athletics/#:%7E:text=Since%20the%20invention%20of%20wearable).
[2] M. R. Solomon, “Fashion Or Functionality? Consumers Try To Make Sense Of Wearable Technology,” Forbes. <https://www.forbes.com/sites/michaelrsolomon/2018/06/21/how-will-consumers-make-sense-of-wearable-technology/?sh=244cd5b66e9b>.
[3] C. Brookes-Smith, M. Whelan, and N. Bisal, “A button that tells your boss you’re unhappy: why mental health wearables could be bad news at work,” The Conversation. <https://theconversation.com/a-button-that-tells-your-boss-youre-unhappy-why-mental-health-wearables-could-be-bad-news-at-work-154313>.
[4] “Teardown: Fitbit Flex | Electronics360,” electronics360.globalspec.com. [https://electronics360.globalspec.com/article/3128/teardown-fitbit-flex#:~:text=The%20Fitbit%20Flex%2C%20which%20retails](https://electronics360.globalspec.com/article/3128/teardown-fitbit-flex#:%7E:text=The%20Fitbit%20Flex%2C%20which%20retails).
[5] “What is a Drop Test? | Zebra,” Zebra Technologies. [https://www.zebra.com/us/en/resource-library/faq/mobile-computing/what-is-a-drop-test.html#:~:text=U.S.%20Military%20Standard%2C%20MIL%2DSTD](https://www.zebra.com/us/en/resource-library/faq/mobile-computing/what-is-a-drop-test.html#:%7E:text=U.S.%20Military%20Standard%2C%20MIL%2DSTD).
[6] R. Kher, “Signal Processing Techniques for Removing Noise from ECG Signals,” Journal of
Biomedical Engineering and Research, vol. 1, pp. 1–9, Mar. 2019.
[7] "Ultra-thin Wearables for Real-Time Health
Monitoring" <http://sddec19-20.sd.ece.iastate.edu/docs/Design%20Doc%20v2.pdf>, May 2, 2019.
[8] "A Practical Strain Sensor Based on Ecoflex/Overlapping Graphene/Ecoflex Sandwich Structures for Vocal Fold Vibration and Body Motion Monitoring, " <https://www.frontiersin.org/articles/10.3389/fsens.2021.815209/full>, Jan. 2022.
[9] D. P. Rose, M. E. Ratterman, D. K. Griffin, L. Hou, N. Kelley-Loughnane, R. R. Naik, J. A.
Hagen, I. Papautsky, and J. C. Heikenfeld, “Adhesive RFID Sensor Patch for Monitoring
of Sweat Electrolytes,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 6, pp.
1457–1465, 2015.
|
winning
|
## What is it?
This website is intended to be an interactive and fun way for children and adults to practice recognizing facial expressions.
At least one study found that computer based treatment can significantly help children with high-functioning autism (<https://link.springer.com/article/10.1007%2Fs10803-015-2374-0>).
## Why is it important?
People with autism sometimes have difficulties recognizing facial expressions.
Misinterpretations in facial expression could lead someone to experience social difficulties.
One study found that much of the difficulty came when distinguishing emotional from neutral faces.
In the study adults with autism were significantly more likely to misinterpret happy faces as neutral than neurotypical individuals (<http://journals.sagepub.com/doi/abs/10.1177/1362361314520755>).
As someone who has experience in the field of ABA therapy, I feel something like this could replace boring worksheets.
It does not require expensive hardware or software: if a therapist, parent, or client themselves want to use it, they can!
My biggest hope is that someone will come across this project and be inspired to create other interactive methods of learning for ABA therapy.
## How I built it
The main component of the project is Microsoft's Emotion API. I wrote the important bits in JavaScript and the pretty stuff in HTML & CSS.
## Challenges I ran into
Getting the webcam to work was the hardest part. Luckily I found webcamjs and it had some pretty good documentation and demos.
## What's next for Social Skills
I want to make Social Skills more like a game that rewards the player. With rewards built in it would be easier for ABA therapists to include it as part of a token system.
|
## Inspiration
We love using document-based databases such as MongoDB. We just wish they were better.
More often than not, they aren't built with serverless in mind, have a terrible developer experience, and are unreliable.
## What it does
ezDB is a full-fledged cloud storage solution, providing a database built on top of CockroachDB and a developer dashboard bundled with a smart AI assistant. The database includes an HTTP API layer and a schema system, and the dashboard contains an items explorer and UI for editing tables/databases. As a developer, in one click you can create your database, in another, your table, and with one line of code, you can begin adding items right away. Stuck? The AI assistant, customized to your database's data, will help you!
## How we built it
ezDB uses CockroachDB as its "engine", with all documents stored there. Deno KV is used as a temporary cache to keep track of user metadata and document references. The website front-end is written with Typescript and React. The AI assistant is built on top of ChatGPT. GitHub OAuth is used for login.
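The "documents on top of a SQL engine" pattern can be sketched as follows; this illustration substitutes the stdlib sqlite3 module for CockroachDB so it stays self-contained, and the table layout and function names are hypothetical, not ezDB's actual schema:

```python
import json
import sqlite3
import uuid

# Stand-in for CockroachDB: JSON documents keyed by id inside a SQL table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id TEXT PRIMARY KEY, tbl TEXT, body TEXT)")

def put(table: str, doc: dict) -> str:
    """Store a document and return its generated id."""
    doc_id = str(uuid.uuid4())
    db.execute("INSERT INTO docs VALUES (?, ?, ?)",
               (doc_id, table, json.dumps(doc)))
    return doc_id

def get(table: str, doc_id: str) -> dict:
    """Fetch a document back out of its logical table."""
    row = db.execute("SELECT body FROM docs WHERE id = ? AND tbl = ?",
                     (doc_id, table)).fetchone()
    return json.loads(row[0])

item_id = put("users", {"name": "Ada", "plan": "free"})
print(get("users", item_id)["name"])  # → Ada
```

The HTTP API layer then only has to translate requests into calls like these, which is what makes the one-line "add an item" developer experience possible.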
## Challenges we ran into
The sheer size of the project was a challenge in itself. ezDB aims to be a full cloud solution, providing not just the database but also all the tooling to explore the data within it, create tables, and a generally seamless UI. The caffeine definitely helped...
## Accomplishments that we're proud of
Not only does our solution work, we think it looks great as well. Design is paramount when creating any product, and the website we created is both slick and informative. It feels easy to use ezDB, which was our goal.
## What we learned
We learned a lot about SQL querying (something we were all a lot less familiar with), performance challenges behind databases, and CockroachDB in general.
## What's next for ezDB
There is a lot of polishing left to do before ezDB can go into production. Security needs to be analyzed, proper error handling must be installed, and further features integrated into the developer dashboard. We aim to do all this and convert ezDB into a real solution for other developers like us.
|
## See our live demo!
**On Rinkeby testnet blockchain (recommended):** <https://rinkeby.kelas.dev>
**On xDAI blockchain (Warning: uses real money):** <https://xdai.kelas.dev>
## Check out our narrative StoryMaps here!
**Greenery in your Community:** <https://arcg.is/1vu448>
**Culture & Diversity in choosing your Home:** <https://arcg.is/DH511>
## Inspiration
BlockFund's mission is to build a platform to empower communities with tools and data. We aim to improve outcomes in **community civic engagement and community sustainability.**
*How BlockFund does so:*
1. Democratises community funds through blockchain and voting technology - allowing community members to submit their own project proposals and vote.
2. Highlights the need for community environmental sustainability projects by identifying local areas lacking in tree foliage. Importantly, we educate the community through a narrative in an ArcGIS StoryMap. Image processing and deep learning enable the identification of even the smallest tree's foliage. **#TeamTreesMini**
3. Aids potential new residents and migrants in looking for a home (and community) that fits their unique cultural heritage, beliefs, and diversity needs by outlining demographic breakdowns, religious institutions, and amenities. We also educate on the importance and factors to consider through a narrative in an ArcGIS StoryMap.
**1. Democratises community funds through blockchain and voting technology**
In the US, Homeowner Associations (HOAs) are the main medium through which residents pay community upkeep fees to maintain grounds, master insurance, community utilities, and overall community finances. Financial transparency varies between HOAs, but often they only reflect past fund usage and the choices of a few representative members. We sought a solution that democratises the project-funding process – allowing residents to contribute to and vote for projects that **actually matter** to them. It's easy for community minorities to go unheard, so our voting system helps to account for that. We adjust and increase the voting weight of residents whose vote has not funded a successful project after a few attempts – thus improving the representation of minorities in any community.
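The adjusted-weight idea can be sketched as follows; the boost factor and cap here are illustrative values, not the constants used in our smart contract:

```python
def vote_weight(base: float, losing_streak: int,
                boost: float = 0.25, cap: float = 2.0) -> float:
    """
    Increase a member's vote weight after each funding round in which
    none of their chosen projects won, so persistently outvoted
    minorities gradually gain influence. A cap prevents any single
    member's vote from dominating.
    """
    return min(cap, base * (1 + boost) ** losing_streak)

print(vote_weight(1.0, 0))  # newcomer → 1.0
print(vote_weight(1.0, 3))  # outvoted three rounds running → ~1.95
```

Once a member's chosen project is funded, their streak would reset and their weight returns to the base value.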
**2. Highlights the need for environment sustainability projects #TeamTreesMini**
Additionally, we empower communities to engage in green urban planning. We mimic #TeamTrees on a communal scale. Climate change is an increasingly prevalent topic, and we believe illustrating the dangers in your backyard is an excellent way to encourage local action. Our StoryMap solution maps the green foliage coverage in your neighbourhood. Then, we empower the community in proposing projects on the platform to fund tree planting in each home and in common areas.
**3. Your home, why Cultural Fit and Diversity matters**
After a community profile is made, we also assist new members in choosing a community aligned with their cultural, religious, and diversity interests. When one of our members moved to a neighbourhood largely skewed toward a different racial group, he faced both explicit and subtle racism growing up. Home seekers already take demographics into consideration, and our solution aids them in making a more informed decision from a cultural perspective. It can also support urban planning for community planners. We map diversity index scores, demographic data (generational and race), and religious institutions and amenities, aiding new home seekers in choosing their home.
The proverb "Birds of a feather flock together" describes how those of similar taste congregate in groups. However, in our world today, the importance of diversity and exposing oneself to different opinions and people is crucial to thrive in the workforce.
> Diversity is having a seat at the table. Inclusion is having a voice. And belonging is having that voice be heard. - Liz Fosslien
BlockFund believes that more than just price or transport convenience – diversity, belonging, and inclusion are key concepts in choosing a place to live.
BlockFund is a decentralised autonomous organisation (DAO) that pools community funds, engages the community, and allows transparent voting on projects.
## How we built it
We built and deployed the Decentralized Autonomous Organisation (DAO) smart contract on two EVM-based blockchains: Rinkeby (testnet) and xDAI.
We use AlchemyAPI as a node endpoint for our Rinkeby deployment for better data availability and consistency, while our xDAI deployment uses POA's official community node.
We deployed a React.js frontend for quick delivery of our application, leveraging Axios to asynchronously communicate with external services, OpenAI to provide an intuitive Q&A feature promoting universal proposal comprehension, and Ant.Design/Sal for a modern, sleek, animated user interface.
We use ethers.js to communicate with blockchain nodes, and our app supports two main cryptocurrency wallets:
* Burner wallet (our homebrew in-browser wallet made for easy user onboarding)
* Metamask (a popular web3-enabled wallet for those who want better security)
On top of that, our Community Learning Kits are made using ESRI ArcGIS storyboards for highly visual storytelling of geographic data.
Last but not least, we use Hardhat for smart contract deployment automation.
**Here are some other technologies we used:**
For blockchain:
* Ethereum
* Solidity
* Hardhat
For front-end client:
* React.js (+ Hooks + Router)
* Axios — asynchronous communication with the OpenAI API
* OpenAI GPT-3 — intuitive Q&A feature for universal proposal comprehension
* Sal — sleek animations
* Ant.Design — modern user interface system
For mapping:
* ArcGIS WebMap
* ArcGIS StoryMap
* ArcGIS-Rest-API
* Custom Functions
Datasets:
* 2010 US Census Data
* 2018 US Census Data
* Pima AZ Foliage Data
## Challenges we ran into
Our main challenge was integrating the ArcGIS APIs in a limited timeframe. As it was a new technology for us, we really had to crunch our brainpower.
On top of that, deploying a fully working website for other people to try takes a lot of effort to make sure that all of the integrations are also working beyond localhost.
## Accomplishments that we're proud of
* We have a live website!
* We launched to two different blockchains: xDAI and Rinkeby.
* React state management!
## What we learned
* We learned that working remotely with colleagues from 4 different timezones is challenging.
* Good React state management practices will save a lot of time.
## What's next for BlockFund
* Explore how we can work with local communities to deploy this.
* Run more DAO experiments at a smaller scope (family, small neighborhood, etc.)
|
losing
|
## Inspiration
As college students, we can all relate to having a teacher who was not engaging enough during lectures, or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves to create better lecture sessions and better RateMyProfessor ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor's lesson audio in order to differentiate between the various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor's body language throughout the lesson using motion detection and analysis software. We then store everything in a database and show the data on a dashboard which the professor can access and use to improve their vocal and body engagement with students. This is all in hopes of making the professor more engaging and effective during their lectures through their speech and body language.
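The grading algorithm itself isn't detailed above. As a hedged sketch, a score could blend the detected emotion proportions with a motion-activity level; all weights and emotion labels below are hypothetical, not Morpheus's actual formula:

```python
def lecture_score(emotion_shares, motion_activity, weights=None):
    """Combine emotion proportions (each 0-1, summing to ~1) and a
    motion-activity level (0-1) into a 0-100 engagement score.
    Weights are illustrative placeholders."""
    weights = weights or {"happy": 1.0, "neutral": 0.4, "bored": -0.5, "angry": -0.2}
    emotion_term = sum(weights.get(e, 0.0) * share
                       for e, share in emotion_shares.items())
    raw = 0.7 * emotion_term + 0.3 * motion_activity  # blend speech and body language
    return round(max(0.0, min(1.0, raw)) * 100, 1)

print(lecture_score({"happy": 0.5, "neutral": 0.3, "bored": 0.2}, 0.8))  # → 60.4
```

A real scoring function would be calibrated against labelled lectures rather than hand-picked weights.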
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with pre-built components, I looked into how they worked and edited them for our purpose instead of working from scratch, saving time on styling to a theme. I needed to add a couple of new functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single-lecture summary display, based on our backend database setup. There is also room for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it lets the developer quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset to increase the accuracy of my results and normalized the data so it could be seamlessly visualized as a pie chart, providing an easy integration with the database that connects to our website.
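The normalization step mentioned above, turning raw per-emotion counts into pie-chart-ready proportions, could look roughly like this (a simplified Python sketch of what the MATLAB pipeline produces before writing to the database):

```python
def normalize_for_pie(emotion_counts):
    """Turn raw per-emotion frame counts into proportions summing to 1,
    ready for a pie chart or a database row."""
    total = sum(emotion_counts.values())
    if total == 0:
        return {e: 0.0 for e in emotion_counts}
    return {e: round(n / total, 4) for e, n in emotion_counts.items()}

# Hypothetical counts of frames classified per emotion in one lecture:
shares = normalize_for_pie({"anger": 12, "boredom": 30, "happiness": 18})
print(shares)  # → {'anger': 0.2, 'boredom': 0.5, 'happiness': 0.3}
```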
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would track the teacher's movements and hear the volume without disturbing the flow of class. The main sensors currently utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to switch between a stationary position and a mobile position. It can modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is to scan in using their school ID, then either check his lecture data or start the lecture. Overall, the professor is able to see if the device is tracking his movements and volume throughout the lecture and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand through Solidworks and Figma (using the latter for the first time). I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and I looked back at my UI/UX notes from the Google certification course on Coursera that I'm taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I’m confused as to how to implement a certain feature I want to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn’t be solved with this method as it was logic specific to our software. Fortunately, these problems just needed time and a lot of debugging with some help from peers, existing resources, and since React is javascript based, I was able to use past experiences with JS and django to help despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run in a smooth way and interact in the correct manner. Often I ended up in a dependency hell, and had to rethink the architecture of the whole project to not over engineer it without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data to train my model and finding a way to quantify the fluctuations that result in different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how we could utilize our strengths in different aspects of the project to smoothly operate. For example, Riki is a mechanical engineering major with little coding experience, but we were able to allow his strengths in that area to create us a visual model of our product and create a UI design interface using Figma. Sovannratana is a freshman that had his first hackathon experience and was able to utilize his experience to create a website for the first time. Braulio and Gisueppe were the most experienced in the team but we were all able to help each other not just in the coding aspect, with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, add motion tracking data feedback to the professor to get a general idea of how they should be changing their gestures.
We would also like to integrate a student portal and gather data on their performance and help the teacher better understand where the students need most help with.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
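As an illustration of the order-extraction step, here is a deliberately naive keyword-based stand-in with a made-up menu; the real app uses an AI model for this, so the function, menu, and prices below are all hypothetical:

```python
import re

# Toy menu for illustration only; not Harvard Burger's actual menu.
MENU = {"burger": 4.99, "fries": 2.49, "shake": 3.25}
SIZES = {"small", "medium", "large"}

def parse_order(transcript):
    """Very rough keyword-based sketch of extracting items (with a size
    modification) from a transcribed utterance."""
    words = re.findall(r"[a-z]+", transcript.lower())
    order, pending_size = [], None
    for w in words:
        if w in SIZES:
            pending_size = w          # size applies to the next item mentioned
        elif w in MENU:
            order.append({"item": w,
                          "size": pending_size or "medium",
                          "price": MENU[w]})
            pending_size = None
    return order

print(parse_order("I'd like a large burger and fries please"))
```

An LLM-based extractor handles paraphrases and modifications far better than this keyword pass, which is exactly why the app delegates this step to AI.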
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
## Inspiration
After reviewing the different hardware options, the dust sensor stood out for its versatility and struck us as exotic. Dust particulates in the air we breathe are an ever-present threat that is too often overlooked, and the importance of raising awareness of this issue became apparent. But retaining interest in an elusive topic would require an innovative form of expression, which left us stumped. After much deliberation, we realized that many of us have a subconscious affection for pets and their demanding needs. Applying this concept, Pollute-A-Pet approaches a difficult topic with care and concern.
## What it does
Pollute-A-Pet tracks the particulates in the air a person breathes and reflects them in the behavior of adorable online pets. With a variety of pets, your concern may grow as you see the suffering that polluted air causes them, no matter your taste in companions.
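The particulate-to-behavior mapping could be sketched like this; the thresholds are hypothetical, loosely modeled on common PM2.5 air-quality bands rather than our actual tuning, and the state names are placeholders for the animations:

```python
def pet_state(pm25):
    """Map a PM2.5 reading (µg/m³) to the animation state a pet plays.
    Cutoffs are illustrative, roughly following standard AQI bands."""
    if pm25 < 12:
        return "happy"      # clean air: pet plays
    elif pm25 < 35:
        return "sneezing"   # moderate: mild discomfort
    elif pm25 < 55:
        return "coughing"   # unhealthy for sensitive groups
    return "sick"           # pet visibly suffers

print([pet_state(v) for v in (5, 20, 40, 80)])  # → ['happy', 'sneezing', 'coughing', 'sick']
```

In the real pipeline these readings arrive via the Arduino-to-Firebase relay, and the website picks the matching GIF.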
## How we built it
Beginning in two groups, half of us focused on connecting the dust sensor to an Arduino, using Python to relay its readings over Bluetooth to Firebase, and then reading and updating Firebase from our website using JavaScript. The other group first created GIFs of our companions in Blender and Adobe, then built the website with HTML and data-driven behaviors, in JavaScript, that dictate the pets' actions.
## Challenges we ran into
The dust sensor was a novel experience for us, and we researched its specifications before any work began. Firebase communication also became stubborn throughout development, as JavaScript was counterintuitive compared to the object-oriented languages most of us were used to. Not only was animating more tedious than expected, transparent GIFs are also incredibly difficult to make in Blender. In the final moments, our team also ran into problems uploading our videos, narrowly avoiding disaster.
## Accomplishments that we're proud of
All the animations of the virtual pets we made were hand-drawn over the course of the competition. This was also our first time working with the feather esp32 v2, and we are proud of overcoming the initial difficulties we had with the hardware.
## What we learned
While we had previous experience with Arduino, we had not previously known how to use a feather esp32 v2. We also used skills we had only learned in beginner courses with detailed instructions, so while we may not have “learned” these things during the hackathon, this was the first time we had to do these things in a practical setting.
## What's next for Dustables
When it comes to convincing people to use a product such as this, it must be designed to be both visually appealing and not physically cumbersome. This cannot be said for our prototype for the hardware element of our project, which focused completely on functionality. Making this more user-friendly would be a top priority for team Dustables. We also have improvements to functionality that we could make, such as using Wi-Fi instead of Bluetooth for the sensors, which would allow the user greater freedom in using the device. Finally, more pets and different types of sensors would allow for more comprehensive readings and an enhanced user experience.
|
winning
|
## Inspiration
My friend Pablo used to throw me the ball when playing beer pong. He moved away, so I replaced him with a much better robot.
## What it does
It tosses you a ping pong ball right when you need it; you just have to show it your hand.
## How we built it
With love, sweat, tears, and lots of energy drinks.
## Challenges we ran into
Getting OpenCV and Arduino to communicate.
## Accomplishments that we're proud of
Getting the Arduino to communicate with Python.
## What we learned
OpenCV
## What's next for P.A.B.L.O (pong assistant beats losers only)
Use hand tracking to track the cups and actually play and win the game.
|
## Inspiration
Virtually every classroom has a projector, whiteboard, and sticky notes. With OpenCV and Python being more accessible than ever, we wanted to create an augmented reality entertainment platform that any enthusiast could learn from and bring to their own place of learning. StickyAR is just that, with a super simple interface that can anyone can use to produce any tile-based Numpy game. Our first offering is *StickyJump* , a 2D platformer whose layout can be changed on the fly by placement of sticky notes. We want to demystify computer science in the classroom, and letting students come face to face with what's possible is a task we were happy to take on.
## What it does
StickyAR works by using OpenCV's contour recognition to detect the borders of the projector image and the positions of human-placed sticky notes. We then use a matrix transformation to align the sticky-note positions with the projector image, so that our character can appear to stand on top of the sticky notes. Finally, we have code for a simple platformer that uses the sticky notes as the platforms our character runs on, jumps between, and interacts with!
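The matrix transformation step amounts to applying a 3×3 homography to each detected sticky-note center, mapping camera coordinates into projector coordinates. A pure-Python sketch (in practice the matrix would come from OpenCV, e.g. `cv2.getPerspectiveTransform` on the four detected projector corners):

```python
def apply_homography(H, point):
    """Map a camera-space point into projector space with a 3x3
    homography H given as row-major nested lists."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]   # projective scale factor
    px = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    py = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return px, py

# Hypothetical matrix: identity plus translation, shifting every detected
# note 10 px right and 5 px down in projector space.
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
print(apply_homography(H, (100, 200)))  # → (110.0, 205.0)
```

A real calibration matrix also corrects for perspective skew (non-zero bottom row), which the same function handles via the divide-by-`w` step.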
## How we built it
We split our team of four into two sections, one half that works on developing the OpenCV/Data Transfer part of the project and the other half who work on the game side of the project. It was truly a team effort.
## Challenges we ran into
The biggest challenge we ran into was that a lot of our group members are not programmers by major. We also had a major disaster with Git that almost killed half of our project; luckily, some very gracious mentors came out and helped us get things sorted out! We also first attempted the game half of the project in Unity, which ended up being too much of a beast to handle.
## Accomplishments that we're proud of
That we got it done! It was pretty amazing to see the little square pop up on the screen for the first time on top of the spawning block. As we think more deeply about the project, we're also excited about how extensible the platform is for future games and types of computer vision features.
## What we learned
A whole ton about python, OpenCV, and how much we regret spending half our time working with Unity. Python's general inheritance structure came very much in handy, and its networking abilities were key for us when Unity was still on the table. Our decision to switch over completely to Python for both OpenCV and the game engine felt like a loss of a lot of our work at the time, but we're very happy with the end-product.
## What's next for StickyAR
StickyAR was designed to be as extensible as possible, so any future game that has colored tiles as elements can take advantage of the computer vision interface we produced. We've already thought through the next game we want to make - *StickyJam*. It will be a music creation app that sends a line across the screen and produces notes when it strikes the sticky notes, allowing the player to vary their rhythm by placement and color.
|
## Inspiration
One morning, my roommate and I got up late. We were rushing through our morning routine: preparing meals, brushing teeth, making coffee, and so on. Because of the limited time, we were frustrated that we still forgot many things: we didn't wear enough clothes because we hadn't checked the temperature, and I forgot to bring my resume for an important career fair that day. Afterwards, we found that many people face difficulties in the morning: some are in a bad mood due to the early wake-up time; some want a convenient way to listen to music or a podcast; some have trouble estimating travel time to their workplace. All of these difficulties inspired us to create JING, a smart mirror that addresses users' difficulties in the morning.
## What it does
JING is a smart mirror focused on improving life quality and the daily routine. JING includes several major functions and toolkits: weather, clock, events, news, a habit tracker, a sleep & productivity tracker, and estimated travel time to work. All of this information is displayed at the bottom of the mirror, helping you prepare for a fresh new day. To ensure a personalized experience, it uses a block-based design that gives users the flexibility to choose their own blocks. Our smart mirror also incorporates camera-based pose control powered by Google Cloud, letting users switch or unfold blocks with simple gestures. We also made a welcome page with selected positive phrases and greetings to help our users start their day feeling more positive and prepared.
## How we built it
To build our first JING prototype, we purchased a 21-inch monitor and an acrylic sheet. We covered the monitor with the acrylic sheet to achieve the reflective effect that forms our first prototype mirror. For the software, we built all the frameworks for our smart mirror from scratch. We used HTML to make the user interface, with Python and JavaScript code supporting the webpage and pulling in APIs from different servers, including but not limited to weather, date, and news. In addition, we implemented real-time motion sensing based on Google Cloud PoseNet to provide engaging user interaction.
## Challenges we ran into
As a team with little hackathon experience, one of our biggest challenges was organizing all the work into the limited time available, since all of our team members prefer long-term projects with several days of preparation. We therefore spent several hours brainstorming and creating a workable plan. We were also not very familiar with some of the required coding skills, so we spent a lot of time learning how to bring HTML, APIs, pose control, and all the other functions together.
## Accomplishments that we're proud of
The accomplishment we're most proud of is using our hardware and software skills to build a real prototype with the basic functions we designed. Because we were combining different types of technology, we ran into many difficulties along the way, so we are really proud of finishing the entire prototype, and the mirror's functions could be genuinely useful in the bathroom during the morning. From ideation to finished prototype, we put a lot of effort into making our idealized product come true.
## What we learned
This Cal Hackathon event taught us a lot. First, time management is really important for building a product efficiently. We also learned many coding skills and gained advanced technical knowledge by exploring APIs and resources from Google Cloud, Microsoft, Facebook, and other companies. Finally, we learned how to use advanced technology to solve pressing problems in our lives.
## What's next for JING
For the future of JING, the first thing we want to do is refine our prototype by testing the product and improving its functions, making it more useful and stable. After that, we plan to grow the project team with more skilled software engineers and a product manager so we can satisfy more customers. We also plan to think about manufacturing the mirror and reducing its cost, making it more accessible to ordinary families and positively influencing people's mornings and evenings.
|
winning
|
## Inspiration
Inspired by American Sniper
## What it does
Make a guess about where the evil sniper is based on callout hints and snipe the sniper; you have 3 chances.
## How I built it
Using Callout animation and some geometry
## Challenges I ran into
1) Lack of knowledge of 3D algorithms to turn the landscape into 3D
2) It took a long time to figure out the game concept
## Accomplishments that I'm proud of
Smart use of a high-resolution callout image to make the game seem very real and attractive
## What I learned
The correct use of callouts, drag and drop, and loading multiple files in the JavaFX MediaPlayer
## What's next for Snipe the Sniper
Show it to Tangelo and discuss with the expert whether we can make a 3D landscape that can be rotated, just like in Google Maps.
|
## Inspiration
One day, while surfing online, we found some cool videos of 3D tracking with an eye. We thought taking those concepts and bringing them to a mobile game would be a wonderful combination of the two, bringing more physical activity into a game.
## What it does
This game provides a source of endless activity and encourages people to be active with their body and eyes, providing a fuller health experience. Travelling down a slippery slope, users can have a good time dealing with obstacles and enjoy the ride.
## How we built it
This app was built on Unity, a popular 3D software that is used by popular app developers and companies. Some renders were also done on Blender, and open computer vision on Python and Unity were explored.
## Challenges we ran into:
* Learning Unity in general was a challenge; a number of strange issues came up: large installs, learning the syntax (e.g. meshes, font assets)
* Source control with Github was more challenging with 3D renderings. The files were more dependent on one another making it difficult for multiple users at a time.
* Computer vision libraries worked well on Python but did not work on Unity
## Accomplishments that we're proud of
* The render/app looks amazing and provides users with a good experience
* Really cool idea with the slide and the randomized colours
## What we learned
* App development can be a challenge
* Great UI/UX makes for an impressive project
## What's next for Slippery Slope
* Exploring the iOS mobile development is a thing that might be done to the app in the future
|
## Inspiration
After being overwhelmed by the volume of financial education tools available, we discovered that the majority of products are aimed at institutions or are expensive. We decided there needed to be an easy approach to learning about stocks in a more casual environment. Intrigued by the simplicity of Tinder's yes-or-no swiping mechanics, we decided to combine the two ideas to create Tickr!
## What it does
Tickr is a stock screening tool designed to help beginner retail investors discover their next trade! Using an intuitive yes-or-no discovery system built on swiping mechanics, Tickr is the next Tinder for stocks. For a more in-depth video demo, see our [original screen recorded demo video!](https://youtu.be/dU6rF8vymKE)
## How we built it
Our team created this web app using a Node and Express back end paired with a React front end. The back end used three linked Supabase tables to host authenticated user information and static information about stocks from the New York Stock Exchange and NASDAQ. We also used the [Finnhub API](https://finnhub.io/) to get real-time metrics about the stocks we were showing our users.
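A minimal sketch of fetching a real-time quote from the Finnhub API (shown in Python for brevity; our back end is Node/Express). The field names follow Finnhub's documented `/quote` response (`c` = current price, `pc` = previous close); the API key and the sample response are placeholders:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen  # used for a live call, not exercised here

def quote_url(symbol, token):
    """Build a Finnhub /quote request URL."""
    return "https://finnhub.io/api/v1/quote?" + urlencode(
        {"symbol": symbol, "token": token})

def parse_quote(raw_json):
    """Extract the current price and day change from a /quote response."""
    q = json.loads(raw_json)
    return {"price": q["c"], "change": round(q["c"] - q["pc"], 2)}

# A live call would be: urlopen(quote_url("AAPL", API_KEY)).read()
sample = '{"c": 150.25, "h": 152.0, "l": 149.5, "o": 151.0, "pc": 148.0}'
print(parse_quote(sample))  # → {'price': 150.25, 'change': 2.25}
```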
## Challenges we ran into
Our biggest challenge was setting the scope into something that our team could complete in a weekend. We hadn't used Node and Express in a long time, so getting comfortable with our stack again took more time than we thought.
We were also completely new to Supabase and decided to try it out because it sounded really interesting. While Supabase turned out to be incredibly useful and user-friendly, its learning curve also took a bit more time than we expected.
## Accomplishments that we're proud of
The two accomplishments we are most proud of are our finished UI and successful integration of the Finnhub API. Drawing inspiration from Tinder, we were able to recreate a similar UI/UX design with minimal help from pre-existing libraries. Further, we were able to design our backend to make seamless API calls to fetch relevant data for our application.
## What we learned
During this project we learned a lot about the power of friendship and anime. Some of us learned what a market cap was and how to write a viable business proposal while others learned more about full stack development and how to host a database on Supabase.
Overall it was a very fun project and we're really glad we were able to get our MVP done 😁✌️
## What's next for Tickr
Our next goal for Tickr is to finish the aggregate news feed feature. This would entail a news feed of all stocks swiped on, along with notifications. This would help improve our north star metrics of time spent on the platform and daily active users!
|
losing
|
## Inspiration
We had a chat with the engineers at the Jump booth and brainstormed some ideas, particularly around building an explorer for Wormhole cross-chain events. Additionally, since Wormhole limits the amount of value allowed to be bridged in a given time (an additional protection added after the hack earlier this year), we wanted to create an estimator for the likelihood that a transaction will succeed. This is especially important for cross-chain arbitrage, since transactions that are not successfully approved can become irreversibly locked up in Wormhole for 24 hours.
## What it does
Our project aims to provide greater insight into underlying Wormhole events and help users estimate the likelihood of their transaction succeeding. Using a spy relay and the public Guardian APIs, we created a system that displays both a real-time tally of Guardian signatures and a success estimator that takes in a chain ID and notional value. The real-time graph displays a rolling-window live tally of the Guardians that have signed the past 1000 VAAs. This way, users can easily visualize network reliability and see that guardians with little to no recent activity are probably offline. By inputting a notional amount and chain ID, users can also estimate whether their transaction will be successful based on guardian availability and the single-transaction notional amount.
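The rolling-window tally can be sketched with a bounded deque plus a counter (illustrative Python; the real system feeds this from the spy relay over RPC, and guardian names here are placeholders):

```python
from collections import Counter, deque

class RollingSignatureTally:
    """Tally which guardians signed the most recent N VAAs,
    mirroring the 1000-VAA rolling window described above."""

    def __init__(self, window=1000):
        self.window = deque(maxlen=window)  # each entry: set of signer ids
        self.counts = Counter()

    def record_vaa(self, signers):
        if len(self.window) == self.window.maxlen:
            self.counts.subtract(self.window[0])  # oldest VAA falls out
        self.window.append(set(signers))          # assumes unique signers per VAA
        self.counts.update(signers)

    def tally(self):
        return dict(self.counts)

tally = RollingSignatureTally(window=3)
tally.record_vaa(["guardian-0", "guardian-4"])
tally.record_vaa(["guardian-0"])
print(tally.tally())  # → {'guardian-0': 2, 'guardian-4': 1}
```

The frontend would poll a REST endpoint backed by a structure like this and chart the counts per guardian.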
## How we built it
We started by building an MVP for the transaction success estimator and concurrently worked on getting guardian data feeds directly from the gossip network. We ran a spy relay in a Digital Ocean droplet, which we used to provide a live VAA broadcast feed to our backend via RPC. Our backend then provides the frontend, which we built in React, with a REST endpoint to query the latest VAA signature counts on a rolling window basis. The information required for our transaction success estimator was pulled in from the public endpoints from available guardians.
## Challenges we ran into
Cross-chain Wormhole transactions must be signed off by 13 of 19 "guardians" to be considered valid. Thus, we needed information from all 19 guardians to reliably predict whether or not cross-chain transactions will fail. However, only 7 of the 19 guardians provide public APIs, so the rest needed to be collected directly from the Gossip network that the guardians broadcast to. This posed a significant challenge for us, since the current "spy" relay implementation supports only listening for VAAs (Verified Action Approvals) and not the Heartbeat events that we were interested in. We compromised by using only the publicly available APIs for the transaction success estimator and supplementing it with live feeds from the VAA spy relay.
## Future
We managed to get something useful (maybe?) out there for everyone using the Wormhole bridge. We also managed to sneak in another hack overnight, even while running into issues with the lack of documentation on the Wormhole and guardian code.
In the future, we plan on adding additional functionality to the network if we see people use the mvp. There were things like mempool monitoring and gossip networks that could be done if there was more time.
|
## Inspiration
With a prior interest in crypto and DeFi, we were attracted to Uniswap V3's simple yet brilliant automated market maker. The white papers were tantalizing and we had several eureka moments when poring over them. However, we realized that the concepts were beyond the reach of most casual users who would be interested in using Uniswap. Consequently, we decided to build an algorithm that allowed Uniswap users to take a more hands-on, less theoretical approach to understanding the nuances of the marketplace while mitigating risk, so they would be better suited to make decisions that aligned with their financial goals.
## What it does
This project is intended to help new Uniswap users understand the novel processes that the financial protocol (Uniswap) operates upon, specifically with regards to its automated market maker. Taking an input of a hypothetical liquidity mining position in a liquidity pool of the user's choice, our predictive model uses past transactions within that liquidity pool to project the performance of the specified liquidity mining position over time - thus allowing Uniswap users to make better informed decisions regarding which liquidity pools and what currencies and what quantities to invest in.
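The core position math such a simulation has to re-implement comes from the V3 whitepaper: a position's token holdings depend on where the current price sits relative to its chosen range. A sketch of those standard formulas (function and variable names are ours):

```python
import math

def position_amounts(L, p, pa, pb):
    """Token amounts held by a Uniswap V3 position with liquidity L
    over the price range [pa, pb] at current price p."""
    sa, sb, sp = math.sqrt(pa), math.sqrt(pb), math.sqrt(p)
    if p <= pa:   # price below range: position is entirely token0
        return L * (1 / sa - 1 / sb), 0.0
    if p >= pb:   # price above range: position is entirely token1
        return 0.0, L * (sb - sa)
    # price inside range: a mix of both tokens
    return L * (1 / sp - 1 / sb), L * (sp - sa)
```

Replaying past swaps through equations like these is what lets the model project how a hypothetical position's composition drifts over time.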
## How we built it
We divided the complete task into four main subproblems: the simulation model and rest of the backend, an intuitive UI with a frontend that emulated Uniswap's, the graphic design, and - most importantly - successfully integrating these three elements together. Each of these tasks took the entirety of the contest window to complete to a degree we were satisfied with given the time constraints.
## Challenges we ran into and accomplishments we're proud of
Connecting all the different libraries, frameworks, and languages we used was by far the biggest and most frequent challenge we faced. This included running Python and NumPy through AWS, calling AWS with React and Node.js, making GraphQL queries to Uniswap V3's API, among many other tasks. Of course, re-implementing many of the key features Uniswap runs on to better our simulation was another major hurdle and took several hours of debugging. We had to return to the drawing board countless times to ensure we were correctly emulating the automated market maker as closely as possible. Another difficult task was making our UI as easy to use as possible for users. Notably, this meant correcting the inputs since there are many constraints for what position a user may actually take in a liquidity pool. Ultimately, in spite of the many technical hurdles, we are proud of what we have accomplished and believe our product is ready to be released pending a few final touches.
## What we learned
Every aspect of this project introduced us to new concepts, or new implementations of concepts we had picked up previously. While we had dealt with similar subtasks in the past, this was our first time building something of this scope from the ground-up.
|
## Inspiration
With the constant evolution of the internet and web technologies, web developers are spread thinner than ever. Accessibility is often overlooked. We wanted to make a tool which would provide an accessibility layer on top of webpages -- providing users with accessibility automatically without developers having to spend painstaking hours on such a thing.
## What it does
The project is a Google Chrome Extension which provides an accessibility layer on top of web pages in two important ways
* Automatically generate and add alt tags to images without them
* Automatically make texts contrast with their backgrounds with a contrast ratio of at least 4.5:1
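The 4.5:1 target comes from WCAG's contrast-ratio formula, which our extension's color adjustment has to satisfy. A minimal sketch of that standard computation (assuming 0–255 sRGB channel inputs):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance from 0-255 sRGB channels."""
    def lin(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), lighter color on top; max is 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Any text/background pair scoring below 4.5 gets its text color pushed lighter or darker until the ratio passes.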
## How we built it
The project is a Google Chrome Extension with a server-less architecture. We utilized AWS's API Gateway to communicate with our AWS Lambda computational unit. The Lambda function is responsible for generating the alt text for the image. This utilizes Azure's Computer Vision API to do so. Although computationally expensive, our software is **heavily optimized** and implements a 2-layer cache system, with the 1st layer as warm Lambda storage and the 2nd layer as AWS DynamoDB. In this way, once a page's alt texts have been generated and cached once, any users of the page will effectively instantly have access to them. Very little data is passed between the client & API Gateway (only a URL and a dictionary of alt texts), meaning that alt texts will (generally) be available before the images actually load. Thus, it also benefits people with limited internet access. Lastly, the automatic text contrasting is done in the frontend by "hacking" the mathematics of RGB coloring.
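The 2-layer lookup order described above can be sketched as follows — a simplified, dependency-free illustration where the layer-2 callables stand in for DynamoDB reads/writes (the real Lambda would use `boto3` there):

```python
# Module-level dict: warm in-memory cache that persists across invocations
# of the same Lambda instance (layer 1).
WARM_CACHE = {}

def get_alt_texts(page_url, dynamo_get, dynamo_put, generate):
    """Layer 1: warm Lambda memory. Layer 2: DynamoDB (stubbed here as
    dynamo_get/dynamo_put callables). Cold path: run the vision API."""
    if page_url in WARM_CACHE:                 # layer-1 hit
        return WARM_CACHE[page_url]
    cached = dynamo_get(page_url)              # layer-2 hit
    if cached is not None:
        WARM_CACHE[page_url] = cached
        return cached
    alts = generate(page_url)                  # cold path: vision API call
    dynamo_put(page_url, alts)                 # populate layer 2
    WARM_CACHE[page_url] = alts                # populate layer 1
    return alts
```

Once any user has visited a page, subsequent users hit one of the two cache layers and never pay the vision-API latency again.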
We also built a static site [hosted by Amazon S3](https://bad-website.s3.amazonaws.com/index.html) for testing purposes.
## Challenges we ran into
Web accessibility is an enormously difficult problem. There is a reason a tool like this doesn't exist already. The standards aren't clear cut from a programmatic perspective, making automated testing difficult. Specifically, automatically *fixing* the issues is a huge undertaking. Thankfully, with the help of our Microsoft Mentor -- Morgan, we discovered [Accessibility Insights](https://accessibilityinsights.io). We used this to test our extension's components to ensure they were working correctly.
Technical Challenges
* utilizing CORS within a Chrome Extension; it requires a background thread, message sharing with that thread, and explicit permissions to make such requests
* identifying the background color of a generic HTML tag; we ended up traversing the DOM upwards until we could safely identify a background color
## Accomplishments that we're proud of
* Integrating many different complex technologies together including 4 AWS Services, Microsoft's Computer Vision Azure API, and a Google Chrome Extension
* Implementing a 2-layered caching system for optimized usage of our extension
* Tackling an extremely important issue and being able to create a usable product in 24 hours
## What we learned
* By using [Accessibility Insights](https://accessibilityinsights.io), we learned many websites don't comply with the Web Content Accessibility Guidelines (WCAG); this was shocking and motivated us to help make a difference
* A lot of amazing tech tools exist nowadays, but it's still up to us to use them and make an impact
* It's quite motivating and fun to hack Assistive Tech
## What's next for Extend a H.A.N.D.
1. We plan to implement more features in order to make websites automatically more compliant with the WCA Standards.
2. We'd like to build extensions for other browsers.
3. We'd like to make a developer tool so that developers can easily and automatically make their sites accessible in the future.
|
winning
|
## Inspiration
Ever felt a shock wondering where your monthly salary or pocket money went by the end of the month? When did you spend it? Where did you spend all of it? Why did you spend it? And how do you save and avoid making the same mistake again?
There has been endless progress and technical advancement in how we handle day-to-day financial dealings, be it through Apple Pay, PayPal, or now cryptocurrencies, as well as in financial instruments that are crucial to creating one's wealth, such as investing in stocks, bonds, etc. But all of these amazing tools cater to a very small demographic. 68% of the world population still stands to be financially illiterate. Most schools do not discuss personal finance in their curriculum. To enable these high-end technologies to reach a larger audience, we need to work at the ground level and attack the fundamental blocks around finance in people's mindsets.
We want to use technology to elevate the world's consciousness around their personal finance.
## What it does
Where’s My Money is an app that takes in financial jargon and simplifies it for you, giving you a taste of managing your money without suffering real losses, so that you can make wiser decisions in real life.
It is a financial literacy app that teaches you A-Z about managing and creating wealth in a layman's, gamified manner. You start as a person who earns $1000 monthly; as you complete each module, you are hit with a set of questions that make you ponder how you would deal with different situations. After completing each module you are rewarded with some bonus money, which can then be used in our stock exchange simulator. You complete courses, earn money, and build virtual wealth.
Each quiz captures data about your overall orientation toward finance: does it incline more toward saving or more toward spending?
## How we built it
The project was not simple at all. Keeping in mind the various components of the app, we first created a fundamental architecture for how our app would function - shorturl.at/cdlxE
Then we took it to Figma where we brainstormed and completed design flows for our prototype -
Then we started working on the App-
**Frontend**
* React.
**Backend**
* Authentication: Auth0
* Storing user-data (courses completed by user, info of stocks purchased etc.): Firebase
* Stock Price Changes: Based on real-time prices using a free-tier API (Alpha Vantage/Polygon)
## Challenges we ran into
The time constraint was our biggest challenge. The project was very backend-heavy and it was a big challenge to incorporate all the backend logic.
## What we learned
We researched about the condition of financial literacy in people, which helped us to make a better product. We also learnt about APIs like Alpha Vantage that provide real-time stock data.
## What's next for Where’s my money?
We are looking to complete the backend of the app to make it fully functional. Also looking forward to adding more course modules for more topics like crypto, taxes, insurance, mutual funds etc.
Domain Name: learnfinancewitheaseusing.tech (Learn-finance-with-ease-using-tech)
|
## Inspiration
We wanted to make an app that helped people to be more environmentally conscious. After we thought about it, we realised that most people are not because they are too lazy to worry about recycling, turning off unused lights, or turning off a faucet when it's not in use. We figured that if people saw how much money they lose by being lazy, they might start to change their habits. We took this idea and added a full visualisation aspect of the app to make a complete budgeting app.
## What it does
Our app allows users to log in, and it then retrieves user data to visually represent the most interesting data from that user's financial history, as well as their utilities spending.
## How we built it
We used HTML, CSS, and JavaScript as our front-end, and then used Arduino to get light sensor data, and Nessie to retrieve user financial data.
## Challenges we ran into
To seamlessly integrate our multiple technologies, and to format our graphs in a way that is both informational and visually attractive.
## Accomplishments that we're proud of
We are proud that we have a finished product that does exactly what we wanted it to do and are proud to demo.
## What we learned
We learned about making graphs using JavaScript, as well as using Bootstrap in websites to create pleasing and mobile-friendly interfaces. We also learned about integrating hardware and software into one app.
## What's next for Budge
We want to continue to add more graphs and tables to provide more information about your bank account data, and use AI to make our app give personal recommendations catered to an individual's personal spending.
|
## Inspiration
One. Word. ~Frictionless~
## What it does
journ-o provides a ~frictionless~ journaling experience. Users can hop on to their account, make a journal cataloging their events for the day, then leave in a matter of seconds. Years later, they'll be thankful for their experience using journ-o.
## How we built it
journ-o uses a React frontend paired with a Firebase realtime database backend.
## Challenges we ran into
Our project was so ~frictionless~ we kept slipping out of our chairs.
## Accomplishments that we're proud of
Overall, we're proud of everything we were able to do. We had a solid question structure, a good database structure, the actual webpage looks nice. Like everything y'know? -Elliot
## What we learned
Efficient React webdev. Web-Firebase integration. The power of friendship.
## What's next for journ-o
If you want a smaller list, ask what's *not* next for journ-o.
## Discord
Team 11 - cracker#4700, sspecZ#4351, enzon2001#6123
|
winning
|
## Inspiration
Disasters can strike quickly and without notice. Most people are unprepared for situations such as earthquakes which occur with alarming frequency along the Pacific Rim. When wifi and cell service are unavailable, medical aid, food, water, and shelter struggle to be shared as the community can only communicate and connect in person.
## What it does
In disaster situations, Rebuild allows users to share and receive information about nearby resources and dangers by placing icons on a map. Rebuild uses a mesh network to automatically transfer data between nearby devices, ensuring that users have the most recent information in their area. What makes Rebuild unique and effective is that it does not require WiFi to share and receive data.
## How we built it
We built it with Android and the Nearby Connections API, a built-in Android library which manages the discovery of nearby devices and the peer-to-peer connections between them, without needing an internet connection.
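The mesh behavior boils down to a flood-style relay: each device keeps every marker update once and rebroadcasts it to its peers, dropping duplicates. A language-agnostic sketch of that logic (our app implements this in Android/Java; names here are illustrative):

```python
def handle_message(msg, seen_ids, store, forward):
    """Flood-style mesh relay: accept each marker update once,
    apply it to the local map, and rebroadcast to connected peers."""
    if msg["id"] in seen_ids:
        return False          # duplicate — already stored and forwarded
    seen_ids.add(msg["id"])
    store(msg)                # update the local map with the new icon
    forward(msg)              # rebroadcast via Nearby Connections endpoints
    return True
```

Because every device applies the same rule, updates eventually propagate across the whole mesh while duplicate traffic dies out quickly.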
## Challenges we ran into
The main challenges we faced while making this project were updating the device location so that the markers are placed accurately, and establishing a reliable mesh-network connection between the app users. While these features still aren't perfect, after a long night we managed to reach something we are satisfied with.
## Accomplishments that we're proud of
WORKING MESH NETWORK! (If you heard the scream of joy last night I apologize.)
## What we learned
## What's next for Rebuild
|
## Inspiration
The only thing worse than no WiFi is slow WiFi. Many of us have experienced the frustrations of terrible internet connections. We have too, so we set out to create a tool to help users find the best place around to connect.
## What it does
Our app runs in the background (completely quietly) and maps out the WiFi landscape of the world. That information is sent to a central server and combined with location and WiFi data from all users of the app. The server then processes the data and generates heatmaps of WiFi signal strength to send back to the end user. Because of our architecture these heatmaps are real time, updating dynamically as the WiFi strength changes.
## How we built it
We split up the work into three parts: mobile, cloud, and visualization and had each member of our team work on a part. For the mobile component, we quickly built an MVP iOS app that could collect and push data to the server and iteratively improved our locationing methodology. For the cloud, we set up a Firebase Realtime Database (NoSQL) to allow for large amounts of data throughput. For the visualization, we took the points we received and used gaussian kernel density estimation to generate interpretable heatmaps.
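The heatmap step can be illustrated with a pared-down, pure-Python version of Gaussian kernel density estimation — each WiFi sample contributes a Gaussian bump, weighted by signal strength, to every grid cell (a sketch of the idea; our real pipeline used library implementations):

```python
import math

def kde_heatmap(points, grid, bandwidth=10.0):
    """Evaluate a 2-D Gaussian kernel density estimate of WiFi samples
    (weighted by signal strength) at each grid cell.

    points: iterable of (x, y, strength); grid: iterable of (x, y) cells."""
    out = []
    for gx, gy in grid:
        density = 0.0
        for x, y, strength in points:
            d2 = (gx - x) ** 2 + (gy - y) ** 2
            density += strength * math.exp(-d2 / (2 * bandwidth ** 2))
        out.append(density)
    return out
```

The bandwidth controls how far each sample's influence spreads, which is the main knob for how smooth the rendered heatmap looks.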
## Challenges we ran into
Engineering an algorithm to determine the location of the client was significantly more difficult than expected. Initially, we wanted to use accelerometer data, and use GPS as well to calibrate the data, but excessive noise in the resulting data prevented us from using it effectively and from proceeding with this approach. We ran into even more issues when we used a device with less accurate sensors like an Android phone.
## Accomplishments that we're proud of
We are particularly proud of getting accurate paths travelled from the phones. We initially tried to use double integrator dynamics on top of oriented accelerometer readings, correcting for errors with GPS. However, we quickly realized that without prohibitively expensive filtering, the data from the accelerometer was useless and that GPS did not function well indoors due to the walls affecting the time-of-flight measurements. Instead, we used a built in pedometer framework to estimate distance travelled (this used a lot of advanced on-device signal processing) and combined this with the average heading (calculated using a magnetometer) to get meter-level accurate distances.
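The dead-reckoning step described above — pedometer distance plus magnetometer heading — reduces to basic trigonometry per step. A minimal sketch (assuming compass convention: 0° = north, clockwise positive):

```python
import math

def step_position(x, y, distance_m, heading_deg):
    """Advance a 2-D position by a pedometer-estimated distance along
    the average compass heading for that stretch of walking."""
    theta = math.radians(heading_deg)
    # East is +x, north is +y under the compass convention above.
    return x + distance_m * math.sin(theta), y + distance_m * math.cos(theta)
```

Chaining these updates over successive pedometer readings traces out the meter-level path the phone reports to the server.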
## What we learned
* Locationing is hard! Especially indoors or over short distances.
* Firebase’s realtime database was extremely easy to use and very performant
* Distributing the data processing between the server and client is a balance worth playing with
## What's next for Hotspot
Next, we’d like to expand our work on the iOS side and create a sister application for Android (currently in the works). We’d also like to overlay our heatmap on Google maps.
There are also many interesting things you can do with a WiFi heatmap. Given some user settings, we could automatically switch from WiFi to data when the WiFi signal strength is about to get too poor. We could also use this app to find optimal placements for routers. Finally, we could use the application in disaster scenarios to on-the-fly compute areas with internet access still up or produce approximate population heatmaps.
|
## Inspiration
We didn't have any ideas and we were following the UT Austin vs USC football game. Sadly, UT Austin lost, but this was not a common occurrence. We decided to make a page to track the meme that is UT football.
## What it does
You type in the name of the college football team you want to track, and our system tries its best to identify the team you're talking about and the number of days since it last won a game and displays this information to you.
## How we built it
We built it by deploying a Flask app on Heroku. It looks up a list of team names from ESPN and responds to each request by attempting to match the given name with a name it has and scrapes the ESPN website to discover the number of days since the last time the team won a game.
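The "tries its best to identify the team" step could look something like this stdlib-only sketch (an illustration of the idea, not our exact matcher):

```python
import difflib

def match_team(query, team_names):
    """Best-effort match of user input against the scraped ESPN team list."""
    q = query.strip().lower()
    names = {name.lower(): name for name in team_names}
    if q in names:                      # exact (case-insensitive) match first
        return names[q]
    # Fall back to fuzzy matching for typos and partial names.
    close = difflib.get_close_matches(q, names.keys(), n=1, cutoff=0.6)
    return names[close[0]] if close else None
```

The `cutoff` keeps wildly wrong inputs from silently matching some team, which matters when the next step is an expensive page scrape.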
## Challenges we ran into
We were looking for a convenient API to acquire football game data, and it turns out this data is surprisingly scarce and/or costly. Thus, we had to pivot to some sketchy website scraping to get the information that we want. Another challenge we ran into was CSS. Even though there wasn't much CSS that was required, CSS is always a struggle.
## Accomplishments that we're proud of
Finding a way to get around the lack of API problem.
## What we learned
We had never scraped websites before, so we learned a bit about that. CSS is always a humbling learning experience.
## What's next for Last Win
We could definitely improve the name searching functionality by adding more synonyms to schools. We could probably also add the number of days since school A won against school B specifically. Maybe we can also let you choose to see when the team you're looking at last lost.
|
winning
|
Check it out live! [link](http://spyfi.me)
## Inspiration
From Wikipedia:
>
> ### In cryptography, a side-channel attack is any attack based on information gained from the physical implementation of a cryptosystem, rather than brute force or theoretical weaknesses in the algorithms (compare cryptanalysis).
>
>
>
As tech products become more prevalent, it is our duty to stay on top of the latest threats to our privacy. Our team stumbled across a potential side-channel attack while playing around with a neat piece of hardware called the bladeRF, a software defined radio.
We discovered that a number of laptops and phones contain network cards that emit a noticeable amount of radio radiation when downloading content from the internet. Apple products in particular displayed the greatest amount of RF activity. We decided to build on this discovery with an interactive IoT data analysis and visualization platform, that we've dubbed SpyFi.
## What it does
Our shielded bladeRF listens on a 2.4GHz frequency band for sustained spikes in activity. When holding certain vulnerable phones or laptops close by and downloading internet content, radiation from the device's network card is detected and sent to our Linode server where the data is processed and analyzed, and eventually displayed via a Google Polymer web app, which depicts meaningful graphs and visualizations of the captured radiation.
## How we built it
### bladeRF
The bladeRF is a software defined radio that can be tuned to pick up a range of frequencies. We used a 2.4GHz frequency to measure RF leaks from internet-enabled devices. The blade outputs data in IQ format, which needs further processing and interpretation to be meaningful.
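Interpreting that IQ stream mostly means converting sample pairs to power and looking for sustained runs above a noise floor. A simplified sketch of that "sustained spike" detection (our own illustrative names, not the production pipeline):

```python
def detect_spikes(iq_samples, threshold, min_run):
    """iq_samples: interleaved [I0, Q0, I1, Q1, ...] floats from the radio.
    Returns (start, end) power-sample index ranges where instantaneous
    power I^2 + Q^2 stays above threshold for at least min_run samples."""
    power = [iq_samples[i] ** 2 + iq_samples[i + 1] ** 2
             for i in range(0, len(iq_samples) - 1, 2)]
    spikes, start = [], None
    for i, p in enumerate(power):
        if p >= threshold and start is None:
            start = i                       # a run begins
        elif p < threshold and start is not None:
            if i - start >= min_run:        # run long enough to count
                spikes.append((start, i))
            start = None
    if start is not None and len(power) - start >= min_run:
        spikes.append((start, len(power)))  # run extends to end of buffer
    return spikes
```

Requiring a minimum run length is what separates genuine network-card activity from momentary RF noise.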
### Linode
Linode is responsible for hosting our web platform, including our Google Polymer app and our MongoDB backend. We run intensive analytical calculations on Linode to avoid overhead on the bladeRF.
### Mongo
We used a MongoDB backend, hosted on our Linode server. It receives processed bladeRF IQ data chunks which are then picked up by the frontend for data visualization. Mongo's features and flexibility, namely its capped collections and RESTful API, really tied together our software and hardware.
## Challenges we ran into
We had never used a blade before, and getting to know what it was capable of was a dauntingly complicated yet thrilling task.
## Accomplishments that we're proud of
We created a polished hack that works perfectly (we hope). Also, we managed to incorporate a huge number of technologies into one project, from capturing signals with the bladeRF to data visualizations with Google Polymer.
## What we learned
A healthy bit of math and physics, including transforming IQ data and designing a Faraday cage.
We also gained experience with Polymer, advanced uses of MongoDB, lower-level technology, and bit manipulation.
## What's next for SpyFi
We plan to apply more advanced statistical techniques (such as those seen in side-channel power analysis) to extrapolate more impactful data, applying components of differential power analysis to see if we can extract sensitive information.
|
## Inspiration
A big part of why I attended PennApps was to learn about how to host an effective Hackathon since I will be organizing HackKean at Kean University in Spring 2017. I really wanted to make something with Hackathons in mind and this actually helps with any big events in general. It's a loss prevention system and something I definitely will be using in the future!
## What it does
It creates codes that can be attached to possessions, so if those possessions are lost, a simple scan of the code will alert the owner via email and text, and, if they allow it, a phone call as well.
## How I built it
I used Java on the Android platform for the mobile app, applying XML, SMTP protocols, and handling different network provider channels. It also comes with a companion app built with JavaFX and CSS. I aimed to build the server with AWS and MongoDB, but that'll come in the future.
## Challenges I ran into
An encryption algorithm that was secure and applicable on a standard windows app and Android App, and cloud services. Although they all adhere to AES standards there were very different libraries I would use only to find that some crucial piece was completely missing in another platform.
I also ran into challenges making
## Accomplishments that I'm proud of
Despite feeling tired, I managed to help my friend with his project and still made a companion application. I rolled my own encryption and it works fairly well. The app is very responsive and I had some forethought with security.
## What I learned
I learned so much about QR codes; I have a ton more ideas for how to apply them. I also learned about encryption and encryption standards. I learned a lot about MongoDB, although it won't be in my presentation edition.
## What's next for Matr
Using the server-base to actually create special one-time codes that could be saved but that a user could assign to different possessions. Possibly hardware-special codes that alert the user when outside a certain proximity.
|
## Inspiration
We wanted to create a webapp that will help people learn American Sign Language.
## What it does
SignLingo starts by giving the user a phrase to sign. Using the user's webcam, it captures the input and decides if the user signed the phrase correctly. If so, it moves on to the next phrase; if not, it displays a video of the correct signing of the word.
## How we built it
We started by downloading and preprocessing a word-to-ASL-video dataset.
We used OpenCV to process the video frames and compare the user's input frames to the actual signing of the word. We used MediaPipe to detect the hand movements and Tkinter to build the front-end.
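The comparison boils down to measuring how far the user's detected hand landmarks drift from the reference video's landmarks, frame by frame. A simplified sketch (assuming both sequences hold per-frame lists of (x, y) points, e.g. from MediaPipe):

```python
def sequence_distance(user_landmarks, reference_landmarks):
    """Mean per-frame Euclidean distance between two landmark sequences.
    Each sequence: list of frames; each frame: list of (x, y) points."""
    total = 0.0
    frames = min(len(user_landmarks), len(reference_landmarks))
    for u_frame, r_frame in zip(user_landmarks, reference_landmarks):
        for (ux, uy), (rx, ry) in zip(u_frame, r_frame):
            total += ((ux - rx) ** 2 + (uy - ry) ** 2) ** 0.5
    return total / frames if frames else float("inf")

def signed_correctly(user, ref, tolerance=0.1):
    """Accept the sign when the average drift stays under the tolerance."""
    return sequence_distance(user, ref) <= tolerance
```

The tolerance is the knob for how strictly the app grades a sign; a real version would also need to align the two sequences in time first.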
## Challenges we ran into
We definitely had a lot of challenges, from downloading compatible packages, incorporating models, and creating a working front-end to display our model.
## Accomplishments that we're proud of
We are so proud that we actually managed to build and submit something. We couldn't build what we had in mind when we started, but we have a working demo which can serve as the first step towards the goal of this project. We had times where we thought we weren't going to be able to submit anything at all, but we pushed through and now are proud that we didn't give up and have a working template.
## What we learned
While working on our project, we learned a lot of things, ranging from ASL grammar to how to incorporate different models to fit our needs.
## What's next for SignLingo
Right now, SignLingo is far away from what we imagined, so the next step would definitely be to take it to the level we first imagined. This will include making our model be able to detect more phrases to a greater accuracy, and improving the design.
|
losing
|
## Inspiration
Over the past year I'd encountered plenty of Spotify related websites that would list your stats, but none of them allowed me to compare my taste with my friends, which I found to be the most fun aspect of music. So, for this project I set out to make a website that would allow users to compare their music tastes with their friends.
## What it does
Syncify will analyze your top tracks and artists and then convert that into a customized image for you to share on social media with your friends.
## How we built it
The main technology is a Node.js server that runs the website and interacts with the Spotify API. The information is then sent to a Python script, which takes your unique Spotify information and generates an image personalized to you, including a QR code that encodes further information.
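One way to pack the profile data that the QR code encodes is to compress a compact JSON payload into a URL-safe token — a stdlib-only sketch of the idea, not necessarily what the actual Python script does:

```python
import base64
import json
import zlib

def encode_profile(top_tracks, top_artists):
    """Pack a user's top tracks/artists into a compact URL-safe string
    suitable for embedding in the share image's QR code."""
    payload = json.dumps({"t": top_tracks, "a": top_artists},
                         separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(zlib.compress(payload)).decode()

def decode_profile(token):
    """Inverse of encode_profile, for whoever scans the QR code."""
    data = zlib.decompress(base64.urlsafe_b64decode(token.encode()))
    return json.loads(data)
```

Keeping the token short matters because QR codes get denser (and harder to scan off a phone screen) as the payload grows.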
## Challenges we ran into
* Installing Node.js took too long, with various compatibility issues
* Getting the Spotify API to work was a major challenge because it didn't play well with our Node.js setup.
* Generating the QR code and manipulating the image to include personalized text required multiple Python packages and approaches.
* Putting the site online was incredibly difficult because there were so many compatibility and package installation issues, in addition to my inexperience with hosting sites, so I had to learn that from scratch.
## Accomplishments that we're proud of
Everything I did today was completely new to me, and I'm proud that I was able to learn the skills I did and not give up, despite how tempting it was. Being able to use the APIs and learn Node.js, as well as develop some skills with web hosting, felt really impressive given how much I struggled with them throughout the hackathon.
## What we learned
I learnt a lot about documenting code, how to search for help, what approach to the workflow I should take and of course some of the technical skills.
## What's next for Syncify
I plan on uploading Syncify online so it's available for everyone and finishing the feature of allowing users to determine how compatible their music tastes are, as well as redesigning the shareable image so that the QR Code is less obtrusive to the design.
|
## Inspiration
Did you know that about 35% of Spotify users have trouble finding the song they want? Source? I made it up.
On a real note, we actually came up with this idea first, but then we scrapped it because we thought it was impossible. Many hours of struggling later, we thought we were doomed because there were just no other brilliant ideas like the first one we had come up with. Soon, we learned that the word "impossible" isn't in the hackathon vocabulary when we came up with a way to make the idea doable.
## What it does
PlayMood works by allowing users to fetch a Spotify playlist by entering the playlist ID. Then, users get a choice of moods to pick from, ranging from happy to sad and exciting to calm. Next, the lyrics of the songs are analyzed line by line to predict each song's main mood and how strongly it correlates with that mood. Finally, the application sends the user the list of these songs with audio to listen to.
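Turning per-line predictions into one song-level mood can be as simple as summing classifier confidences per label. A minimal sketch of that aggregation step (the per-line labels would come from the classification API; names here are ours):

```python
from collections import Counter

def song_mood(line_predictions):
    """line_predictions: [(mood_label, confidence), ...] — one entry per
    lyric line. Returns the dominant mood and how strongly the song
    correlates with it, as a 0..1 share of total confidence."""
    totals = Counter()
    for mood, confidence in line_predictions:
        totals[mood] += confidence
    mood, score = totals.most_common(1)[0]
    return mood, score / sum(totals.values())
```

The returned share doubles as the "how strong" number shown next to each song when filtering the playlist by the user's chosen mood.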
## How we built it
**Frontend:** React.js
**Backend:** Flask, Express.js
**External APIs:** Cohere, Spotify, Genius
## Challenges we ran into
The initial challenge was looking for a way to extract the lyrics from the Spotify playlist. We realized this wasn't possible with the Spotify API. Another challenge was communication and overall planning. When everyone's tired we start doing our own thing. We had one API in Flask and the other in Node.js.
## Accomplishments that we're proud of
The largest accomplishment is actually finishing the project and implementing exactly what we wanted. We're also proud of being able to synergize as a team and connect the pieces together to create a whole and functioning application.
## What we learned
Our main goal for hackathons is always to come out learning something new. Harsh learned how to use the Cohere and Genius API to fetch lyrics from songs and classify the lyrics to predict a mood. Melhem learned how to use Flask for the first time to create the API needed for the lyrics classifications.
## What's next for PlayMood
When building PlayMood, we knew to keep things simple while keeping scalability in mind. One improvement for PlayMood could be to increase the number of moods users can choose from. To take this a step further, we could even implement Cohere classification on user messages to give users a much more diverse way to express their mood. A big thing we can improve on is the performance of our Flask API: searching the lyrics for many songs takes several seconds, causing a large delay in the response time. We are planning to look for a solution that involves fewer API calls, such as storing search results in a database to avoid duplicate searches.
|
## Inspiration
While we were coming up with ideas on what to make, we looked around at each other while sitting in the room and realized that our postures weren't that great. We knew that it was pretty unhealthy for us to be seated like this for prolonged periods. This inspired us to create a program that could help remind us when our posture is bad and needs to be adjusted.
## What it does
Our program uses computer vision to analyze our position in front of the camera. Sit Up! takes your position at a specific frame and measures different distances and angles between critical points such as your shoulders, nose, and ears. From there, the program throws all these measurements together into mathematical equations. The program compares the results to a database of thousands of positions to see if yours is good.
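As a sketch of the geometry involved (the keypoint names and threshold here are illustrative placeholders, not our trained model's actual inputs or cutoff), the lean angle between a shoulder and an ear keypoint can be computed like this:

```python
import math

def neck_angle(ear, shoulder):
    """Forward-lean angle in degrees between the shoulder->ear segment and
    the vertical axis. Points are (x, y) in image coordinates with y growing
    downward; 0 means the ear sits directly above the shoulder."""
    dx = ear[0] - shoulder[0]
    dy = shoulder[1] - ear[1]  # positive when the ear is above the shoulder
    return abs(math.degrees(math.atan2(dx, dy)))

def is_slouching(ear, shoulder, threshold_deg=25.0):
    """Flag a slouch when the head drifts too far forward of the shoulders.
    The 25-degree threshold is a made-up placeholder, not our real cutoff."""
    return neck_angle(ear, shoulder) > threshold_deg
```

In practice the classifier compares many such measurements against the labeled position dataset rather than applying a single fixed threshold.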
## How we built it
We built it using Flask, Javascript, Tensorflow, Sklearn.
## Challenges we ran into
The biggest challenge we faced was how inefficient and slow our initial approach was. Our original plan was to use Django for an API that returned the necessary information, but it was slower than anything we'd seen before; that is when we came up with client-side rendering. Doing everything in Flask made this project 10x faster and much more efficient.
## Accomplishments that we're proud of
Implementing client side rendering for an ML model
Getting out of our comfort zone by using flask
Having nearly perfect accuracy with our model
Being able to pivot our tech stack and be so versatile
## What we learned
We learned a lot about flask
We learned a lot about the basis of ANN
We learned more on how to implement computer vision for a use case
## What's next for Sit Up!
Implement a phone app
Calculate the accuracy of our model
Enlarge our data set
Support higher frame rates
|
losing
|
# RouteMaster Project Submission
## Inspiration
Our project was born out of the frustration experienced during course selections when students constantly worry about whether they are on the right track to graduate in their chosen major, specialization, or minor. Navigating the confusing and complex undergraduate course calendar, filled with courses having multiple prerequisites, anti-requisites, specific term offerings, and restricted sequences, can be a daunting task. One of our team members, Shashwat, faced an especially challenging journey as he pursued multiple specializations and minors, struggling to fully comprehend the requirements.
## What it does
RouteMaster is a comprehensive solution that leverages the power of CockroachDB to simplify the course planning process. Initially designed for the Bachelor of Mathematics program, it demonstrates the feasibility of our approach. Here's how it works:
1. **Database Integration**: RouteMaster utilizes CockroachDB to store math-related courses and their major requirements, hosting a sophisticated algorithm for course planning.
2. **User Input**: Users begin by selecting their desired major/specialization and co-op sequence. While our current implementation is tailored for the Bachelor of Mathematics program, our vision extends to other programs and universities.
3. **Algorithmic Magic**: Our algorithm extracts all the necessary courses for the chosen major, identifies their prerequisites from the extensive course list, and organizes a personalized course path. The system strives for an optimized schedule, often front-loading the workload and lightening it in later years.
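The term-assignment step above behaves roughly like a layered topological sort; here is a simplified, hypothetical sketch (course codes and the per-term limit are illustrative, not our actual data):

```python
def plan_terms(prereqs, per_term=5):
    """Greedy term assignment: each term takes up to `per_term` courses whose
    prerequisites have all been completed in earlier terms (a layered
    topological sort). `prereqs` maps course -> list of prerequisite courses."""
    done, terms = set(), []
    remaining = set(prereqs)
    while remaining:
        ready = sorted(c for c in remaining if set(prereqs[c]) <= done)
        if not ready:
            raise ValueError("prerequisite cycle detected")
        term = ready[:per_term]
        terms.append(term)
        done.update(term)
        remaining.difference_update(term)
    return terms
```

Because each pass fills terms as full as possible, the schedule naturally front-loads the workload, matching the behavior described above; the real algorithm also handles alternative prerequisites, anti-requisites, and co-op sequences.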
## How we built it
We employed a multi-tiered approach to build RouteMaster:
* **Backend**: CockroachDB serves as the backbone, housing both data and the algorithm, which is crafted using JavaScript and SQL.
* **Frontend**: The user interface is a blend of HTML, CSS, and JavaScript, seamlessly connected to the backend database.
* **Data Collection**: We began by scraping the math undergraduate calendar, extracting vital information like course prerequisites, alternative prerequisites, specific degree requirements, and more.
* **Data Integration**: This scraped data was then uploaded into Cockroach DB, forming the basis for our algorithm's operation.
* **Data Presentation**: The algorithm outputs course plans to a JSON file, which the frontend reads and displays to the user.
## Challenges we ran into
Our journey was marked by several challenges:
1. **Last-minute Pivot**: We initially planned to work with ADHawk technologies but faced uncertainty due to hardware allocation issues. This forced us to pivot and adapt swiftly to our second-choice project, which led to a successful outcome.
2. **Web Scraping Complexities**: Extracting data from the undergraduate calendar posed difficulties, particularly for courses with intricate descriptions. We had to carefully consider the input parameters for our web scraping extension to ensure accurate results.
3. **Algorithmic Puzzles**: Crafting an efficient algorithm to assign courses to students, determine the optimal term, and accommodate various co-op sequences presented significant challenges.
## Accomplishments that we're proud of
We take pride in our achievements:
1. **Tech Learning**: We successfully learned and implemented Cockroach DB for data hosting, expanding our technical skillset.
2. **Web Scraping Mastery**: Our experience with web scraping opened doors to future projects that involve data extraction.
3. **Automated Course Planning**: We developed an algorithm that can automatically populate a student's course calendar, simplifying the academic journey.
## What we learned
Our journey with RouteMaster taught us invaluable lessons:
1. **Database Management**: Setting up and hosting data on an online database and utilizing SQL queries for project integration.
2. **Complex Algorithms**: We gained expertise in designing algorithms that factor in efficiency, duplicates, and various considerations.
## What's next for RouteMaster
The journey doesn't end here. We have exciting plans for RouteMaster's future:
1. **Algorithm Refinement**: Continuously improving the algorithm to achieve better course load distribution per term based on user input.
2. **Expansion**: Extending RouteMaster's functionality to cater to other universities, programs, and diverse academic considerations.
3. **Frontend Enhancement**: Revamping the frontend to align with the Figma mockup, offering an even more user-friendly experience.
|
## Inspiration
As college students, we have always struggled to find an efficient way to track the constant flow of assignments, quizzes, midterms, etc. We decided to make the solution ourselves! The idea came from the fact that professors use an increasingly large number of platforms such as myCourses, Crowdmark, Ed, slack, WebWork, etc. The idea was to create a centralized platform that could integrate all of these such that we only need to consult one platform in order to be fully aware of all of our assignments and exams.
## What it does
Upon launch, Course Hub automatically retrieves and interprets new course information on McGill's websites (*MyCourses* and *Minerva*). This information is then added to the *Course Hub calendar*, which is fully editable. Our app also fetches and displays the student's schedule.
## How we built it
**Course-Hub** was implemented in three steps:
1. We used *web scraping* to retrieve the data from the sites and *Natural Language Processing* to interpret the assignments and their deadlines.
2. We stored the student's data in *CockroachDB* (*postgreSQL*-compatible) using *Google Cloud services*.
3. We built the UI using PyQt
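Step 1's interpretation of assignments and deadlines might begin with a simple date-pattern pass before any heavier NLP. This regex sketch is illustrative, not our production parser:

```python
import re

# Matches "October 14", "Mar. 5", etc.; a real pipeline would also resolve
# the year and handle numeric formats like "2024-10-14".
DATE_RE = re.compile(
    r"(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+(\d{1,2})",
    re.IGNORECASE,
)

def extract_deadline(text):
    """Pull the first 'Month Day' style date out of a scraped description.
    Returns (month_abbrev, day) or None."""
    m = DATE_RE.search(text)
    if not m:
        return None
    return m.group(1)[:3].title(), int(m.group(2))
```

Text that fails the cheap pattern pass can then be handed to the NLP step, keeping the expensive processing to a minimum.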
## What we learned and challenges we faced
We learned that *web scraping* can be a very useful way to gather information efficiently in order to integrate it into other software. Unfortunately, it is far from the optimal method and comes with a lot of challenges, since it relies heavily on each site's specific formatting. One of the most important things we learned during this project is the complex integration between different platforms needed to make software. The interactions between different programming languages, as well as between the UI, the back-end, and the cloud service, can be quite challenging to navigate.
## What's next for Course Hub
Obviously deploy it for mobile! And add the functionality for other schools.
|
## Not All Backs are Packed: An Origin Story (Inspiration)
A backpack is an extremely simple, and yet ubiquitous item. We want to take the backpack into the future without sacrificing the simplicity and functionality.
## The Got Your Back, Pack: **U N P A C K E D** (What's it made of)
GPS Location services,
9000 mAh battery,
Solar charging,
USB connectivity,
Keypad security lock,
Customizable RGB LED,
Android/iOS Application integration,
## From Backed Up to Back Pack (How we built it)
## The Empire Strikes **Back**(packs) (Challenges we ran into)
We ran into challenges with getting wood to laser cut and bend properly. We found a unique pattern that allowed us to keep our 1/8" wood durable when needed and flexible when not.
Connecting the hardware and the app through the API was also tricky.
## Something to Write **Back** Home To (Accomplishments that we're proud of)
## Packing for Next Time (Lessons Learned)
## To **Pack**-finity, and Beyond! (What's next for "Got Your Back, Pack!")
The next step would be revising the design to be more ergonomic for the user: the back pack is a very clunky and easy to make shape with little curves to hug the user when put on. This, along with streamlining the circuitry and code, would be something to consider.
|
losing
|
## Inspiration
The inspiration behind GenAI stems from a deep empathy for those struggling with emotional challenges. Witnessing the power of technology to foster connections, we envisioned an AI companion capable of providing genuine emotional support.
## What it does
GenAI is your compassionate emotional therapy AI friend. It provides a safe space for users to express their feelings, offering empathetic responses, coping strategies, and emotional support. It understands users' emotions, offering personalized guidance to improve mental well-being.
Additional functions:
**1)** Emotions recognition & control
**2)** Control of the level of lies and ethics
**3)** Speaking partner
**4)** Future optional video chat with the AI-generated person
**5)** Future meeting notetaker
## How we built it
GenAI was meticulously crafted using cutting-edge natural language processing and machine learning algorithms. Extensive research on emotional intelligence and human psychology informed our algorithms. Continuous user feedback played a pivotal role in refining GenAI’s responses, making them truly empathetic and supportive.
## Challenges we ran into
Integrating emotional analysis APIs seamlessly into GenAI was vital for its functionality. We faced difficulties in finding a reliable API that could accurately interpret and respond to users' emotions. After rigorous testing, we successfully integrated an API that met our high standards, ensuring GenAI's emotional intelligence. Training LLMs posed another challenge. We needed GenAI to understand context, tone, and emotion intricately. This required extensive training and fine-tuning of the language models. It demanded significant computational resources and time, but the result was an AI friend that could comprehend and respond to users with empathy and depth. Connecting the front end, developed using React, with the backend, powered by Jupyter Notebook, was a complex task. Ensuring real-time, seamless communication between the two was essential for GenAI's responsiveness. We implemented robust data pipelines and optimized API calls to guarantee swift and accurate exchanges, enabling GenAI to provide instant emotional support.
## Accomplishments that we're proud of
**1) Genuine Empathy:** GenAI delivers authentic emotional support, fostering a sense of connection.
**2) User Impact:** Witnessing positive changes in users’ lives reaffirms the significance of our mission.
**3) Continuous Improvement:** Regular updates and enhancements ensure GenAI remains effective and relevant.
## What we learned
Throughout the journey, we learned the profound impact of artificial intelligence on mental health. Understanding emotions, building a responsive interface, and ensuring user trust were pivotal lessons. The power of compassionate technology became evident as GenAI evolved.
## What's next for GenAI
Our journey doesn't end here. We aim to:
**1) Expand Features:** Introduce new therapeutic modules tailored to diverse user needs.
**2) Global Accessibility:** Translate GenAI into multiple languages, making it accessible worldwide.
**3) Collaborate with Experts:** Partner with psychologists to enhance GenAI's effectiveness.
**4) Research Advancements:** Stay abreast of the latest research to continually improve GenAI’s empathetic capabilities.
GenAI is not just a project; it's a commitment to mental well-being, blending technology and empathy to create a brighter, emotionally healthier future.
|
## 💡 Inspiration
We got inspiration from our back-end developer Minh. He mentioned that he was interested in the idea of an app that helped people record their positive progress and showcase their accomplishments. This led our product/UX designer Jenny to think about what problem this app would target and what kind of solution it would offer. From our research, we came to the conclusion that quantity-over-quality social media use resulted in people feeling less accomplished and more anxious. As a solution, we wanted to focus on an app that helps people stay focused on their own goals and accomplishments.
## ⚙ What it does
Our app is a journalling app that has the user enter 2 journal entries a day: one in the morning and one in the evening. During these journal entries, it asks the user about their mood at the moment, generates an appropriate response based on their mood, and then asks questions that get the user to think about things such as gratitude, their plans for the day, and what advice they would give themselves. Our questions follow many common journalling practices. The second journal entry then follows a similar format of mood and questions, with a different set of questions to finish off the user's day. These help them reflect and look forward to the future. Our most powerful feature is the AI that takes data such as emotions and keywords from answers and helps users generate journal summaries across weeks, months, and years. These summaries then provide actionable steps the user can take to make self-improvements.
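Before any language model writes prose, the summary feature needs the raw emotion and keyword counts. A minimal sketch of that aggregation (the entry fields here are assumptions, not our exact schema):

```python
from collections import Counter

def summarize_entries(entries):
    """Roll a list of journal entries up into summary stats.
    Each entry is a dict like {"mood": "happy", "keywords": ["family", "gym"]}.
    Returns the dominant mood and most frequent keywords -- the raw material
    a language model could turn into a prose summary with action steps."""
    moods = Counter(e["mood"] for e in entries)
    keywords = Counter(k for e in entries for k in e["keywords"])
    return {
        "dominant_mood": moods.most_common(1)[0][0],
        "top_keywords": [k for k, _ in keywords.most_common(3)],
    }
```

Running the same aggregation over a week, a month, or a year of entries is just a matter of which slice of the entry list gets passed in.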
## 🔧 How we built it
### Product & UX
* Online research, user interviews, looked at stakeholders, competitors, infinity mapping, and user flows.
* Doing the research allowed our group to have a unified understanding of the app.
### 👩💻 Frontend
* Used React.JS to design the website
* Used Figma for prototyping the website
### 🔚 Backend
* Flask, CockroachDB, and Cohere for ChatAI function.
## 💪 Challenges we ran into
The challenge we ran into was the time limit. For this project, we invested most of our time in understanding the pain points of a very sensitive topic such as mental health and psychology. Because we truly wanted to identify and solve a meaningful challenge, we had to sacrifice some portions of the project, such as the front-end code implementation. Some team members were also working with developers for the first time, and it was a good learning experience for everyone to see how different roles come together and how we could improve next time.
## 🙌 Accomplishments that we're proud of
Jenny, our team designer, did tons of research on problem space such as competitive analysis, research on similar products, and user interviews. We produced a high-fidelity prototype and were able to show the feasibility of the technology we built for this project. (Jenny: I am also very proud of everyone else who had the patience to listen to my views as a designer and be open-minded about what a final solution may look like. I think I'm very proud that we were able to build a good team together although the experience was relatively short over the weekend. I had personally never met the other two team members and the way we were able to have a vision together is something I think we should be proud of.)
## 📚 What we learned
We learned that preparing some plans ahead of time would make it easier for developers and designers to get started next time. However, the experience of starting from nothing and making a full project over 2 and a half days was great for learning. We learned a lot about how we think and approach work, not only as developers and a designer, but as team members.
## 💭 What's next for budEjournal
Next, we would like to test out budEjournal on some real users and make adjustments based on our findings. We would also like to spend more time to build out the front-end.
|
## Inspiration
NFTs or Non-Fungible Tokens are a new form of digital assets stored on blockchains. One particularly popular usage of NFTs is to record ownership of digital art. NFTs offer several advantages over traditional forms of art including:
1. The ledger of record which is a globally distributed database, meaning there is persistent, incorruptible verification of who is the actual owner
2. The art can be transferred electronically and stored digitally, saving storage and maintenance costs while simultaneously providing a memetic vehicle that can be seen by billions of people over the internet
3. Royalties can be programmatically paid out to the artist whenever the NFT is transferred between parties leading to more fair compensation and better funding for the creative industry
These advantages resulted in, [the total value of NFTs reaching $41 billion dollars at the end of 2021](https://markets.businessinsider.com/news/currencies/nft-market-41-billion-nearing-fine-art-market-size-2022-1). Clearly, there is a huge market for NFTs.
However, many people do not know the first thing about creating an NFT and the process can be quite technically complex. Artists often hire developers to help turn their art into NFTs and [businesses have been created merely to help create NFTs](https://synapsereality.io/services/synapse-new-nft-services/).
## What it does
SimpleMint is a web app that allows anyone to create an NFT with a few clicks of a button. All it requires is for the user to upload an image and give the NFT a name. Upon clicking ‘mint now’, an NFT is created with the image stored in IPFS and automatically deposited into the creator's blockchain wallet. The underlying blockchain is [Hedera](https://hedera.com/), which is a carbon negative, enterprise grade blockchain trusted by companies like Google and Boeing.
## How we built it
* React app
* IPFS for storage of uploaded images
* Hedera blockchain to create, mint, and store the NFTs
## Challenges we ran into
* Figuring out how to use the IPFS js-sdk to programmatically store & retrieve image files
* Figuring out wallet authentication due to the Chrome Web Store going down for the HashPack app, which rendered the one-click process to connect a wallet useless. We had to check Hedera's Discord to find an alternative solution
## Accomplishments that we're proud of
* Building a working MVP in a day!
## What we learned
* How IPFS works
* How to build on Hedera with the javascript SDK
## What's next for SimpleMint
We hope that both consumers and creators will be able to conveniently turn their images into NFTs to create art that will last forever and partake in the massive financial upside of this new technology.
|
losing
|
## Realm Inspiration
Our inspiration stemmed from our fascination in the growing fields of AR and virtual worlds, from full-body tracking to 3D-visualization. We were interested in realizing ideas in this space, specifically with sensor detecting movements and seamlessly integrating 3D gestures. We felt that the prime way we could display our interest in this technology and the potential was to communicate using it. This is what led us to create Realm, a technology that allows users to create dynamic, collaborative, presentations with voice commands, image searches and complete body-tracking for the most customizable and interactive presentations. We envision an increased ease in dynamic presentations and limitless collaborative work spaces with improvements to technology like Realm.
## Realm Tech Stack
Web View (AWS SageMaker, S3, Lex, DynamoDB and ReactJS): Realm's stack relies heavily on AWS. We begin by receiving images from the frontend and passing them into SageMaker, where the images are tagged according to their content. These tags, and the images themselves, are put into an S3 bucket. Amazon Lex is used for dialog flow, where text is parsed and tools, animations or simple images are chosen. The Amazon Lex commands are completed by parsing through the S3 bucket, selecting the desired image, and storing the image URL with all other on-screen images in DynamoDB. The list of URLs is posted to an endpoint that Swift calls to render.
AR View (ARKit, Swift): The Realm app renders text, images, slides and SCN animations as pixel-perfect AR models that are interactive and driven by a physics engine. Among the models in our demo are presentation functionality and rain interacting with an umbrella. Swift 3 allows full body tracking, and we configured the tools to provide optimal tracking and placement gestures. Users can move objects in their hands, place objects and interact with 3D items to enhance the presentation.
## Applications of Realm:
We hope to see our idea implemented in real workplaces in the future. We see classrooms using Realm to provide students interactive spaces to learn, professional employees a way to create interactive presentations, teams coming together to collaborate as easily as possible, and so much more. Extensions include creating more animated/interactive AR features and real-time collaboration methods. We hope to further polish our features for use in industries such as AR/VR gaming & marketing.
|
## Our Inspiration
We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gameify the entire learning experience and make it immersive all while providing users with the resources to dig deeper into concepts.
## What it does
EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multi-lingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy - allowing users to truly learn at their own comfort. We believe the immersive environment will make language learning even more memorable and enjoyable.
## How we built it
We built the VisionOS app using the Beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and concurrent MVVM design architecture. 3D Models were converted through Reality Converter as .usdz files for AR modelling. We stored these files on the Google Cloud Bucket Storage, with their corresponding metadata on CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space.
## Challenges we ran into
Learning to build for the VisionOS was challenging mainly due to the lack of documentation and libraries available. We faced various problems with 3D Modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations within the Beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR!
## Accomplishments that we're proud of
We're really proud that we were able to get a functional app working on the VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets combined, which was really rewarding to see come to life.
|
## Inspiration
At our core, we believe that everyone has the power to make a positive impact on the world and that being charitable is one of the most potent ways to do so. Through this belief, we were inspired to create a website that allows users to find nonprofit organizations that align with their values and make it simple to donate to these organizations. Through this, our team hopes to inspire a culture of giving and empower individuals to make a difference in their own way.
## What it does
GiveBack is a website that helps users find nonprofit organizations that align with their values and makes it easy to donate to these organizations. The website provides a simple, easy-to-use interface for users to find organizations that resonate with them and make a difference in the areas they care about most.
Users can choose an amount they wish to donate and help spread hope with the click of a button. The app also allows users to track their donation history and encourages them to participate in more humane acts with a leaderboard showing the top donors.
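The top-donors leaderboard boils down to a grouped sum over the donation history followed by a sort; a minimal sketch (record shapes are assumed, not our real schema):

```python
def leaderboard(donations, top_n=3):
    """Rank donors by total amount donated.
    `donations` is a list of (donor, amount) records from the history."""
    totals = {}
    for donor, amount in donations:
        totals[donor] = totals.get(donor, 0) + amount
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```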
## How we built it
We built the GiveBack website using a combination of technologies, including React for the website application, Python, and Fast API for an efficient backend.
## Challenges we ran into
As half of our team had little to no web development experience, we spent the hackathon's initial hours researching and learning as much as we could about React.js and web development. We faced numerous challenges along the way, from setting up our development environment to understanding the intricacies of React.js components and state management.
Despite the challenges, we persevered and worked tirelessly through the night, fueled by the excitement of building something from scratch. Finally, after hours of coding and debugging, we had a working prototype of our React.js app, the GiveBack website.
## Accomplishments that we're proud of
We're proud of creating an app that makes it easy for users to find and donate to local charities and non-profits. We're also proud of the clean and logical website design which is simple and easy to use. Two of us are also happy to have created any semblance of a website in the first place as being first years we came to this hackathon with little to no experience with coding a website.
## What we learned
Before the hackathon, some of us had only dabbled in web development and had no experience with React.js or Python. However, through our shared dedication and determination, our team was able to learn and build something truly remarkable.
On the programming side, we learned how to set up a development environment, how to use React.js to create user interfaces and components, and how to use Python to build the back-end logic of our application. We also learned about different libraries and frameworks, such as Chakra UI, which helped us create a polished and efficient final product.
On the nonprofit side, we learned a lot about the nonprofit sector and the different types of organizations that exist. We also learned about the various challenges that charities and non-profits face, and how our app can help address some of these challenges.
Most importantly, we learned the power of teamwork and collaboration. We came into the hackathon as strangers but left as friends and partners, united in our goal to create something meaningful and a shared taste for boba tea.
## What's next for GiveBack
Moving forward, we plan to continue to grow our network of local charities and non-profits to make it easier for users to find organizations that align with any user's values. We also plan to expand our app to new regions and communities. Additionally, we want to build more features into the app and expand on what a user could donate and make a positive impact in their communities.
|
winning
|
## Inspiration
We were tired of playing against AI that was too good. When we tried to improve, we were struck down by beautiful AI moves and thus continually failed to improve our skills.
## What it does
Caruanabot is an AI that plays the second-best move against you. It is useless but useful and funny at the same time.
Fun fact: We named it Caruanabot after Fabiano Caruana, who is apparently the second-best chess player in the world.
|
## Inspiration
We were very inspired by the chaotic innovations happening all around the world and wanted to create something that's just a little stupid, silly, and fun using components that are way too overspecced for the use case.
## What it does
It is intended to use electromagnetic properties and an intense amount of power to fuel a magnetic accelerator that accelerates objects towards some destination. This was then switched to a coil gun model; then, using a targeting system made up of a turret and a TFLite model, we aim for stupid objects with the power of AI.
## How we built it
We attempted to create an electromagnet and fiddled around for a while with the Qualcomm HDK
## Challenges we ran into
We severely underestimated how short 24 hours is, and a lot of the components intended to be added to the creation do not function as intended.
## Accomplishments that we're proud of
We mixed a huge aspect of electrophysics with modern day cutting edge engineering to create something unique and fun!
## What we learned
Time is short, and this is definitely a project we want to continue into the future.
## What's next for Magnetic Accelerator
To develop it properly and aim properly in the future (summer 2023)
|
## Inspiration
It's easy to zone off in online meetings/lectures, and it's difficult to rewind without losing focus at the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we can just quickly skim through a list of keywords to immediately see what happened?
## What it does
Rewind is an intelligent, collaborative and interactive web canvas with built in voice chat that maintains a list of live-updated keywords that summarize the voice chat history. You can see timestamps of the keywords and click on them to reveal the actual transcribed text.
## How we built it
Communications: WebRTC, WebSockets, HTTPS
We used WebRTC, a peer-to-peer protocol, to connect the users through a voice channel, and we used websockets to update the web pages dynamically, so the users get instant feedback from others' actions. Additionally, a web server is used to maintain stateful information.
For summarization and live transcript generation, we used Google Cloud APIs, including natural language processing as well as voice recognition.
Audio transcription and summary: Google Cloud Speech (live transcription) and natural language APIs (for summarization)
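The live-updated keyword list with timestamps could be maintained with a pass like this sketch, where simple token filtering stands in for the Google Cloud natural language output:

```python
STOPWORDS = frozenset({"the", "a", "and", "to", "of", "we", "is", "in"})

def keyword_timeline(transcript_chunks):
    """Build {keyword: first_timestamp} from timed transcript chunks.
    `transcript_chunks` is a list of (timestamp_seconds, text) pairs; the
    first timestamp a keyword appears at is what the UI links back to."""
    timeline = {}
    for ts, text in transcript_chunks:
        for word in text.lower().split():
            token = word.strip(".,!?")
            if len(token) > 3 and token not in STOPWORDS and token not in timeline:
                timeline[token] = ts
    return timeline
```

Keeping only each keyword's first timestamp is what makes the click-to-rewind interaction possible: the keyword links straight to the moment it was first said.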
## Challenges we ran into
There were many challenges we ran into when bringing this project to reality. For the backend, one of the most challenging problems was getting WebRTC to work on both the backend and the frontend; we spent more than 18 hours getting to a working prototype. The frontend was also full of challenges: the design and implementation of the canvas involved a lot of trial and error, and the history rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team, and we learned a lot from this experience.
## Accomplishments that we're proud of
Despite all the challenges we ran into, we were able to have a working product with many different features. Although the final product is by no means perfect, we had fun working on it utilizing every bit of intelligence we had. We were proud to have learned many new tools and get through all the bugs!
## What we learned
For the backend, the main thing we learned was how to use WebRTC, including client negotiation and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the websockets. As for the frontend, we learned to use various JavaScript elements to help develop an interactive client web app, and we learned event delegation in JavaScript to support an essential component of the history page.
## What's next for Rewind
We imagined a mini dashboard that also shows other live-updated information, such as the sentiment, summary of the entire meeting, as well as the ability to examine information on a particular user.
|
partial
|
## Inspiration
Sexual assault survivors are in tremendously difficult situations after being assaulted, having to sacrifice privacy and anonymity to receive basic medical, legal, and emotional support. Understanding how to proceed with one's life after being assaulted is also challenging, because information on resources for victims is scattered across different communities, whether the victim is on an American college campus, in a foreign country, or in any number of other situations. Instead of building a single solution or organizing one set of resources to help sexual assault victims everywhere, we believe a simple, community-driven solution to this problem lies in Echo.
## What it does
Using Blockstack, Echo facilitates anonymized communication among sexual assault victims, legal and medical help, and local authorities to foster a supportive online community for victims. Members of this community can share their stories, advice, and support for each other knowing that they truly own their data and it is anonymous to other users, using Blockstack. Victims may also anonymously report incidents of assault on the platform as they happen, and these reports are shared with local authorities if a particular individual has been reported as an offender on the platform several times by multiple users. This incident data is also used to geographically map where in small communities sexual assault happens, to provide users of the app information on safe walking routes.
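The multi-report escalation rule described above could be sketched as follows. The real app keeps user data encrypted via Blockstack/Radiks; the salted hash, class name, and threshold here are illustrative stand-ins for the anonymization layer, not Echo's actual implementation:

```python
import hashlib
from collections import defaultdict

# Illustrative threshold: escalate once several distinct users report the
# same offender. The production value is not published.
REPORT_THRESHOLD = 3

class IncidentLog:
    def __init__(self, salt: str):
        self.salt = salt
        self.reports = defaultdict(set)  # offender hash -> set of reporter hashes

    def _anon(self, identifier: str) -> str:
        # Salted hash stands in for the app's real anonymization/encryption.
        return hashlib.sha256((self.salt + identifier).encode()).hexdigest()

    def report(self, reporter: str, offender: str) -> bool:
        """Record a report; return True if the offender should be escalated
        to local authorities (reported by multiple distinct users)."""
        key = self._anon(offender)
        self.reports[key].add(self._anon(reporter))
        return len(self.reports[key]) >= REPORT_THRESHOLD
```

Keeping only hashes means the log can be shared with authorities without exposing reporter identities.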
## How we built it
A crucial part to feeling safe as a sexual harassment survivor stems from the ability to stay anonymous in interactions with others. Our backend is built with this key foundation in mind. We used Blockstack’s Radiks server to create a decentralized application that would keep all user’s data local to that user. By encrypting their information when storing the data, we ensure user privacy and mitigate all risks to sacrificing user data. The user owns their own data. We integrated Radiks into our Node and Express backend server and used this technology to manage our database for our app.
On the frontend, we wanted to create an experience that was eager to welcome users to a safe community and to share an abundance of information to empower victims to take action. To do this, we built the frontend from React and Redux, and styling with SASS. We use blockstack’s Radiks API to gather anonymous messages in the Support Room feature. We used Twilio’s message forwarding API to ensure that victims could very easily start anonymous conversations with professionals such as healthcare providers, mental health therapists, lawyers, and other administrators who could empower them. We created an admin dashboard for police officials to supervise communities, equipped with Esri’s maps that plot where the sexual assaults happen so they can patrol areas more often. On the other pages, we aggregate online resources and research into an easy guide to provide victims the ability to take action easily. We used Azure in our backend cloud hosting with Blockstack.
## Challenges we ran into
We ran into issues of time, as we had ambitious goals for our multi-functional platform. Generally, we faced the learning curve of using Blockstack’s APIs and integrating that into our application. We also ran into issues with React Router as the Express routes were being overwritten by our frontend routes.
## Accomplishments that we're proud of
We had very little experience developing blockchain apps before and this gave us hands-on experience with a use-case we feel is really important.
## What we learned
We learned about decentralized data apps and the importance of keeping user data private. We learned about blockchain’s application beyond just cryptocurrency.
## What's next for Echo
Our hope is to get feedback from people impacted by sexual assault on how well our app can foster community, and factor this feedback into a next version of the application. We also want to build out shadowbanning, a feature to block abusive content from spammers on the app, using a trust system between users.
|
## 💡Inspiration💡
According to statistics, hate crimes and street violence have increased exponentially, and the violence does not end there. Many oppressed groups face physical and emotional racial hostility in the same way. These crimes harm not only the victims but also people who share a similar identity. Aside from racial identities, all genders reported feeling more anxious about exploring the outside environment due to higher crime rates. After witnessing an upsurge in urban violence and fear of the outside world, we developed Walk2gether, an app that addresses the issue of feeling unsafe when venturing out alone and fundamentally alters the way we travel.
## 🏗What it does🏗
It offers a remedy to the stress that comes with walking outside, especially alone. The app lets users travel with friends, which lessens anxiety, and surfaces information about local criminal activity to help people make informed travel decisions. It also lets users adjust settings to be warned of specific situations, and incorporates heat-map technology that displays red alert zones in real time, allowing the user to chart their route comfortably. Its campaign for social change is closely tied to our desire to see more people, particularly women, outside without being burdened by fears about their surroundings.
## 🔥How we built it🔥
How can we make women feel more secure while roaming about their city? How can we bring together student travellers for a safer journey? These questions helped us outline the issues we wanted to address as we moved into the design stage. We created a website using HTML/CSS/JS and used Figma to prepare the prototype. We used Auth0 for multi-factor authentication, and CircleCI so that we can deploy the website through a smooth, easy-to-verify pipeline. AssemblyAI handles speech transcription and works alongside Twilio, which handles messaging and connecting friends for the journey to a destination. Twilio SMS is also used for alerts and notification ratings. We used Coil for membership via Web Monetization, and for donations to fund better safe-route facilities.
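The red-alert heat map described above can be sketched as a simple grid-bucketing pass over reported incidents. The cell size, threshold, and function names below are illustrative assumptions, not the app's actual tuning:

```python
from collections import Counter

CELL_SIZE = 0.01      # ~1 km lat/lng grid cells (assumption)
RED_THRESHOLD = 5     # incidents per cell before it renders red (assumption)

def to_cell(lat: float, lng: float) -> tuple[int, int]:
    """Bucket a coordinate into a coarse grid cell."""
    return (int(lat // CELL_SIZE), int(lng // CELL_SIZE))

def red_zones(incidents: list[tuple[float, float]]) -> set[tuple[int, int]]:
    """Return the set of grid cells that should show red on the heat map."""
    counts = Counter(to_cell(lat, lng) for lat, lng in incidents)
    return {cell for cell, n in counts.items() if n >= RED_THRESHOLD}
```

A route planner can then prefer paths whose cells are outside the returned set.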
## 🛑 Challenges we ran into🛑
The first problem we encountered was market viability: there are many safety and crime-reporting apps on the app store. Many of them, however, were either paid, had poor user interfaces, or did not plan routes based on reported occurrences. Another challenging part was scoping the solution, since there were additional features we could have included, and we had to pick the handful most critical to getting started with the product.
Our team also began working on the hack only a day before the deadline, and we ran into some difficulties while tackling numerous problems. Learning how to work with various technologies came with a learning curve. We have ideas for other features that we'd like to include in the future, but we wanted to make sure that what we had was production-ready and had a pleasant user experience first.
## 🏆Accomplishments that we're proud of: 🏆
We arrived at a solution to this problem and created an app that is highly viable and could be widely used by women, college students, and other frequent walkers!
We also completed the front end and back end within the tight deadlines we were given, and we are quite pleased with the final outcome. We are proud that we learned so many technologies and completed the whole project with just two members on the team.
## What we learned
We discovered critical safety trends and pain points that our product may address. Over the last few years, urban centres have seen a significant increase in hate crimes and street violence, and the internet has made individuals feel even more isolated.
## 💭What's next for Walk2gether💭
Some of the features incorporated in the coming days would be addressing detailed crime mapping and offering additional facts to facilitate learning about the crimes happening.
|
Duet's music generation revolutionizes how we approach music therapy. We capture real-time brainwave data using Emotiv EEG technology, translating it into dynamic, personalized soundscapes live. Our platform, backed by machine learning, classifies emotional states and generates adaptive music that evolves with your mind. We are all intrinsically creative, but some—whether language or developmental barriers—struggle to convey it. We’re not just creating music; we’re using the intersection of art, neuroscience, and technology to let your inner mind shine.
## About the project
**Inspiration**
Duet revolutionizes the way children with developmental disabilities—approximately 1 in 10 in the United States—express their creativity through music by harnessing EEG technology to translate brainwaves into personalized musical experiences.
Daniel and Justin have extensive experience teaching music to children, but working with those who have developmental disabilities presents unique challenges:
1. Identifying and adapting resources for non-verbal and special needs students.
2. Integrating music therapy principles into lessons to foster creativity.
3. Encouraging improvisation to facilitate emotional expression.
4. Navigating the complexities of individual accessibility needs.
Unfortunately, many children are left without the tools they need to communicate and express themselves creatively. That's where Duet comes in. By utilizing EEG technology, we aim to transform the way these children interact with music, giving them a voice and a means to share their feelings.
At Duet, we are committed to making music an inclusive experience for all, ensuring that every child—and anyone who struggles to express themselves—has the opportunity to convey their true creative self!
**What it does:**
1. Wear an EEG
2. Experience your brain waves as music! Focus and relaxation levels will change how fast/exciting vs. slow/relaxing the music is.
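As a rough illustration of that mapping (not Duet's actual tuning), higher focus could push the generated tempo up while higher relaxation pulls it down; the 60-180 BPM range and the linear blend below are assumptions:

```python
def tempo_bpm(focus: float, relaxation: float) -> float:
    """Map focus and relaxation scores in [0, 1] to a tempo in BPM:
    fully calm -> 60 BPM, fully driven -> 180 BPM."""
    drive = (focus + (1.0 - relaxation)) / 2.0   # 0 = calm, 1 = driven
    drive = max(0.0, min(1.0, drive))
    return 60.0 + 120.0 * drive
```

In the real pipeline, a value like this would be one of the signals handed to the LLM "composer" alongside the recent emotional data.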
**How we built it:**
We started off by experimenting with Emotiv’s EEGs — devices that feed a stream of brain wave activity in real time! After trying it out on ourselves, the CalHacks stuffed bear, and the Ariana Grande cutout in the movie theater, we dove into coding. We built the backend in Python, leveraging the Cortex library that allowed us to communicate with the EEGs. For our database, we decided on SingleStore for its low latency, real-time applications, since our goal was to ultimately be able to process and display the brain wave information live on our frontend.
Traditional live music is done procedurally, with rules manually fixed by the developer to decide what to generate. On the other hand, existing AI music generators often generate sounds through diffusion-like models and pre-set prompts. However, we wanted to take a completely new approach — what if we could have an AI be a live “composer”, where it decided based on the previous few seconds of live emotional data, a list of available instruments it can select to “play”, and what it previously generated to compose the next few seconds of music? This way, we could have live AI music generation (which, to our knowledge, does not exist yet). Powered by Google’s Gemini LLM, we crafted a prompt that would do just that — and it turned out to be not too shabby!
To play our AI-generated scores live, we used Sonic Pi, a Ruby-based library that specializes in live music generation (think DJing in code). We fed this and our brain wave data to a frontend built in Next.js to display the brain waves from the EEG and sound spectrum from our audio that highlight the correlation between them.
**Challenges:**
Our biggest challenge was coming up with a way to generate live music with AI. We originally thought it was impossible and that the tech wasn’t “there” yet — we couldn’t find anything online about it, and even spent hours thinking about how to pivot to another idea that we could use our EEGs with.
However, we eventually pushed through and came up with a completely new method of doing live AI music generation that, to our knowledge, doesn’t exist anywhere else! It was most of our first times working with this type of hardware, and we ran into many issues with getting it to connect properly to our computers — but in the end, we got everything to run smoothly, so it was a huge feat for us to make it all work!
**What’s next for Duet?**
Music therapy is on the rise – and Duet aims to harness this momentum by integrating EEG technology to facilitate emotional expression through music. With a projected growth rate of 15.59% in the music therapy sector, our mission is to empower kids and individuals through personalized musical experiences. We plan to implement our programs in schools across the states, providing students with a unique platform to express their emotions creatively. By partnering with EEG companies, we’ll ensure access to the latest technology, enhancing the therapeutic impact of our programs. Duet gives everyone a voice to express emotions and ideas that transcend words, and we are committed to making this future a reality!
**Built with:**
* Emotiv EEG headset
* SingleStore real-time database
* Python
* Google Gemini
* Sonic Pi (Ruby library)
* Next.js
|
partial
|
## Inspiration
The COVID-19 pandemic has changed the way we go about everyday errands and trips. Along with needing to plan around wait times, distance, and reviews for a location we may want to visit, we now also need to consider how many other people will be there and whether it's even a safe establishment to visit. *Planwise helps us plan our trips better.*
## What it does
Planwise searches for the places around you that you want to visit and calculates a PlanScore that weighs a location's Google **reviews**, current **attendance** vs. usual attendance, **visits**, and **wait times**, so that locations that are rated highly, are less busy than their usual weekly attendance, and have low waiting times score well. A location's PlanScore **changes by the hour** to give users the most up-to-date information about whether they should visit an establishment. Furthermore, Planwise also **flags** common types of places that are prone to promoting the spread of COVID-19, but still allows users to search for them in case they need to visit them for **essential work**.
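A PlanScore-style weighting might look like the sketch below. The weights, caps, and normalization are our illustrative assumptions (the actual formula isn't spelled out here), chosen so that high ratings, low relative crowding, and short waits score well:

```python
def plan_score(rating: float, current_attendance: float,
               usual_attendance: float, wait_minutes: float) -> float:
    """Combine a review rating (0-5), crowding vs. usual, and wait time
    into a 0-100 score, where higher means a better time to visit."""
    rating_part = rating / 5.0                                        # 0..1
    crowd_part = 1.0 - min(1.0, current_attendance / max(usual_attendance, 1.0))
    wait_part = 1.0 - min(1.0, wait_minutes / 60.0)                   # cap at 1 hour
    return round(100.0 * (0.4 * rating_part + 0.35 * crowd_part + 0.25 * wait_part), 1)
```

Recomputing this hourly with fresh attendance data gives the "changes by the hour" behavior described above.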
## How we built it
We built Planwise as a web app with Python, Flask, and HTML/CSS. We used the Google Places and Populartimes APIs to get and rank places.
## Challenges we ran into
The hardest challenges weren't technical - they had more to do with our *algorithm* and considering the factors of the pandemic. Should we penalize an essential grocery store for being busy? Should we even display results for gyms in counties which have enforced shutdowns on them? Calculating the PlanScore was tough because a lot of places didn't have some of the information needed. We also spent some time considering which factors to weigh more heavily in the score.
## Accomplishments that we are proud of
We're proud of being able to make an application that has actual use in our daily lives. Planwise makes our lives not just easier but **safer**.
## What we learned
We learned a lot about location data and what features are relevant when ranking search results.
## What's next for Planwise
We plan to further develop the web application and start a mobile version soon! We would like to further **localize** advisory flags on search results depending on the county. For example, if a county has strict lockdown, then Planwise should flag more types of places than the average county.
|
## Inspiration
We wanted to explore more of what GCP has to offer in a practical sense, while trying to save money as poor students.
## What it does
The app tracks your location and, using the Google Maps API, calculates a geofence that notifies you of the restaurants within your vicinity and lets you load valid coupons.
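The app does its geofencing through the Google Maps API; as a self-contained approximation, the check reduces to a haversine distance test against a notification radius (the 200 m radius and function names here are assumptions):

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance between two (lat, lng) points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_geofence(user, restaurant, radius_m=200.0):
    """True if the user is within radius_m of the restaurant; both args are (lat, lng)."""
    return haversine_m(*user, *restaurant) <= radius_m
```

When a tracked location enters a restaurant's fence, the app can push that restaurant's scraped coupons.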
## How we built it
React-native, Google Maps for pulling the location, python for the webscraper (*<https://www.retailmenot.ca/>*), Node.js for the backend, MongoDB to store authentication, location and coupons
## Challenges we ran into
React Native was fairly new to us, as were linking a Python script to a Node backend and connecting Node.js to React Native.
## What we learned
Exposure to new APIs, and experience linking tools together.
## What's next for Scrappy.io
Improvements to the web scraper, potentially expanding beyond restaurants.
|
## Inspiration
As university students, we are no strangers to long hours and sleepless nights. We wanted to create something that could help us and other students like us be more efficient with their studying, so we created Notation.
## What it does
Notation is a tool that utilizes your note-taking habits, class lectures, and sample problems and creates high-quality resources that you can use to increase how effective your studying is.
## How we built it
We built our project by using React to create our front-end interface, Django and Python to retrieve and pass data and API information, and OpenAI API with AWS API to utilize AI libraries and algorithms to produce new learning resources.
## Challenges we ran into
During development, we ran into numerous problems, from setup issues to working with completely new libraries and APIs, and were challenged from start to finish.
## Accomplishments that we're proud of
And yet every issue we were challenged with, we managed to solve. In the end, we managed to create a significant resource that students can use to improve the effectiveness of their studying.
## What we learned
Throughout the process, we learnt how to use new libraries and frameworks, we learnt how to plan and design code, but most importantly, we learnt how to solve problems when it seemed like there was no solution and to keep going, problem after problem.
## What's next for Notation
We want to continue to add more to Notation. In the future, we hope to implement functionality that highlights similar ideas across topics, and the ability to detect mistakes in homework. For now, we hope you check out our project and find use in it yourself. Thank you for your interest!
|
winning
|
## Inspiration
The Housing market is currently booming, yet there are many in our society who are homeless, near homeless and are in desperate need of financial assistance for housing. This issue is exacerbated for people from minority backgrounds and ethnicities, who have been discriminated against for generations with various types of biased housing policies and access to finance.
## What it does
FairhouseCoin addresses these issues by using anonymization and blockchain technology to bring a fairer, more equitable and more efficient housing market to all.
Using FairhouseCoin, an investor can buy into a mortgage instrument that combines houses from the most expensive and least expensive areas into a single instrument, spreading the risk and making the scheme profitable. This then reduces the burden on potential homebuyers, who can use FairhouseCoin tokens to apply for mortgages.
## Uniswap video
<https://www.youtube.com/watch?v=R7ZeQBucBik>
## How we built it
FairHouseCoin (FHC) is an ERC20 token which can be traded for ETH on Uniswap. This allows multiple people to have joint possession of a property with fractional income sharing. The FHC/ETH trading pair was set up on Uniswap; the housing and mortgage data was taken from datasets published by the Fed and Zillow and stored in CockroachDB. The frontend allows an applicant to apply for a mortgage and transforms the application into an anonymized FHC mortgage.
## Challenges we ran into
Synchronization and integration. Uniswap was new to all of us, and we were all working remotely from each other.
## Accomplishments that we're proud of
Working prototype on a new technology
## What we learned
There is an absolutely ridiculous amount of bias in the housing industry both at the private and at the policy level
## What's next for FairHouse
Address issues of accessibility to housing markets by introducing mainstream or stablecoin swap pairs
|
## Inspiration
Natural disasters are getting more common. Resource planning is tough for counties, as well as busy people who have jobs to go to while the storm brews miles away. DisAstro Plan was born to help busy families automate their supply list and ping them about storm updates in case they need to buy earlier than expected.
## What it does
## How I built it
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for DisAstro Plan
|
## Inspiration
Learning a new instrument is hard. Inspired by games like Guitar Hero, we wanted to make a fun, interactive music experience but also have it translate to actually learning a new instrument. We chose the violin because most of our team members had never touched a violin prior to this hackathon. Learning the violin is also particularly difficult because there are no frets, such as those on a guitar, to help guide finger placement.
## What it does
Fretless is a modular attachment that can be placed onto any instrument. Users can upload any MIDI file through our GUI. The file is converted to music numbers and sent to the Arduino, which then lights up LEDs at locations corresponding to where the user needs to press down on the string.
## How we built it
Fretless is composed of software and hardware components.
We used a python MIDI library to convert MIDI files into music numbers readable by the Arduino. Then, we wrote an Arduino script to match the music numbers to the corresponding light. Because we were limited by the space on the violin board, we could not put four rows of LEDs (one for each string). Thus, we implemented logic to color code the lights to indicate which string to press.
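The note-to-light mapping could be sketched as below. The open-string MIDI numbers are standard violin tuning (G3=55, D4=62, A4=69, E5=76); the function name and the tie-breaking rule (prefer the string whose open pitch is closest below the note) are illustrative assumptions, not Fretless's exact logic:

```python
# Assign each MIDI note to a color-coded string (the board couldn't fit a
# row of LEDs per string) and a position along that string.
OPEN_STRINGS = {"G": 55, "D": 62, "A": 69, "E": 76}

def note_to_light(midi_note: int) -> tuple[str, int]:
    """Return (string_name, semitones_above_open) for a playable MIDI note."""
    candidates = [(name, midi_note - open_n)
                  for name, open_n in OPEN_STRINGS.items()
                  if open_n <= midi_note]
    if not candidates:
        raise ValueError(f"note {midi_note} is below the violin's range")
    # Prefer the string whose open pitch is closest below the note.
    return min(candidates, key=lambda c: c[1])
```

The Arduino side then lights the LED at that position in the color assigned to that string.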
## Challenges we ran into
One of the challenges we faced is that only one member on our team knew how to play the violin. Thus, the rest of the team was essentially learning how to play the violin and coding the functionalities and configuring the electronics of Fretless at the same time.
Another challenge we ran into was the lack of hardware available. In particular, we weren’t able to check out as many LEDs as we needed. We also needed some components, like a female DC power adapter, that were not present at the hardware booth. And so, we had limited resources and had to make do with what we had.
## Accomplishments that we're proud of
We’re really happy that we were able to create a working prototype together as a team. Some of the members on the team are also really proud of the fact that they are now able to play Ode to Joy on the violin!
## What we learned
Do not crimp lights too hard.
Things are always harder than they seem to be.
Ode to Joy on the violin :)
## What's next for Fretless
We can make the LEDs smaller and less intrusive on the violin, ideally an LED pad that covers the entire fingerboard. Also, we would like to expand the software to include more instruments, such as cello, bass, guitar, and pipa. Finally, we would like to incorporate a PDF-sheet-music-to-MIDI converter so that people can learn to play a wider range of songs.
|
losing
|
## Inspiration
As a lazy engineering student I hate getting up to pull the chain to open my blinds every morning. As a result of quarantine I go days without seeing sunlight. Lack of sunlight exposure has been proven to have effects on mental health and I wanted to use hardware to solve my problem.
## What it does
The IoT smart blind system automatically opens and closes the blinds using your voice or smart device.
## How we built it
I used an ESP32 as the brains, as it is compatible with the esphome library. I added an L298N motor controller to manage an encoded motor, and created a 3D-printed enclosure and custom 3D-printed gears for the motor.
## Challenges we ran into
The ESP32 is a very finicky board that required many hours of debugging to get working. Figuring out a feedback loop system similar to PID between the motor encoder and the motor driver was a huge challenge. Figuring out what gear size and ratio to 3D print to fit my current blinds was also a tedious trial-and-error process.
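The firmware runs on the ESP32, but the feedback loop mentioned above can be sketched language-agnostically as a single proportional-control step: drive the motor with a signal proportional to how far the encoder says the blinds are from the target. The gain and clamp values are illustrative assumptions, not the tuned ones:

```python
def p_control_step(target_ticks: int, encoder_ticks: int,
                   kp: float = 0.5, max_duty: float = 255.0) -> float:
    """Return a signed motor duty cycle for one control-loop iteration."""
    error = target_ticks - encoder_ticks
    duty = kp * error            # proportional term only; a full PID adds I and D
    return max(-max_duty, min(max_duty, duty))
```

Run in a loop, this slows the motor as the blinds approach the target position instead of overshooting.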
## Accomplishments that we're proud of
I am proud that I was able to create a working smart home blind system using an existing open source library in a way that was not done before.
## What we learned
This is my first every hardware hackathon and I learned how motors, motor encoders, microcontrollers, PWM, and PID work. I also learned how to solder!
## What's next for IoT Smart Home Blinds
The next step for my hack is to change jumper wires to substantial AWG wires and package up my project in a more robust manner.
|
## Inspiration
As university students, we stare at screens 24/7. This exposes us to many habits unhealthy for the eyes, such as reading something too close for too long, or staring at something too bright for too long. We built the smart glasses to encourage healthy habits that reduce eye strain.
## What it does
In terms of measurements, it uses a distance sensor to compare your reading distance against a threshold, and a light sensor to compare your light exposure levels against another threshold. A buzzer buzzes when the thresholds are crossed. That prompts the user to read a warning on the web app, which informs the user what threshold has been crossed.
## How we built it
In terms of hardware, we used the HC-SR04 ultrasonic sensor as the distance sensor and a photoresistor as the light sensor. They are connected to an Arduino, and their data is displayed on the Arduino's serial terminal. For the web app, we used JavaScript to read from the serial terminal and compare the data against the thresholds for healthy reading distance and light exposure levels.
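The threshold comparison itself is simple enough to sketch (the real app does it in JavaScript over the serial stream, and on the desktop side the samples could also be read with pyserial); the threshold values below are illustrative assumptions, not the calibrated ones:

```python
MIN_DISTANCE_CM = 30.0      # reading closer than this strains the eyes (assumption)
MAX_LIGHT_LEVEL = 800.0     # photoresistor reading above this is too bright (assumption)

def check_reading(distance_cm: float, light_level: float) -> list[str]:
    """Return the list of warnings to show in the web app for one sample."""
    warnings = []
    if distance_cm < MIN_DISTANCE_CM:
        warnings.append("too close")
    if light_level > MAX_LIGHT_LEVEL:
        warnings.append("too bright")
    return warnings
```

Any non-empty result would trigger the buzzer and the corresponding warning text in the web app.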
## Challenges we ran into
In terms of hardware, we intended to send the sensor data to our laptops via the HC-05 Bluetooth module, but were unable to establish a connection. For the web app, there are security protocols surrounding the use of serial terminals. We also intended to make a browser extension that displays warnings for the user, but many capabilities for extensions were discontinued, so we weren't able to take that approach.
## Accomplishments that we're proud of
We overcame the security protocols when making the Javascript read from the serial port. We also were able to build a fully functional product.
## What we learned
* Content Security Policy (CSP) modifications
* Capabilities and limitations of extensions
* How to use Python to capture Analog Sensor Data from an Arduino Uno
## What's next for iGlasses
We can integrate computer vision to identify the object we are measuring the distance of.
We can integrate some form of wireless connection to send data from the glasses to our laptops.
We can implement the warning feature on a mobile app. In the app, we display exposure data similar to the Screen Time feature on phones.
We can sense other data useful for determining eye health.
|
## Inspiration
Ever felt a shock wondering where your monthly salary or pocket money went by the end of the month? When did you spend it? Where did you spend it all? Why did you spend it? And how can you save so you don't make the same mistake again?
There has been endless progress and technical advancement in how we handle day-to-day financial dealings, be it through Apple Pay, PayPal, or now cryptocurrencies, as well as in the financial instruments crucial to building one's wealth, such as stocks and bonds. But all of these amazing tools cater to a very small demographic: 68% of the world population remains financially illiterate. Most schools do not cover personal finance in their curriculum. To let these high-end technologies reach a larger audience, we need to work at the ground level and attack the fundamental blocks around finance in people's mindsets.
We want to use technology to elevate the world's consciousness around their personal finance.
## What it does
Where's My Money is an app that takes in financial jargon and simplifies it for you, giving you a taste of managing your money without suffering real losses, so that you can make wiser decisions in real life.
It is a financial literacy app that teaches you A-Z about managing and creating wealth in a layman's, gamified manner. You start as a person who earns $1000 monthly. As you complete each module, you are hit with a set of questions that make you ponder how you would deal with different situations. After completing each module you are rewarded with some bonus money, which can then be used in our stock exchange simulator. You complete courses, earn money, and build virtual wealth.
Each quiz captures data on your overall attitude towards finance: is it inclined more towards saving or towards spending?
## How we built it
The project was not simple at all. Keeping in mind the various components of the app, we first created a fundamental architecture for how our app would function - shorturl.at/cdlxE
Then we took it to Figma where we brainstormed and completed design flows for our prototype -
Then we started working on the App-
**Frontend**
* React.
**Backend**
* Authentication: Auth0
* Storing user-data (courses completed by user, info of stocks purchased etc.): Firebase
* Stock Price Changes: Based on real-time prices using a free-tier API (Alpha Vantage/Polygon)
## Challenges we ran into
The time constraint was our biggest challenge. The project was very backend-heavy and it was a big challenge to incorporate all the backend logic.
## What we learned
We researched the state of financial literacy, which helped us make a better product. We also learnt about APIs like Alpha Vantage that provide real-time stock data.
## What's next for Where’s my money?
We are looking to complete the backend of the app to make it fully functional. Also looking forward to adding more course modules for more topics like crypto, taxes, insurance, mutual funds etc.
Domain Name: learnfinancewitheaseusing.tech (Learn-finance-with-ease-using-tech)
|
losing
|
This is a bot that can give information about the Dominion card game.
It can answer the following questions and perform the following actions:
1. Basic information about a particular card: its image, cost, link to the Dominion Strategy Wiki page
2. List of cards in the Expansion
3. Generate a list of 10 cards to be used in a game of Dominion, with potential for selecting desired Expansions.
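Feature 3, the 10-card kingdom generator, can be sketched as a random sample filtered by expansion. The card pool below is a made-up stand-in for the bot's Azure SQL data, and the real bot runs on the Microsoft Bot Framework rather than Python:

```python
# Illustrative sketch of the 10-card kingdom generator with expansion filtering.
import random

CARD_POOL = {  # hypothetical subset; the real data lives in Azure SQL
    "Base": ["Village", "Smithy", "Market", "Militia", "Cellar", "Moat",
             "Workshop", "Remodel", "Mine", "Laboratory", "Festival"],
    "Seaside": ["Wharf", "Fishing Village", "Lighthouse", "Caravan"],
}

def generate_kingdom(expansions, n=10, rng=random):
    """Pick n distinct kingdom cards from the selected expansions."""
    pool = [card for exp in expansions for card in CARD_POOL[exp]]
    if len(pool) < n:
        raise ValueError("not enough cards in the selected expansions")
    return sorted(rng.sample(pool, n))

kingdom = generate_kingdom(["Base", "Seaside"])
print(len(kingdom))  # 10
```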
The following technologies were used:
1. Microsoft Bot Framework
2. Microsoft Azure (including Azure SQL database)
3. Language Understanding Intelligent Service (LUIS)
The bot was deployed on Telegram messenger
|
## Inspiration
As busy programmers, we are very well acquainted with copy and pasting error output into google and hoping stack overflow has an answer for us. With the advent and sudden popularity of generative AI, the potential to **level-up** our debugging game was apparent. However, it is extremely inconvenient to have to open a new tab (log in if necessary), and prompt Chat-GPT with repetitive commands to understand our error.
This is where *HAL* comes in.
## What it does
What HAL does - with no intended malice - is prompt OpenAI's ubiquitous generative AI, from the command line or the extension, to explain the error output of a program if there is one. What makes HAL so special is that it can tell when an output is an error or not - not through LLM inference - but rather through rigorous programming.
HAL comes in two forms, a *Command-Line-Interface* and a *Visual Studio Code Extension*. Both perform the same analysis with varying degrees of knowledge.
When you use HAL from the command line as a *CLI*, it spawns a new terminal where the input is executed as any terminal command would be. The file's contents and error output (if any) get sent to the AI to be interpreted - and displayed to the user.
*Alternatively*, the VSCode extension - when activated - will spawn a web view that allows you to input the debug error and see the interpreted response.
Currently HAL only knows Node.js (JS) but his capabilities can be expanded to include other languages.
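The "is this output an error?" check can be illustrated with exit codes rather than LLM inference. The real tool is written in Node.js; this Python sketch only shows the idea:

```python
# Minimal sketch: run a command, decide from its exit code whether the
# output is an error, and capture the stderr text we'd send to the AI.
import subprocess
import sys

def run_and_check(cmd):
    """Run a command, returning (is_error, stderr_text)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode != 0, result.stderr

ok, _ = run_and_check([sys.executable, "-c", "print('hi')"])
bad, stderr = run_and_check([sys.executable, "-c", "raise ValueError('boom')"])
print(ok, bad)  # False True
```

Only when `is_error` is true would the file contents and stderr be forwarded to the OpenAI API for interpretation.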
## How we built it
* `npm i`
* in `dir` "server"
+ `npm i`
+ `npm i express`
+ `npm i nodemon`
+ `npm i openai`
+ `npm i http`
+ `npm i cors`
+ `npm i path`
* in `dir` "cli"
+ create `.env` file with `OPENAI_API_KEY` value
+ `npm i readline`
+ `npm i child_process`
+ `npm i fs`
+ `npm i express`
+ `npm i cors`
## Challenges we ran into
We ran into challenges where we didn't know how to handle so many asynchronous calls. We kept console logging pending promises. This was definitely an experience to learn much about `APIs`, `JavaScript`, and `promises`.
## Accomplishments that we're proud of
Our project earned us a referral from a JPMC employee.
## What we learned
More about promises in JS and asynchronous API calls as well as how to build VSCode applications - and the limitations of the API provided.
## What's next for HAL
* **Word Domination**
* Support more languages
* Clean up the web view extension a little
* Integrate web view into activity bar
|
## Inspiration
The Canadian winter's erratic bouts of chilling cold have caused people who have to be outside for extended periods of time (like avid dog walkers) to suffer from frozen fingers. The current method of warming up your hands using hot pouches that don't last very long is inadequate in our opinion. Our goal was to make something that kept your hands warm and *also* let you vent your frustrations at the terrible weather.
## What it does
**The Screamathon3300** heats up the user's hand based on the intensity of their **SCREAM**. It interfaces an *analog electret microphone*, *LCD screen*, and *thermoelectric plate* with an *Arduino*. The Arduino continuously monitors the microphone for changes in volume intensity. When an increase in volume occurs, it triggers a relay, which supplies 9 volts, at a relatively large amperage, to the thermoelectric plate embedded in the glove, thereby heating the glove. Simultaneously, the Arduino will display an encouraging prompt on the LCD screen based on the volume of the scream.
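The control loop described above boils down to a threshold rule on microphone volume. A pure-Python sketch — the real logic is an Arduino sketch in C, and the thresholds and LCD prompts here are made up:

```python
# Hypothetical scream-to-heat control rule. QUIET/LOUD are assumed 10-bit
# ADC readings from the electret microphone; prompts are illustrative.
QUIET, LOUD = 300, 700

def control(volume):
    """Map a microphone reading to (relay_on, lcd_prompt)."""
    if volume >= LOUD:
        return True, "MAXIMUM HEAT! KEEP SCREAMING!"
    if volume >= QUIET:
        return True, "Nice scream, warming up..."
    return False, "Scream to warm your hands"

print(control(800))  # (True, 'MAXIMUM HEAT! KEEP SCREAMING!')
print(control(100))  # (False, 'Scream to warm your hands')
```

When `relay_on` is true, the Arduino would energize the relay connecting the 9-volt supply to the thermoelectric plate.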
## How we built it
The majority of the design process was centered around the use of the thermoelectric plate. Some research and quick experimentation helped us conclude that the thermoelectric plate's increase in heat was dependent on the amount of supplied current. This realization led us to use two separate power supplies -- a 5 volt supply from the Arduino for the LCD screen, electret microphone, and associated components, and a 9 volt supply solely for the thermoelectric plate. Both circuits were connected through the use of a relay (dependent on the Arduino output) which controlled the connection between the 9 volt supply and thermoelectric load. This design decision provided electrical isolation between the two circuits, which is much safer than having common sources and ground when 9 volts and large currents are involved with an Arduino and its components.
Safety features directed the rest of our design process, like the inclusion of a kill-switch which immediately stops power being supplied to the thermoelectric load, even if the user continues to scream. Furthermore, a potentiometer placed in parallel with the thermoelectric load gives control over how quickly the increase in heat occurs, as it limits the current flowing to the load.
## Challenges we ran into
We tried to implement a feedback loop with ambient temperature sensors, but even with large temperature changes at the plate, the sensors registered only very small changes. Our goal of an optional, non-scream-controlled mode ultimately failed because we lacked a working sensor feedback system.
Since we did not own components such as the microphone, relay, or battery pack, we could not solder many connections, so we could not make a permanent build.
## Accomplishments that we're proud of
We're proud of using a unique transducer (thermoelectric plate) that uses an uncommon trigger (current instead of voltage level), which forced us to design with added safety considerations in mind.
Our design was also constructed of entirely sustainable materials, other than the electronics.
We also used a seamless integration of analog and digital signals in the circuit (baby mixed signal processing).
## What we learned
We had very little prior experience interfacing thermoelectric plates with an Arduino. We learned to effectively leverage analog signal inputs to reliably trigger our desired system output, as well as manage physical device space restrictions (for it to be wearable).
## What's next for Screamathon 3300
We love the idea of people having to scream continuously to get a job done, so we will expand our line of *Scream* devices, such as the scream-controlled projectile launcher, scream-controlled coffee maker, scream-controlled alarm clock. Stay screamed-in!
|
losing
|
## Inspiration
Many connected objects and services are now available, tracking all kinds of information about the user: time slept, distance traveled, geolocation, tastes, weather - absolutely anything. But this segmented data is wasted. Point compiles this data and simplifies the user's life.
## What it does
Point understands your needs by learning from your habits to ultimately deliver true smart experiences.
## How we built it
Point is an open platform, combining data from all your connected objects and services and making sense of this huge amount of information to figure out the user's precise context at any given time. It then shares this context awareness with your whole environment of objects and services, adapting autonomously to your own specific needs and situation.
## What's next for Point
Work is in progress ;)
|
## Inspiration
We wanted to build the most portable and accessible device for augmented reality.
## What it does
This application uses location services to detect where you are located and if you are in proximity to one of the landmark features based on a predefined set of coordinates. Then a 3d notification message appears with the name and information of the location.
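The proximity check can be sketched as a haversine distance test against the predefined coordinate set. The landmark data and trigger radius below are illustrative assumptions:

```python
# Sketch of landmark proximity detection from lat/lon coordinates.
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def nearby_landmark(lat, lon, landmarks, radius_m=100):
    """Return the first landmark within radius_m of the user, or None."""
    for name, (llat, llon) in landmarks.items():
        if haversine_m(lat, lon, llat, llon) <= radius_m:
            return name
    return None

landmarks = {"Campanile": (37.8721, -122.2578)}  # illustrative entry
print(nearby_landmark(37.8722, -122.2578, landmarks))  # Campanile
```

A match would trigger rendering the 3D notification message with the location's name and information.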
## How we built it
We built on a stack with Objective-C, OpenGL, NodeJS, MongoDB, and Express using XCode. We also did the 3d modelling in Blender to create floating structures.
## Challenges we ran into
Math is hard. Real real hard. Dealing with 3D space, there were a lot of natural challenges dealing with calculations of matrices and quaternions. It was also difficult to calibrate for the scaling between the camera feed calculated in arbitrary units and the real world.
## Accomplishments that we're proud of
We created a functional 3-D augmented reality viewer complete with parallax effects and nifty animations. We think that's pretty cool.
## What we learned
We really honed our skills in developing within the 3-D space.
## What's next for Tour Aug
Possible uses could include location-based advertising and the inclusion of animated 3d characters.
|
## Inspiration
Many students rely on scholarships to attend college. As students in different universities, the team understands the impact of scholarships on people's college experiences. When scholarships fall through, it can be difficult for students who cannot attend college without them. In situations like these, they have to depend on existing crowdfunding websites such as GoFundMe. However, platforms like GoFundMe are not necessarily the most reliable solution as there is no way of verifying student status and the success of the campaign depends on social media reach. That is why we designed ScholarSource: an easy way for people to donate to college students in need!
## What it does
ScholarSource harnesses the power of blockchain technology to enhance transparency, security, and trust in the crowdfunding process. Here's how it works:
Transparent Funding Process: ScholarSource utilizes blockchain to create an immutable and transparent ledger of all transactions and donations. Every step of the funding process, from the initial donation to the final disbursement, is recorded on the blockchain, ensuring transparency and accountability.
Verified Student Profiles: ScholarSource employs blockchain-based identity verification mechanisms to authenticate student profiles. This process ensures that only eligible students with a genuine need for funding can participate in the platform, minimizing the risk of fraudulent campaigns.
Smart Contracts for Funding Conditions: Smart contracts, powered by blockchain technology, are used on ScholarSource to establish and enforce funding conditions. These self-executing contracts automatically trigger the release of funds when predetermined criteria are met, such as project milestones or the achievement of specific research outcomes. This feature provides donors with assurance that their contributions will be used appropriately and incentivizes students to deliver on their promised objectives.
Immutable Project Documentation: Students can securely upload project documentation, research papers, and progress reports onto the blockchain. This ensures the integrity and immutability of their work, providing a reliable record of their accomplishments and facilitating the evaluation process for potential donors.
Decentralized Funding: ScholarSource operates on a decentralized network, powered by blockchain technology. This decentralization eliminates the need for intermediaries, reduces transaction costs, and allows for global participation. Students can receive funding from donors around the world, expanding their opportunities for financial support.
Community Governance: ScholarSource incorporates community governance mechanisms, where participants have a say in platform policies and decision-making processes. Through decentralized voting systems, stakeholders can collectively shape the direction and development of the platform, fostering a sense of ownership and inclusivity.
## How we built it
We used React and Nextjs for the front end. We also integrated with ThirdWeb's SDK that provided authentication with wallets like Metamask. Furthermore, we built a smart contract in order to manage the crowdfunding for recipients and scholars.
## Challenges we ran into
We had trouble integrating MetaMask and Thirdweb after writing the Solidity contract: our configuration kept throwing errors until we correctly configured the HTTP/HTTPS link.
## Accomplishments that we're proud of
Our team is proud of building a full end-to-end platform that incorporates the very essence of blockchain technology. We are very excited that we are learning a lot about blockchain technology and connecting with students at UPenn.
## What we learned
* Aleo
* Blockchain
* Solidity
* React and Nextjs
* UI/UX Design
* Thirdweb integration
## What's next for ScholarSource
We are looking to expand to and incorporate multiple blockchains like Aleo. We are also looking to onboard users as we continue to expand and add new features.
|
partial
|
## Inspiration
Given the current state of events, we were inspired to create a device that was able to mitigate the transfer of bacteria through commonly touched surfaces, as well as provide a more efficient method of keeping track of the number of people entering or exiting a room for health and safety purposes. Due to the fact that this process is carried out manually, it is not possible to ensure that door handles are sterilized as often as they should be, as well, it is very difficult to keep track of the total occupancy of a room, especially if there are multiple entrances. Therefore, by creating this device we ensure that each and every doorknob is sanitized on a needed basis. It is our hope that the implementation of DHD will drastically reduce the transmission of the virus and other diseases.
## What it does
Our product DHD functions by detecting whether a person is approaching a door through the use of an ultrasonic sensor. Once the person opens the door, the Hall effect sensor detects a change in the magnetic field, as opening the door separates the magnet from the sensor, which triggers the disinfection process. This is indicated by a blue LED turning on. The spraying process then begins as a DC motor rotates and pulls the trigger on the spray bottle for five seconds. Next, a fan turns on to dry the doorknob so the handle is ready for the next user. This concludes the disinfecting process. The LCD display then registers that one person has entered the room and the "People Counter" increases by 1. After about 500 sprays, a red LED turns on to indicate that the spray bottle needs to be refilled. Once the device has been supplied with a new sanitization spray bottle, the user must press a push-button to reactivate the device. To account for an individual leaving a room, after detecting a door closing, an ultrasonic sensor checks for an increase in distance as the person walks away. This is registered as an exit, and the counter of the number of people in the room decreases by 1. Essentially, the prototype is responsible for disinfecting the door handle after each use.
The desktop application functions via a Bluetooth module on the Arduino which is able to transfer the data it collects. Each disinfecting device installed transmits data about its current status, whether the device is on and ready, disinfecting, or off. It is also able to transmit the current number of people within a room, in addition to the maximum allowable occupancy (based on the current health restrictions), and detects cases where the room or facility is overcrowded. Finally, the application keeps track of the status of the devices installed in all rooms and tabulates them.
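The occupancy bookkeeping the desktop application performs can be sketched as a small state machine. The event names and occupancy limit below are illustrative assumptions; the real app receives this data from each device over Bluetooth:

```python
# Sketch of per-room occupancy tracking and overcrowding detection.
class Room:
    def __init__(self, max_occupancy):
        self.max_occupancy = max_occupancy
        self.count = 0

    def handle(self, event):
        """Update the people counter; return True if the room is overcrowded."""
        if event == "entry":
            self.count += 1
        elif event == "exit":
            self.count = max(0, self.count - 1)
        return self.count > self.max_occupancy

room = Room(max_occupancy=2)
for e in ["entry", "entry", "entry"]:
    overcrowded = room.handle(e)
print(room.count, overcrowded)  # 3 True
```

The application would tabulate one such `Room` per installed device, alongside each device's on/disinfecting/off status.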
## Who is it intended for?
This device was mainly built for installation in Office Buildings and for other businesses. As discussed previously, it helps create a safer work environment for employees and helps monitor the occupancy in the entire building autonomously. The enterprise would give an employee or security personnel the job of monitoring the occupancy limit in the various rooms every hour and that of refilling the spray bottles once a day, or as needed.
## How we built it
DHD was built through a collaborative process that involved one member constructing the entire prototype while the other two members worked simultaneously to develop code for the device, as well as creating a desktop application that would collect emitted data from the Bluetooth module and organize it in a user-friendly manner. The prototype was constructed through the use of an Arduino, which was able to control all of the electronic components.
## Challenges we ran into
Due to the fact that we had limited access to resources, it was a challenge to create an entire working prototype with the hardware we had. Specifically, one significant challenge was that the member responsible for the prototype did not have access to a Bluetooth module, so testing the desktop application in conjunction with the prototype was not possible. Although both projects function as they should - we simulated the connection between the prototype and the desktop app using a separate Arduino and Bluetooth module - we would need to meet in person to ensure that they function together.
## Accomplishments that we are proud of
Given the online nature of the Hackathon, we are very proud to say that we were able to create an entire functional prototype, as well as a desktop application, within a limited time frame. With few resources and capabilities to collaborate together, putting everything together was an accomplishment on its own.
## What we learned
With all of us coming from the discipline of Mechatronics, it was a challenge to push ourselves outside the scope of what we are taught in class. Since most of what we learn is theoretical, being able to apply our skills practically was an interesting experience. We explored a variety of sensors and other electronic components, and worked with Java collaboratively online to create a desktop application.
## What's next for DHD Solutions
Moving forward we hope to equip DHD with a UV light in order to ensure the effective sanitization of each door handle. In addition, we hope to organize the electronic components more efficiently so that the device can be as compact as possible, which will include our own PCB design to minimize any extra unnecessary components. Finally, we hope to install a protocol that will adapt the spraying nozzle to accommodate any door handle and to be able to sanitize each one thoroughly.
|
## Overview
AOFS is an automatic sanitization robot that navigates around spaces, detecting doorknobs using a custom trained machine-learning algorithm and sanitizing them using antibacterial agent.
## Inspiration
It is known that in hospitals and other public areas, infections spread via our hands. Door handles, in particular, are one place where germs accumulate. Cleaning such areas is extremely important, but hospitals are often short of staff and the sanitization may not be done as often as it should be. We therefore wanted to create a robot that would automate this, which both frees up healthcare staff for more important tasks and ensures that public spaces remain clean.
## What it does
AOFS travels along walls in public spaces, monitoring them. When a door handle is detected, the robot stops and automatically sprays it with antibacterial agent to sanitize it.
## How we built it
The body of the robot came from a broken Roomba. Using two ultrasonic sensors for movement and a mounted webcam for detection, it navigates along walls and scans for doors. Our doorknob-detecting computer vision algorithm is trained via transfer learning on the [YOLO network](https://pjreddie.com/darknet/yolo/) (one of the state-of-the-art real-time object detection algorithms) using custom collected and labelled data: starting from the pre-trained weights, we froze all 256 layers except the last three, which we re-trained on our data using a Google Cloud server. The trained algorithm runs on a Qualcomm Dragonboard 410c, which then relays information to the Arduino.
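The wall-following portion of the navigation can be sketched as a simple distance-keeping rule over the two ultrasonic readings. The target distance, tolerance, and action names below are illustrative assumptions, not the robot's actual tuning:

```python
# Hypothetical two-sensor wall-following rule: keep the wall-side sensor
# near a target distance and turn away when the front sensor sees an obstacle.
TARGET_CM, TOLERANCE_CM = 20.0, 3.0  # assumed tuning values

def steer(front_cm, side_cm):
    """Decide a steering action from the front and wall-side distances."""
    if front_cm < TARGET_CM:            # obstacle ahead: turn away from wall
        return "turn_away"
    if side_cm > TARGET_CM + TOLERANCE_CM:
        return "steer_toward_wall"      # drifting away from the wall
    if side_cm < TARGET_CM - TOLERANCE_CM:
        return "steer_away_from_wall"   # too close to the wall
    return "forward"

print(steer(100, 21))  # forward
print(steer(10, 21))   # turn_away
```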
## Challenges we ran into
Gathering and especially labelling our data was definitely the most painstaking part of the project, as all doorknobs in our dataset of over 3000 pictures had to be boxed by hand. Training the network then also took a significant amount of time. Some issues also occurred because the serial interface is not native to the Qualcomm Dragonboard.
## Accomplishments that we're proud of
We managed to implement all hardware elements such as pump, nozzle and electrical components, as well as an algorithm that navigated using wall-following. Also, we managed to train an artificial neural network with our own custom made dataset, in less than 24h!
## What we learned
Hacking existing hardware for a new purpose, creating a custom dataset and training a machine learning algorithm.
## What's next for AOFS
Increasing our training dataset to incorporate more varied images of doorknobs and training the network on more data for a longer period of time. Using computer vision to incorporate mapping of spaces as well as simple detection, in order to navigate more intelligently.
|
## Inspiration
With the recent elections, we found it hard to keep up with news outside of the States and to hear people's perspectives on a global scale. We wanted to make it easier to see and read news from all across the world, to bring a bird's-eye view to our local problems and local popular opinions. Geovibes was built to inform.
## What it does
Geovibes is a news feed infograph, displaying news articles from all across the world on an interactive map in real time. We've included pictures of our previous version, from which we pivoted about 7 hours ago. We were attempting to build a 3D interactive globe displaying the same news as the current iteration of Geovibes does.
## How we built it
*Frontend*
* amchart (<https://www.amcharts.com/>)
* lots of handcrafted javascript
* lots of handcrafted css
* mapping coordinates to png
* REST requests to flask api
*Backend*
* flask api server
* threads and file lock
*Data pipeline*
* lots of quota-exceeding api subscriptions
* lots of batch processing
* mapping coordinates to locations
* piping one api's results to the next (Bing News Search, Microsoft Text Analytics)
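The "mapping coordinates to png" step above amounts to projecting latitude/longitude onto the flat map image. A minimal sketch, assuming an equirectangular map (the image dimensions are illustrative):

```python
# Project WGS84 lat/lon to (x, y) pixels on an equirectangular map image,
# so each geolocated article can be pinned at the right spot.
def latlon_to_pixel(lat, lon, width, height):
    x = (lon + 180.0) / 360.0 * width   # -180..180 -> 0..width
    y = (90.0 - lat) / 180.0 * height   # 90..-90  -> 0..height
    return round(x), round(y)

print(latlon_to_pixel(0, 0, 1000, 500))      # (500, 250) — mid-Atlantic
print(latlon_to_pixel(90, -180, 1000, 500))  # (0, 0) — top-left corner
```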
## Challenges we ran from
*Things we pivoted from*
* angular 2
* three.js
* web gl
* twgl (thin wrapper for web gl)
* kartograph.js
* d3.js
* coordinate mapping, especially from window cursor location to 3D globe coordinates within an HTML canvas
* an overambitious idea for an unprepared team
## Accomplishments that I'm proud of
Everything
## What I learned
* batch processing
* concurrency and thread communication
* web gl stuff, and three.js, and the struggles
* coordinate mapping, especially from window cursor location to 3D globe coordinates within an HTML canvas
* angular 2
## What's next for geovibes
get the fkin three.js version working
|
partial
|
## Inspiration
Every single Berkeley student knows that finding housing near campus is one of the most stressful experiences of their lives. (Yes, even more than school sometimes.) We just wanted to make an app that would make the process less difficult. Currently there isn't a good consolidated listing of living spaces; we either have to look through Craigslist or call landlords directly. It's also difficult because we have to consider SO many factors, like price, number of people, and location.
## What it does
Makes the house search less stressful!
It provides a platform for sellers to easily upload pictures and info about the places they're renting out.
For general home-seeking population, all the locations are displayed on a map with thumbnails that allow them to easily see info about the place.
## How we built it
We build the app using Swift and Firebase.
Also implemented the Google APIs and SDKs for maps and places.
## Challenges we ran into
Getting Firebase to work properly
Learning Swift
## Accomplishments that we're proud of
Even though we didn't get to everything we wanted, neither of us was familiar with Swift or Firebase coming in, so it's honestly quite amazing to see what we could build.
## What we learned
SOO much.
How to use swift. (Creating storyboards and views, creating segues, the language)
and the GoogleMaps SDK (displaying maps, getting locations, placing pins)
and Firebase (setup, adding to the database and storage)
## What's next for HouseMe
Our main goal is to give the user abilities to filter what listings they see. Most students looking have very specific details like price ranges, number of roommates, and location in mind. It would be SUPER super helpful if there was something that could do that for us.
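The planned filtering could look something like the sketch below; the listing fields are assumptions based on the factors mentioned (price, roommates, location), and the real app would run this against Firebase data in Swift:

```python
# Hypothetical listing filter: each argument is optional, and only the
# filters the user actually set are applied.
def filter_listings(listings, max_price=None, max_roommates=None, area=None):
    out = []
    for l in listings:
        if max_price is not None and l["price"] > max_price:
            continue
        if max_roommates is not None and l["roommates"] > max_roommates:
            continue
        if area is not None and l["area"] != area:
            continue
        out.append(l)
    return out

listings = [
    {"price": 900, "roommates": 2, "area": "Southside"},
    {"price": 1500, "roommates": 1, "area": "Northside"},
]
print(filter_listings(listings, max_price=1000))  # keeps only the $900 listing
```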
|
## Inspiration
Many diffusion models are trained on artists' works without their consent, and artists are powerless against web scrapers.
## What it does
It takes an image from an artist and makes human-invisible changes to cause web-scraping classifiers to misidentify the image.
## How we built it
React, Flask, and PyTorch
## Accomplishments that we're proud of
Proud of figuring out how React and Flask worked in one day
## What's next
The next step is fooling more classifiers and deployment.
|
## Inspiration
Inspired by a team member's desire to study for his courses by listening to his textbook readings recited by his favorite anime characters - functionality that does not exist in any app on the market - we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app.
## What it does
Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (only needs to be a few seconds long) and a PDF of a textbook and uses existing Deepfake technology to synthesize the dictation from the textbook using the users' favorite voice. The Deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time-domain via the Wave-RNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of his/her favorite voice reading the PDF contents!
## How we built it
We combined a number of different APIs and technologies to build this app. For leveraging scalable machine learning and intelligence compute, we heavily relied on the Google Cloud APIs -- including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing Deepfake code written for Python and Tensorflow (see Github repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating.
## Challenges we ran into
Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front-end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate existing Deepfake/Voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning.
## Accomplishments that we're proud of
We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into usable app within hours.
## What we learned
We learned today that sometimes the seemingly simplest things (dealing with python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful. We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products.
## What's next for EduVoicer
EduVoicer still has a long way to go before it could gain users. Our first next step is to implementing functionality, possibly with some image segmentation techniques, to decide what parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan on ways of both increasing efficiency (time-wise) and scaling the app by splitting up PDFs into fragments, processing them in parallel, and returning the output to the user after collating individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by length of input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file.
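The planned splitting of PDFs into fragments for parallel voicing might start with simple word-boundary chunking, so each fragment stays under the voice-cloning algorithm's input-length limit and the outputs can be collated in order. A sketch, with an illustrative fragment size:

```python
# Split OCR'd text on word boundaries into fragments of at most max_chars,
# suitable for parallel text-to-speech followed by in-order collation.
def split_into_fragments(text, max_chars=200):
    fragments, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > max_chars and current:
            fragments.append(current)
            current = word
        else:
            current = candidate
    if current:
        fragments.append(current)
    return fragments

frags = split_into_fragments("lorem " * 100, max_chars=40)
print(all(len(f) <= 40 for f in frags))  # True
```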
|
losing
|
## Inspiration
According to the United Nations, in 2013 global forced displacement topped fifty million people for the first time since the Second World War. The largest groups are from Afghanistan, Syria, and Somalia. A large percentage of the people leaving these war-torn countries apply to be refugees, but sadly only a small percentage are accepted, both because of limits on how many people can be taken in and because of the extensive time it takes to be accepted as a refugee. The processing time for refugees to come to Canada can take up to fifty-two months, with multiple trips to a visa office that can be stationed countries away, interviews, and background checks.
As hackers, it is our moral obligation to extend our hand and provide solutions for those in need. Therefore, investing our time and resources into making RefyouFree a reality would substantially help the lives of individuals seeking refuge or asylum. The many individuals experiencing the hardship of leaving everything behind in hopes of a better future need all the assistance they can get. Children brought up in war-torn countries know only war and believe that their reason for being alive is to either run away from it or join it. There is so much more to life than war and hardship, and RefYouFree will support refugees while they find a better reason for being alive. With this mobile application, individuals have a way to message their families, call them, find refuge, and get real-time updates on what is happening around them.
## What it does
The purpose of RefyouFree is to provide a faster, more convenient way for those in need to apply to countries as refugees and start a better life. It will be a web and mobile application for iOS.
## How we built it
The iOS app was built in Xcode using Objective-C and Swift, with the Sinch API. For the web app, AWS servers run Ruby on Rails, with a front end written in HTML and CSS.
## Challenges we ran into
A challenge the team ran into was figuring out how many refugees have access to smartphones, a computer, or the internet. After a bit of research we learned that in Syria there are 87 phones per 100 people. We hope that people without access to these resources can go to immigration offices, airports, or seaports, where they could apply and use the app.
## Accomplishments that I'm proud of
Getting the code to run without an error is always an accomplishment.
## What I learned
Teamwork is key. Without such an amazing and dedicated team, I do not believe we could have gotten so far. We come from different places and did not know each other until the hackathon, but we were able to put our heads together and make it work! The iOS developers learned a ton about API integration as well as Swift, and the web developers learned a lot about server-side development, the backend, the frontend, and Ruby on Rails.
## What's next for RefyouFree
Work on this project will continue right after the hackathon. We have messaged various United Nations members, along with members of the Canadian immigration office, to see whether we would be allowed to pursue this idea. Although the team only met this weekend, it has great chemistry and a strong work ethic.
|
## Inspiration
One of the biggest challenges faced by families in war-affected countries is receiving financial support from their family members abroad. High transaction fees, a lack of alternatives, and a lack of transparency all contribute to this problem, leaving families struggling to make ends meet.
According to the World Bank, the **average cost of sending remittances to low-income countries is a striking 7% of the amount sent**. For conflict-affected families, a 7% transaction fee can mean the difference between putting food on the table and going hungry for days. The truth is that the livelihoods of those left behind vitally depend on remittance transfers. Remittances are of central importance for restoring stability to families in post-conflict countries. At Dispatch, we are committed to changing the lives of war-stricken communities. Our app allows families to receive money from their loved ones without having to worry about the financial barriers that previously stood in their way.
However, the problem is far larger. Economically, over **$20 billion** has been sent back and forth in the United States this year, and we are barely even two months in. There are more than 89 million migrants in the United States itself. In a hugely untapped market that cares little about its customers and is dominated by exploitative financial institutions, we provide the go-to technology-empowered alternative that lets users help their families and friends around the world. We provide a globalized, one-stop shop for sending money across the world.
*Simply put, we are the iPhone of a remittance industry that uses landlines.*
## What problems exist
1. **High cost, mistrust and inefficiency**: Traditional remittance services often charge high fees for their services, which significantly reduces the amount of money that the recipient receives. **A report by the International Fund for Agricultural Development (IFAD) found that high costs of remittance lead to a loss of $25 billion every year for developing countries**. Additionally, they don’t provide clear information on exchange rate and fees, which leads to mistrust among users. Remittance services tend to have an upper limit on how much one can send per transaction, and they end up leading to security issues once money has been sent over. Lastly, these agencies take days to acknowledge, process, and implement a certain transaction, making immediate transfers intractable.
2. **Zero alternatives = exploitation**: It’s also important to note that very few traditional remittance services are offered in countries affected by war. Remittance services tend not to operate in these regions. With extremely limited options, families are left with no option but to accept the high fees and poor exchange rates by these agencies. This isn’t unique to war stricken countries. This is a huge problem in developing countries. Due to the high fees associated with traditional remittance services, many families in developing countries are unable to fully rely on remittance alone to support themselves. As a result, they may turn to alternative financial options that can be exploitative and dangerous. One such alternative is the use of loan sharks, who offer quick loans with exorbitant interest rates, often trapping borrowers in a cycle of debt.
## How we improve the status quo
**We are a mobile application that provides a low-cost, transparent and safe way to remit money. With every transaction made through Dispatch, our users are making a tangible difference in the lives of their loved ones.**
1. **ZERO Transaction fees**: Instead of charging a percentage-based commission fee, we charge a subscription fee per month. This has a number of advantages. Foremost, it offers a cost effective solution for families because it remains the same regardless of the transfer amount. This also makes the process transparent and simpler as the total cost of the transaction is clear upfront.
2. **Simplifying the process**: Due to the complexity of the current remittance process, migrants may find themselves vulnerable to exploitative offers from alternative providers. This is because they don’t understand the details and risks associated with these alternatives. On our app, we provide clear and concise information that guides users through the entire process. A big way of simplifying the process is to provide multilingual support. This not only removes barriers for immigrants, but also allows them to fully understand what’s happening without being taken advantage of.
3. **Transparency & Security**
* Clearly stated and understood fees and exchange rates - no hidden fees
* Real-time exchange rate updates
* Remittance tracker
* Detailed transaction receipts
* Secure user data (Users can only pay when requested to)
4. **Instant notifications and Auto-Payment**
* Reminders for bill payments and insurance renewals
* Can auto-pay bills (confirmation is required each time), so the user remains worry-free and does not need an external calendar to manage finances
* Notifications for when new requests have been made by the remitter
## How we built it
1. **Backend**
* Our backend is built on an intricate [relational database](http://shorturl.at/fJTX2) between users, their transactions and the 170 currencies and their exchange rates
* We use the robust Checkbook API as the framework to make payments and keep track of the invoices of all payments run through Dispatch
2. **Frontend**
* We used the handy and intuitive Retool environment to develop a rudimentary app prototype, as demonstrated in our [video demo](https://youtu.be/rNj2Ts6ghgA)
* It implements most of the core functionality of our app and makes use of our functional MySQL database to create a working app
* The Figma designs represent our vision of what the end product UI would look like
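A minimal sketch of the relational model described above, using Python's built-in SQLite in place of our MySQL setup (table and column names are illustrative assumptions, not the production schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE currencies (
    code TEXT PRIMARY KEY,   -- e.g. 'INR', one row per supported currency
    usd_rate REAL NOT NULL   -- units of this currency per 1 USD
);
CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    sender_id INTEGER NOT NULL REFERENCES users(id),
    recipient_id INTEGER NOT NULL REFERENCES users(id),
    currency TEXT NOT NULL REFERENCES currencies(code),
    amount_usd REAL NOT NULL
);
""")
conn.execute("INSERT INTO users VALUES (1, 'Asha'), (2, 'Ravi')")
conn.execute("INSERT INTO currencies VALUES ('INR', 82.07)")
conn.execute("INSERT INTO transactions VALUES (1, 1, 2, 'INR', 100.0)")

# Amount received in local currency, at the stored exchange rate.
row = conn.execute("""
    SELECT t.amount_usd * c.usd_rate
    FROM transactions t JOIN currencies c ON c.code = t.currency
""").fetchone()
print(round(row[0], 2))  # 8207.0
```

The join at the end shows how a stored exchange rate converts a USD amount into the recipient's currency.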
## Challenges we ran into
1. International money transfer regulations
2. Government restrictions on currencies /embargos
3. Losing money initially with our business model
## Accomplishments that we're proud of
1. Develop an idea with immense social potential
2. Integrating different APIs into one comprehensive user interface
3. Coming from a grand total of no hackathon experience, we were able to build a functioning prototype of our application.
4. Team bonding – jamming to Bollywood music
## What we learned
1. How to use Retool and Checkbook APIs
2. How to deploy a full fledged mobile application
3. How to use MySQL
4. Understanding the challenges faced by migrants
5. Gained insight into how fintech can solve social issues
## What's next for Dispatch
The primary goal of Dispatch is to empower war-affected families by providing them with a cost-effective and reliable way to receive funds from their loved ones living abroad. However, our vision extends beyond this demographic, as we believe that everyone should have access to an affordable, safe, and simple way to send money abroad.
We hope to continuously innovate and improve our app. We hope to utilize blockchain technology to make transactions more secure by providing a decentralized and tamper proof ledger. By leveraging emerging technologies such as blockchain, we aim to create a cutting-edge platform that offers the highest level of security, transparency and efficiency.
Ultimately, our goal is to create a world where sending money abroad is simple, affordable, and accessible to everyone. **Through our commitment to innovation, transparency, and customer-centricity, we believe that we can achieve this vision and make a positive impact on the lives of millions of people worldwide.**
## Ethics
Banks are structurally disincentivized to help make payments seamless for migrants. We read through various research reports, with Global Migration Group’s 2013 Report on the “Exploitation and abuse of international migrants, particularly those in an irregular situation: a human rights approach” to further understand the violation of present ethical constructs.
As an example, consider how bad a 3% transaction fee (using any traditional banking service) can be for an Indian student whose parents pay Stanford tuition:
3% of $82,162 = $2,464.86 (USD)
= 202,291.06 INR [1 USD = 82.07 INR]
That is, it costs a family over 200,000 extra Indian rupees to pay Stanford tuition via a traditional banking service. Out of 1.4 billion Indians, this is greater than the average annual income for an Indian. The transaction fees alone can devastate a home.
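The arithmetic above is simple enough to check directly in a few lines (the tuition, fee rate, and exchange rate are the ones quoted in the text):

```python
TUITION_USD = 82_162
FEE_RATE = 0.03      # 3% traditional transaction fee
USD_TO_INR = 82.07   # quoted rate: 1 USD = 82.07 INR

fee_usd = TUITION_USD * FEE_RATE
fee_inr = fee_usd * USD_TO_INR
print(round(fee_usd, 2))  # 2464.86
print(round(fee_inr, 2))  # 202291.06
```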
Clearly, we don’t destroy homes, hearts, or families. We build them, for everyone without exception.
We considered the current ethical issues that arise with traditional banking or online payment systems. The following ethical issues arise with creating exclusive, expensive, and exploitative payment services for international transfers:
1. Banks earn significant revenue from remittance payments, and any effort to make the process more seamless could potentially reduce their profits.
2. Banks may view migrant populations as a high-risk group for financial fraud, leading them to prioritize security over convenience in remittance payments
3. Remittance payments are often made to developing countries with less developed financial infrastructure, making it more difficult and costly for banks to facilitate these transactions
4. Many banks are large, bureaucratic organizations that may not be agile enough to implement new technologies or processes that could streamline remittance payments.
5. Banks may be more focused on attracting higher-value customers with more complex financial needs, rather than catering to the needs of lower-income migrants.
6. The regulatory environment surrounding remittance payments can be complex and burdensome, discouraging banks from investing in this area.
7. Banks do not have a strong incentive to compete on price in the remittance market, since many migrants are willing to pay high fees to ensure their money reaches its intended recipient.
8. Banks may not have sufficient data on the needs and preferences of migrant populations, making it difficult for them to design effective remittance products and services.
9. Banks may not see remittance payments as a strategic priority, given that they are only a small part of their overall business.
10. Banks may face cultural and linguistic barriers in effectively communicating with migrant populations, which could make it difficult for them to understand and respond to their needs.
Collectively, as remittances lower, we lose out on the effects of trickle-down economics in developing countries, detrimentally harming how they operate and even stunting their growth in some cases. For the above reasons, our app could not be a traditional online banking system.
We feel there is an ethical responsibility to help other countries benefit from remittances. Crucially, we feel there is an ethical responsibility to help socioeconomically marginalized communities help their loved ones. Hence, we wanted to use technology as a means to include, not exclude and built an app that we hope could be versatile and inclusive to the needs of our user. We needed our app design to be helpful towards our user - allowing the user to gain all the necessary information and make bill payments easier to do across the world. We carefully chose product design elements that were not wordy but simple and clear and provided clear action items that indicated what needed to be done. However, we anticipated the following ethical issues arising from our implementation :
1. Data privacy: Remittance payment apps collect a significant amount of personal data from users. It is essential to ensure that the data is used ethically and is adequately protected.
2. Security: Security is paramount in remittance payment apps. Vulnerabilities or data breaches could lead to significant financial losses or even identity theft. Fast transfers can often lead to mismanagement in accounting.
3. Accessibility: Migrants who may be unfamiliar with technology or may not have access to smartphones or internet may be left out of such services. This raises ethical questions around fairness and equity.
4. Transparency: It is important to provide transparent information to users about the costs and fees associated with remittance payment apps, including exchange rates, transfer fees, and any other charges. We even provide currency optimization features, that allows users to leverage low/high exchange rates so that users can save money whenever possible.
5. Inclusivity: Remittance payment apps should be designed to be accessible to all users, regardless of their level of education, language, or ability. This raises ethical questions around inclusivity and fairness.
6. Financial education: Remittance payment apps could provide opportunities for financial education for migrants. It is important to ensure that the app provides the necessary education and resources to enable users to make informed financial decisions.
Conscious of these ethical issues, we came up with the following solutions to provide a more principally robust app:
1. Data privacy: We collect minimal user data. The only information we care about is who sends and gets the money. No extra information is ever asked for. For undocumented immigrants this often becomes a concern and they cannot benefit from remittances. The fact that you can store the money within the app itself means that you don’t need to go through the bank's red-tape just to sustain yourself.
2. Security: We only send user data once the user posts a request from the sender. We prevent spam by only allowing contacts to send those requests to you. This prevents the user from sending large amounts of money to the wrong person. We made fast payments only possible in highly urgent queries, allowing for a priority based execution of transactions.
3. Accessibility: Beyond simple button clicks, we don’t require migrants to have a detailed or nuanced knowledge of how these applications work. We simplify the user interface with helpful widgets and useful cautionary warnings so the user gets questions answered even before asking them.
4. Transparency: With live exchange rate updates, simple reminders about what to pay when and to who, we make sure there is no secret we keep. For migrants, the assurance that they aren’t being “cheated” is crucial to build a trusted user base and they deserve to have full and clearly presented information about where their money is going.
5. Inclusivity: We provide multilingual preferences for our users, which means that they always end up with the clearest presentation of their finances and can understand what needs to be done without getting tangled up within complex and unnecessarily complicated “terms and conditions”.
6. Financial education: We provide accessible support resources sponsored by our local partners on how to best get accustomed to a new financial system and understand complex things like insurance and healthcare.
Before further implementation, we need to robustly test how secure and spam-free our payment system could be. Having a secure payment system is a high ethical priority for us.
Overall, we felt there were a number of huge ethical concerns that we needed to solve as part of our product and design implementation. We felt we were able to mitigate a considerable percentage of these concerns to provide a more inclusive, trustworthy, and accessible product to marginalized communities and immigrants across the world.
|
## Inspiration
I took inspiration from an MLH rubber ducky sticker that I found on my desk. It's better to make *something* rather than nothing, so why not make a cool looking website? There's no user authentication or statistical analysis to it, it's just five rubber duckies vibing out in the neverending vacuum of space. Wouldn't you want to do the same?
## What it does
Sits there and looks pretty (cute).
## How I built it + challenges + accomplishments
I drew everything on my own and used some nifty HTML/CSS to put it all together in the end. It was a pretty simple idea, so the execution reflected it. There were next to no challenges/obstacles save for the obligatory CSS frustration when something doesn't work exactly like you want to. In the end, however, I'm proud that I was able to handle everything from the design to the execution with little to no hassle and finish up within only a few hours. All in all, it was pretty cool.
|
winning
|
## Inspiration: We're trying to get involved in the AI chat-bot craze and pull together cool pieces of technology -> including Google Cloud for our backend, Microsoft Cognitive Services and Facebook Messenger API
## What it does: Have a look - message Black Box on Facebook and find out!
## How we built it: SO MUCH PYTHON
## Challenges we ran into: State machines (i.e. mapping out the whole user flow and making it as seamless as possible) and NLP training
## Accomplishments that we're proud of: Working NLP, Many API integrations including Eventful and Zapato
## What we learned
## What's next for BlackBox: Integration with google calendar - and movement towards a more general interactive calendar application. Its an assistant that will actively engage with you to try and get your tasks/events/other parts of your life managed. This has a lot of potential - but for the sake of the hackathon, we thought we'd try do it on a topic that's more fun (and of course, I'm sure quite a few us can benefit from it's advice :) )
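The state-machine approach mentioned above can be sketched as a transition table (the states and intents here are simplified illustrations, not Black Box's actual flow):

```python
# Each state maps recognized user intents to the next state.
TRANSITIONS = {
    "start": {"greet": "ask_city"},
    "ask_city": {"give_city": "suggest_events"},
    "suggest_events": {"accept": "done", "reject": "ask_city"},
}

def step(state: str, intent: str) -> str:
    # Stay in the same state on unrecognized input,
    # so the bot can re-prompt instead of crashing.
    return TRANSITIONS.get(state, {}).get(intent, state)

state = "start"
for intent in ["greet", "give_city", "reject", "give_city", "accept"]:
    state = step(state, intent)
print(state)  # done
```

Keeping the flow in a table like this makes it easy to map out the whole user journey and spot dead ends before wiring up the NLP layer.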
|
## Inspiration
One of our teammate’s grandfathers suffers from diabetic retinopathy, which causes severe vision loss.
Looking on a broader scale, over 2.2 billion people suffer from near or distant vision impairment worldwide. After examining the issue closer, it can be confirmed that the issue disproportionately affects people over the age of 50 years old. We wanted to create a solution that would help them navigate the complex world independently.
## What it does
### Object Identification:
Utilizes advanced computer vision to identify and describe objects in the user's surroundings, providing real-time audio feedback.
### Facial Recognition:
It employs machine learning for facial recognition, enabling users to recognize and remember familiar faces, and fostering a deeper connection with their environment.
### Interactive Question Answering:
Acts as an on-demand information resource, allowing users to ask questions and receive accurate answers, covering a wide range of topics.
### Voice Commands:
Features a user-friendly voice command system accessible to all, facilitating seamless interaction with the AI assistant: Sierra.
## How we built it
* Python
* OpenCV
* GCP & Firebase
* Google Maps API, Google Pyttsx3, Google’s VERTEX AI Toolkit (removed later due to inefficiency)
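The voice-command system could be sketched as a simple keyword router over transcribed speech (the command names and phrases are illustrative assumptions, not Sierra's actual grammar):

```python
COMMANDS = {
    "describe": ["what is in front of me", "describe", "what do you see"],
    "identify": ["who is this", "recognize"],
    "navigate": ["take me to", "directions"],
}

def route(transcript: str) -> str:
    # Match the first known phrase appearing in the transcript;
    # fall back to general question answering otherwise.
    text = transcript.lower()
    for action, phrases in COMMANDS.items():
        if any(p in text for p in phrases):
            return action
    return "answer_question"

print(route("Sierra, what do you see?"))  # describe
```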
## Challenges we ran into
* Slow response times with Google products, resulting in the replacement of some services (e.g. pyttsx3 was replaced by a faster, offline speech model from Vosk)
* Due to the limited hardware capabilities of our low-end laptops, there is some lag and slowness in the software, with average response times of 7-8 seconds
* Due to strict security measures and product design, we initially lacked flexibility in working with the Maps API; after working together and viewing some tutorials, we learned how to integrate Google Maps into the dashboard
## Accomplishments that we're proud of
We are proud that by the end of the hacking period we had a working prototype and software that integrated properly. The AI assistant, Sierra, can accurately recognize faces as well as detect settings in the real world. Although there were challenges along the way, the immense effort we put in paid off.
## What we learned
* How to work with a variety of Google Cloud-based tools and how to overcome potential challenges they pose to beginner users.
* How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations.
* How to create Docker containers to deploy Google Cloud-based Flask applications to host our dashboard.
* How to develop Firebase Cloud Functions to implement cron jobs. We tried to develop a cron job that would send alerts to the user.
## What's next for Saight
### Optimizing the Response Time
Currently, the hardware limitations of our computers create a large delay in the assistant's response times. By improving the efficiency of the models used, we can improve the user experience in fast-paced environments.
### Testing Various Materials for the Mount
The physical prototype of the mount was mainly a proof-of-concept for the idea. In the future, we can conduct research and testing on various materials to find out which ones are most preferred by users. Factors such as density, cost and durability will all play a role in this decision.
|
## Inspiration
In the era of information, where innovation and creation are at the forefront, everyone should have the opportunity to shape and innovate, yet countless individuals with disabilities find the doorway to programming out of reach. Recognizing the pressing need for inclusivity, we were inspired to create a tool that could bridge the gap, leading to the birth of Joynts.
## What it does
Joynts is an inclusive solution designed to usher individuals with physical disabilities into the world of programming and software development. Through a specialized glove that measures finger movements, users can translate distinct hand gestures into specific programming commands. These movements are then processed and interpreted by our dedicated application, paving the way for real-time code creation and execution.
## How we built it
Beginning with the vision of making coding more accessible, we embedded accelerometers into a glove. These sensors detect finger movements and translate them into Morse code. Paired with a microcontroller, the signals are processed and sent to our custom Python IDE, which translates the Morse code into executable code, making the coding experience fluid and intuitive.
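A minimal sketch of the Morse-decoding step (the mapping from glove gestures to dots and dashes is assumed to have already happened upstream, and the letter/word separators are our illustrative convention):

```python
MORSE = {
    ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
    "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j",
    "-.-": "k", ".-..": "l", "--": "m", "-.": "n", "---": "o",
    ".--.": "p", "--.-": "q", ".-.": "r", "...": "s", "-": "t",
    "..-": "u", "...-": "v", ".--": "w", "-..-": "x", "-.--": "y",
    "--..": "z",
}

def decode(signal: str) -> str:
    # Letters are separated by spaces, words by " / ".
    words = signal.split(" / ")
    return " ".join("".join(MORSE[ch] for ch in w.split()) for w in words)

print(decode(".--. .-. .. -. -"))  # print
```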
## Challenges we ran into
One of the most formidable challenges we faced during the development of Joynts was the intricate task of synchronizing two distinct code processes. Creating a system where one code continuously collects data from the accelerometers in real-time, while another concurrently renders and updates our application, proved to be difficult to implement. Striking the perfect balance between these two operations was crucial for ensuring a fluid user experience, and it demanded rigorous testing, optimization, and countless iterations.
## Accomplishments that we're proud of
Beyond the technical achievement of Joynts, what truly fills us with pride is the potential impact on the community. By breaking down barriers in programming, we've opened a world of opportunities for countless individuals, empowering them to innovate, create, and express themselves digitally, irrespective of physical challenges.
## What we learned
The journey of creating Joynts taught us the profound value of inclusive design. We delved deep into the challenges faced by individuals with disabilities, recognizing that technology and innovation can make this world more inclusive.
## What's next for Joynts
We envision further refining the user experience, expanding language compatibility beyond Python, and integrating adaptive AI for more personalized coding experiences and improved signal processing. We're also excited about potential collaborations with organizations and institutions to bring Joynts to as many deserving hands as possible.
|
winning
|
## Inspiration
We wanted to bring augmented reality technologies to an unexpected space, challenging ourselves to think outside of the box. We were looking for somewhere user experience could be dramatically improved.
## What it does
Our AR mobile application recognizes DocuSign's QR codes and allows you to either sign directly or generate an automated signature without ever leaving your phone.
## How we built it
We built it with our awesome brains and
## Challenges we ran into
Implementing the given API and other back-end technologies to actually authenticate and submit the process. We ran into challenges when trying to integrate the digital world with the technical world. There was not much documentation online when it came to merging the two platforms. We also ran into challenges with image recognition of the QR code because AR depends on the environment and lighting.
## Accomplishments that we're proud of
We got an MVP out of the challenge, did a lot of collaboration and brainstorming that sparked amazing ideas, and spoke to every sponsor to learn about their company and challenges.
## What we learned
APIs with little documentation and integration with new technologies can be very challenging. Pay attention to details, because it's the small details that will cost you hours of frustration. Through further research we learned about the legalities of digital signatures, which can sometimes be a pain point for companies that use eSign services like DocuSign.
## What's next for Project 1 AM
To present to all the judges and, hopefully, have the idea bought into and implemented to make customers' lives easier.
|
## Inspiration
Have you ever looked back at your year, your month, or even your week and felt like it passed in a blur without you being sure of what really happened? Existing journal/diary apps are often too free-form to become a recurring habit, and it can feel tedious to sit down and write with pen and paper. This is where TuneIn comes in. TuneIn is a mindfulness app that helps everyday moments become more meaningful and fun.
## What it does
Each day you are given a daily prompt that helps guide you in recognizing the beauty of life. You may be asked, “What was the highlight of your day?” or “What song did you listen to that made you feel like you’re in a movie?”. These questions encourage you to actively think and be mindful of the little things that happen each and every day. We incorporated a streak mechanism that motivates you to fill out prompts on a regular basis. Visiting the reflections tab allows you to view all of your previous answers — it’s a wonderful way to look back at all of the small but worthwhile moments that make life beautiful.
## How we built it
This is an iOS app designed in Figma and built with React Native Expo, so it can be made Android-compatible in the future. Each entry is managed and stored in a MongoDB database, allowing us to handle a high volume of data.
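The streak mechanism can be sketched as a pure function over entry dates (the real app stores entries in MongoDB; here we simply pass dates in):

```python
from datetime import date, timedelta

def current_streak(entry_dates: list[date], today: date) -> int:
    # Count consecutive days with an entry, ending today (or
    # yesterday, so an unfinished day doesn't break the streak).
    days = set(entry_dates)
    cursor = today if today in days else today - timedelta(days=1)
    streak = 0
    while cursor in days:
        streak += 1
        cursor -= timedelta(days=1)
    return streak

today = date(2023, 6, 10)
entries = [date(2023, 6, 8), date(2023, 6, 9), date(2023, 6, 10)]
print(current_streak(entries, today))  # 3
```

Starting from yesterday when today has no entry keeps an in-progress day from resetting the streak.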
|
## Inspiration
In the exciting world of hackathons, where innovation meets determination, **participants like ourselves often ask "Has my idea been done before?"** While originality is the cornerstone of innovation, there's a broader horizon to explore - the evolution of an existing concept. Through our AI-driven platform, hackers can gain insights into the uniqueness of their ideas. By identifying gaps or exploring similar projects' functionalities, participants can aim to refine, iterate, or even revolutionize existing concepts, ensuring that their projects truly stand out.
For **judges, the evaluation process is daunting.** With a multitude of projects to review in a short time frame, ensuring an impartial and comprehensive assessment can become extremely challenging. The introduction of an AI tool doesn't aim to replace the human element but rather to enhance it. By swiftly and objectively analyzing projects based on quantifiable metrics, judges can allocate more time to delve into the intricacies, stories, and passion driving each team.
## What it does
This project is a smart tool designed for hackathons. It measures the similarity and originality of new ideas against similar projects, if any exist, using web scraping and OpenAI to gather data and draw conclusions.
**For hackers:**
* **Idea Validation:** Before diving deep into development, participants can ascertain the uniqueness of their concept, ensuring they're genuinely breaking new ground.
* **Inspiration:** By observing similar projects, hackers can draw inspiration, identifying ways to enhance or diversify their own innovations.
**For judges:**
* **Objective Assessment:** By inputting a project's Devpost URL, judges can swiftly gauge its novelty, benefiting from AI-generated metrics that benchmark it against historical data.
* **Informed Decisions:** With insights on a project's originality at their fingertips, judges can make more balanced and data-backed evaluations, appreciating true innovation.
## How we built it
**Frontend:** Developed using React JS, our interface is user-friendly, allowing for easy input of ideas or Devpost URLs.
**Web Scraper:** Upon input, our web scraper dives into the content, extracting essential information that aids in generating objective metrics.
**Keyword Extraction with ChatGPT:** OpenAI's ChatGPT is used to detect keywords in Devpost project descriptions, which are used to capture each project's essence.
**Project Similarity Search:** Using the extracted keywords, we query Devpost for similar projects, which provides us with a curated list based on relevance.
**Comparison & Analysis:** Each incoming project is meticulously compared with the list of similar ones. This analysis is multi-faceted, examining the number of similar projects and the depth of their similarities.
**Result Compilation:** Post-analysis, we present users with an 'originality score' alongside explanations for the determined metrics, maintaining transparency.
**Output Display:** All insights and metrics are neatly organized and presented on our frontend website for easy consumption.
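The comparison and scoring steps above could be approximated with a keyword-overlap metric; Jaccard similarity here is an illustrative stand-in for our actual analysis, and the keyword sets are made up:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    # Ratio of shared keywords to total distinct keywords.
    return len(a & b) / len(a | b) if a | b else 0.0

def originality_score(project: set[str], others: list[set[str]]) -> float:
    # 1.0 means no overlap with any similar prior project.
    if not others:
        return 1.0
    return 1.0 - max(jaccard(project, o) for o in others)

mine = {"ar", "qr", "signature"}
prior = [{"ar", "game"}, {"qr", "signature", "payments"}]
print(round(originality_score(mine, prior), 2))  # 0.5
```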
## Challenges we ran into
**Metric Prioritization:** Given the time-restricted nature of a hackathon, one of the first challenges was deciding which metrics to prioritize. Striking a balance between data points that were meaningful and those that were feasible to attain was crucial.
**Algorithmic Efficiency:** We struggled with concerns over time complexity, especially with potential recursive scenarios. Optimizing our algorithms, prompt engineering, and simplifying architecture was the solution.
*Finding a good spot to sleep.*
## Accomplishments that we're proud of
We took immense pride in developing a solution directly tailored for an environment we're deeply immersed in. Crafting a tool for hackathons while participating in one, we felt, showcases our commitment to enhancing such events. Furthermore, we not only conceptualized and executed the project, but also established a robust framework and thoughtfully designed architecture from scratch.
Another accomplishment was our team's synergy. We made efforts to ensure alignment and dedicated time to collectively invest in and champion the idea, ensuring everyone was on the same page and equally excited and comfortable with it. This unified vision and collaboration were instrumental in bringing HackAnalyzer to life.
## What we learned
We delved into the intricacies of full-stack development, gathering hands-on experience with databases, backend and frontend development, as well as the integration of AI. Navigating through API calls and using web scraping were also some key takeaways. Prompt Engineering taught us to meticulously balance the trade-offs when leveraging AI, especially when juggling cost, time, and efficiency considerations.
## What's next for HackAnalyzer
We aim to amplify the metrics derived from the Devpost data while enhancing the search function's efficiency. Our secondary and long-term objective is to transition the application to a mobile platform. By enabling students to generate a QR code, judges can swiftly access HackAnalyzer data, ensuring a more streamlined and effective evaluation process.
|
## Inspiration
The inspiration came from the desire to learn about sophisticated software without the massive financial burden that comes with premium hardware in drones. So the question arose, what if the software could be utilized on any drone and made it available open source.
## What it does
The software allows any drone to track facial movements and hand gestures, flying to keep the user in the center of the frame; this can be utilized at many different levels! We aim for our technology to enhance the experience of photographers through hands-off control, and to lower the barrier to entry for drones by making them simpler to use.
## How we built it
We mainly used Python with the help of libraries and frameworks like PyTorch, YOLOv8, MediaPipe, OpenCV, tkinter, PIL, and DJITello.
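As an illustration of the gesture side, here is a hedged sketch of how hand landmarks (such as the 21 points per hand that MediaPipe Hands returns) could be mapped to simple drone commands; the specific command mapping is an assumption for demonstration, not Open Droid's exact logic.

```python
# Minimal sketch of a gesture-classification step, assuming hand
# landmarks have already been extracted (e.g. by MediaPipe Hands, which
# returns 21 (x, y) points per hand; the indices below follow its layout).

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertip indices
FINGER_PIPS = [6, 10, 14, 18]   # corresponding middle-joint indices

def count_extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) tuples with y growing downward.
    A finger counts as extended when its tip sits above its middle joint."""
    return sum(
        landmarks[tip][1] < landmarks[pip][1]
        for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
    )

def gesture_to_command(landmarks):
    """Map a simple finger count to an illustrative drone command."""
    count = count_extended_fingers(landmarks)
    return {0: "land", 2: "follow", 4: "hover"}.get(count, "no-op")
```

A fist (no extended fingers) would land the drone, while an open palm would tell it to hover; richer gestures would hang off the same landmark geometry.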
## Challenges we ran into
While implementing hand-gesture commands, we had a setback and faced a problem we have yet to solve. The integration between face recognition, hand recognition, and the drone functions was harder than we anticipated, since it had a lot of moving parts that we needed to connect.
None of us had any UI experience, so creating the interface was a challenge too.
## Accomplishments that we're proud of
We have implemented pivot-tracking and move-tracking features. Pivot-tracking keeps the drone stationary while it turns on its axis to follow the user. Move-tracking is basically having your drone on a hands-free leash (it follows you anywhere you go)!
We implemented accurate hand gesture recognition, although we have yet to implement new functions attached to the gestures.
A lot of the framework was brand new information to us but we were still able to learn it and create a functional software.
## What we learned
Understanding project scope and what can be done in limited time was an important lesson for us that we will definitely take moving forward.
We learnt a lot of new frameworks like MediaPipe, YOLOv8, DJITello, tkinter, and PIL.
## What's next for Open Droid
Adding functions attached to the hand gestures, adding a slingshot feature, etc.
Since our hand recognition software can detect two hands, the left hand will control the mode of the drone (low fly, high fly, slow fly, fast fly, default) and the right hand will control functions (go back, come closer, circle around me, slingshot, land on my hand, etc.).
After accomplishing these goals we would like to make the software more user friendly and open source!
|
## Inspiration
We were inspired by current research in EEG and brain-controlled devices, and wanted to see how far we could push it as an input method.
## What it does
Our solution is a quick and intuitive method to control drones, eliminating bulky controls. You can control a drone completely with facial gestures and movements, requiring very little training to operate. It constantly records location/temperature/humidity and identifies objects/people in a video stream. We see this being useful in rescue missions and in helping people easily explore new environments - without requiring an experienced drone controller, and without the danger of being in the environment.
## How we built it
We integrated the Muse headband with the Tello drone, drawing on EEG research. We apply signal processing paradigms onto input EEG waveforms to associate different waveforms with unique facial gestures. We then extract a movement "intention" and move the drone in that direction. We focused on making the controls intuitive and immersive; we wanted to make it feel like an extension of the human user.
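As a sketch of the signal-processing idea, the following estimates band power in an EEG channel and thresholds it to produce a movement "intention". The alpha-band choice, sampling rate, and threshold are illustrative assumptions, not the exact pipeline EagleEye used.

```python
# Illustrative sketch: estimate band power in one EEG channel via FFT
# and threshold it. The 8-12 Hz band, 256 Hz sampling rate, and
# threshold value are assumptions for demonstration.
import numpy as np

def band_power(signal, fs, low, high):
    """Mean power of `signal` (1-D array) within [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

def detect_intention(signal, fs=256, threshold=1e4):
    """Return True when alpha-band (8-12 Hz) power crosses the threshold."""
    return band_power(signal, fs, 8, 12) > threshold
```

In practice each facial gesture produces a distinct waveform signature, so a real pipeline would compare power across several bands rather than one fixed threshold.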
## Challenges we ran into
We were unable to get the video feed working simultaneously with the drone controller and signal processing units. We attempted a multi-threaded approach by running 3 processes in parallel, but were unable to get it working properly. Only 2 processes would run simultaneously :(
## Accomplishments that we're proud of
We were able to extract movement intention with relatively high fidelity!
## What we learned
We experienced a rigorous design and prototyping process in 24 hours, and learned plenty about how we work as a team!
## What's next for EagleEye
We hope to integrate machine learning algorithms to tailor the signal processing to variations in EEG waveform behaviour.
|
## Inspiration
At work, conference calls usually involve multiple people on one side using the same microphone. It can be hard to know who's speaking and what their role is. Furthermore, some details of the meeting can be lost, and it's tedious to note everything down.
## What it does
Our app distinguishes/recognizes speakers, shows who's speaking and automatically transcribe the meeting in real time. When the meeting ends, our app can also export the meeting minutes (log of who said what at what time).
**Features**:
* display who's currently speaking using speaker recognition
* transcribe what's being said by who like a chat application
* create and train a new speaker profile within 15 seconds
* stream transcription to services such as `Slack`
* export transcription to cloud storage such as `Google Sheets`
## How I built it
* Microsoft Speech Recognition API
* Microsoft Speech to Text API
* Google Cloud Speech to Text API
* Google Sheets API
* Slack API
* stdlib for integrating services for the backend such as Slack and SMS
* NodeJS with Express for the backend
* Vue for the frontend
* Python scripts for accessing Microsoft's APIs
* Love ❤️
## Challenges I ran into
Generating the audio file in the right format for Microsoft's API was tougher than expected; seems like Mac's proprietary microphone isn't able to format the audio in the way Microsoft wants it.
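A small stdlib check like the following could have caught the mismatch before uploading. The expected parameters (16 kHz, mono, 16-bit PCM) reflect commonly documented speech-API requirements and should be verified against the current Microsoft docs.

```python
# Sketch of an audio-format sanity check: speech APIs commonly expect
# 16 kHz, mono, 16-bit PCM WAV, while macOS recordings often come out
# at 44.1 kHz stereo. Flag mismatches before uploading.
import wave

EXPECTED = {"framerate": 16000, "channels": 1, "sampwidth": 2}  # 2 bytes = 16-bit

def wav_format_problems(path):
    """Return a list of human-readable mismatches (empty means OK)."""
    problems = []
    with wave.open(path, "rb") as wf:
        if wf.getframerate() != EXPECTED["framerate"]:
            problems.append(f"sample rate {wf.getframerate()} != 16000")
        if wf.getnchannels() != EXPECTED["channels"]:
            problems.append(f"{wf.getnchannels()} channels, expected mono")
        if wf.getsampwidth() != EXPECTED["sampwidth"]:
            problems.append(f"{8 * wf.getsampwidth()}-bit, expected 16-bit")
    return problems
```

Running this on each recording before the upload step would turn a silent API rejection into an actionable error message.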
## Accomplishments that I'm proud of
* Learning how to use the APIs, Microsoft Azure, and sampling an audio input to a format the API needs.
* Finishing an app before the deadline.
## What I learned
Usage of many APIs, speech recording, and integration of multiple services.
## What's next for Who Said What?
A year long worldwide tour to show.
|
## Inspiration
We wanted to use a RaspberryPi in an innovative way. We also wanted promote a healthy lifestyle.
## What it does
Detects what you place in a fridge and predicts when the food will go bad.
## How I built it
A Raspberry Pi camera and a Python Flask web server application.
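The spoilage prediction could be sketched as a simple shelf-life lookup keyed on the item the camera recognizes. The table below is a toy assumption for illustration, not nutritional guidance.

```python
# Toy sketch of the spoilage prediction, assuming a small shelf-life
# lookup table (the day counts are illustrative assumptions).
from datetime import date, timedelta

SHELF_LIFE_DAYS = {"milk": 7, "lettuce": 5, "eggs": 21, "bread": 4}

def predicted_expiry(item, added_on):
    """Date the item is expected to go bad; unknown items get a cautious 3 days."""
    return added_on + timedelta(days=SHELF_LIFE_DAYS.get(item, 3))
```

Reading barcodes and printed expiry dates, as planned below, would replace the default guess with real data.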
## Challenges I ran into
Getting OCR to work properly with the photos taken with the pi camera
## Accomplishments that I'm proud of
EVERYTHING
## What I learned
3D printing, RaspberryPi, Flask, Python, Javascript, Postmates API, HTML5/CSS3
## What's next for PiM: The Virtual Fridge Manager
Reading barcodes and expiry dates to get more accurate predictions
|
## Inspiration
One of our close friends is at risk of Alzheimer's. He learns different languages and engages his brain by learning various skills which will significantly decrease his chances of suffering from Alzheimer's later. Our game is convenient for people like him to keep the risks of being diagnosed with dementia at bay.
## What it does
In this game, a random LED pattern is displayed which the user is supposed to memorize. The user is supposed to use hand gestures to repeat the memorized pattern. If the user fails to memorize the correct pattern, the buzzer beeps.
## How we built it
We had two major components to our project; hardware and software. The hardware component of our project used an Arduino UNO, LED lights, a base shield, a Grove switch and a Grove gesture sensor. Our software side of the project used the Arduino IDE and GitHub. We have linked them in our project overview for your convenience.
## Challenges we ran into
Some of the major challenges we faced were storing data and making sure that the buzzer doesn't beep at the wrong time.
## Accomplishments that we're proud of
We were exploring new terrain in this hackathon with regard to developing a hardware project in combination with the Arduino IDE. We found that it was quite different in relation to the software/application programming we were used to, so we're very happy with the overall learning experience.
## What we learned
We learned how to apply our skillset in software and application development in a hardware setting. Primarily, this was our first experience working with Arduino, and we were able to use this opportunity at UofT to catch up to the learning curve.
## What's next for Evocalit
Future steps for our project look like revisiting the iteration cycles to clean up any repetitive inputs and incorporating more sensitive machine learning algorithms alongside the Grove sensors so as to maximize the accuracy and precision of the user inputs through computer vision.
|
## Summary
OrganSafe is a revolutionary web application that tackles the growing health & security problem of black marketing of donated organs. The verification of organ recipients leverages the Ethereum Blockchain to provide critical security and prevent improper allocation for such a pivotal resource.
## Inspiration
The [World Health Organization (WHO)](https://slate.com/business/2010/12/can-economists-make-the-system-for-organ-transplants-more-humane-and-efficient.html) estimates that one in every five kidneys transplanted per year comes from the black market. There is a significant demand for solving this problem which impacts thousands of people every year who are struggling to find a donor for a significantly need transplant. Modern [research](https://ieeexplore.ieee.org/document/8974526) has shown that blockchain validation of organ donation transactions can help reduce this problem and authenticate transactions to ensure that donated organs go to the right place!
## What it does
OrganSafe facilitates organ donations with authentication via the Ethereum Blockchain. Users start by registering on OrganSafe with their health information and desired donation, and then the application's algorithms automatically match users based on qualifying priority for available donations. Hospitals can easily track organ donations and record when recipients receive them.
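The matching step could be sketched as picking the highest-priority compatible recipient for each donated organ. The blood-type compatibility table and single priority field below are simplified assumptions for illustration, not medical allocation rules or OrganSafe's exact algorithm.

```python
# Hedged sketch of priority matching: pick the highest-priority
# compatible recipient for a donated organ. Compatibility and the
# priority field are simplified assumptions, not medical rules.

COMPATIBLE = {"O": {"O", "A", "B", "AB"}, "A": {"A", "AB"},
              "B": {"B", "AB"}, "AB": {"AB"}}  # donor type -> allowed recipients

def match_organ(donor_blood_type, recipients):
    """recipients: dicts with 'blood_type' and 'priority' (higher = more urgent).
    Returns the best-matching recipient dict, or None if nobody is eligible."""
    eligible = [r for r in recipients
                if r["blood_type"] in COMPATIBLE[donor_blood_type]]
    return max(eligible, key=lambda r: r["priority"], default=None)
```

In the full system, each confirmed match would then be recorded as a blockchain transaction so the allocation is auditable.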
## How we built it
This application was built using React.js for the frontend of the platform, Python Flask for the backend and API endpoints, and Solidity+Web3.js for Ethereum Blockchain.
## Challenges we ran into
The biggest challenges we ran into involved connecting the different components of our project. We had three major components (frontend, backend, and blockchain) that needed to be integrated together, and this turned out to be the biggest hurdle in our project. Dealing with the API endpoints and Solidity integration was one of the problems we had to leave for future development. One challenge we did solve was the difficulty of backend development and setting up API endpoints: without persistent data storage in the backend, we implemented basic storage using localStorage in the browser to preserve the user experience. This allowed us to implement a majority of our features as a temporary fix for our demonstration. Some other challenges we faced included figuring out certain syntactical elements of the new technologies we dealt with (such as using Hooks and State in React.js). It was a great learning opportunity for our group, as immersing ourselves in the project allowed us to become more familiar with each technology!
## Accomplishments that we're proud of
One notable accomplishment is that every member of our group interfaced with new technology they had little to no experience with! Whether it was learning how to use React.js (such as learning about React fragments) or working with Web3.0 technology such as the Ethereum Blockchain (using MetaMask and Solidity), each member worked on something completely new! Although there were many components we simply did not have time to complete due to the scope of TreeHacks, we were still proud of being able to put together a minimum viable product in the end!
## What we learned
* Fullstack Web Development (with React.js frontend development and Python Flask backend development)
* Web3.0 & Security (with Solidity & Ethereum Blockchain)
## What's next for OrganSafe
After TreeHacks, OrganSafe will first look to tackle some of the areas that we did not get to finish during the Hackathon. Our first step would be to finish development of the full-stack web application we intended by fleshing out our backend and moving forward from there. Persistent user data in a database would also allow users and donors to continue using the site after an individual session. Furthermore, scaling both the site and the blockchain would allow usage by a larger audience, letting more recipients be matched with donors.
|
## Inspiration:
I lost my grandmother to an incompetent doctor and almost lost my partner during travel to a rare tropical illness that was misdiagnosed by two doctors. Additionally, my military service in austere parts of the world illuminated varying degrees of poverty to include a lack of access to healthcare services. CarpeMed's vision is to ensure that access to uncompromised healthcare isn't only for the wealthy. Ten years from now we want to reduce medical deserts by fifty percent ensuring that historically marginalized groups and those in emerging markets benefit from the synergies of technology and medical advancements. The CarpeMed app will facilitate early diagnosis, minimize logistical hurdles, and be the primary platform for swift humanitarian response.
## What it does:
CarpeMed is built for international travel and provides global access to your health data and health providers so you can travel freely whether prepared or not. CarpeMed's mobile-optimized platform allows you to carry critical medical information with you for tailored medical treatment with vetted medical providers no matter where you are in the world. CarpeMed sits at the intersection of travel and healthcare serving those with pre-existing medical conditions and emergent medical needs by allowing the former to curate a travel experience with informed proximity to the best treatment.
## How we built it:
We decided on Google Firebase for our database. The front end utilizes ReactJS to create an experience for both Android and iOS devices, as well as providing tools for map integration. The application interfaces with Google Firebase for a secure login protocol.
## Challenges we ran into:
We weren't able to integrate mapbox with React Native, so we used react-native-maps.
## Accomplishments that we're proud of:
CarpeMed's user interface and the cohesion of our team. As a combat veteran, the strength of the team is essential and this was a great experience.
## What we learned:
We learned more about the capabilities of React Native, as well as the synergies between our mission and the user experience. To be sure, the intent is for the user to feel embraced in the intuitive care of our app.
## What's next for CarpeMed:
Transitioning from a pre-accelerator program at Berkeley Haas to an Accelerator Program in the Spring.
|
## Inspiration
We wanted to create a game during to ease the boredom of quarantine, so we developed a text-based endless dungeon crawler.
## What it does
Dungeon Dwellers is a dungeon crawling game wherein you fight your way up an endless dungeon against monsters to find better gear and level up.
## How we built it
We built it using Java and Java Swing for our GUI. Java was used to create all the objects required for the game's functionality, and Java Swing was used to create a GUI for the user's ease of use.
## Challenges we ran into
We had challenges integrating the front and back end.
## Accomplishments that we're proud of
We are proud that the game functions as wanted, and has a degree of randomness which makes every floor of the game feel a little different.
## What we learned
We learned how to work in a team toward a common goal. Each person had their own tasks to complete, and the pieces worked well when they came together.
## What's next for Dungeon Dwellers
We will continue to work on the game, refining it and adding more features to make it even better.
|
## About Us
Discord Team Channel: #team-64
omridan#1377,
dylan28#7389,
jordanbelinsky#5302,
Turja Chowdhury#6672
Domain.com domain: positivenews.space
## Inspiration
Over the last year, headlines across the globe have been overflowing with negative content that clouded over any positive information. In addition, everyone has been so focused on what is going on in other corners of the world that they have not been focusing on their local community. We wanted to bring some pride and positivity back into everyone's community by spreading positive headlines from the user's location. Our hope is that our contribution shines a light in these darkest of times and spreads a message of positivity to everyone who needs it!
## What it does
Our platform utilizes the general geolocation of the user along with a filtered API to produce positive articles about the users' local community. The page displays all the articles by showing the headlines and a brief summary and the user has the option to go directly to the source of the article or view the article on our platform.
## How we built it
The core of our project uses the Aylien news API to gather news articles from a specified country and city while reading only positive sentiments from those articles. We then used the IPStack API to gather the user's location via their IP address. To reduce latency and maximize efficiency, we used JavaScript in tandem with React, as opposed to a backend solution, to filter the data received from the APIs, display the information, and embed the links. Finally, using a combination of React, HTML, CSS, and Bootstrap, we created a clean, modern, and positive front-end design to display the information gathered by the APIs.
## Challenges we ran into
The most significant challenge we ran into while developing the website was determining the best way to filter through news articles and classify them as "positive". Due to time constraints the route we went with was to create a library of common keywords associated with negative news, filtering articles with the respective keywords out of the dictionary pulled from the API.
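The keyword approach can be sketched in a few lines; the keyword list below is an illustrative assumption (the real library was larger), and a sentiment model would eventually replace it.

```python
# Sketch of the negative-keyword filter described above. The keyword
# set is an illustrative, non-exhaustive assumption.

NEGATIVE_KEYWORDS = {"death", "crisis", "attack", "fraud", "outbreak"}

def is_positive(article):
    """article: dict with 'title' and 'summary' strings."""
    text = f"{article['title']} {article['summary']}".lower()
    return not any(word in text.split() for word in NEGATIVE_KEYWORDS)

def filter_positive(articles):
    """Keep only articles with no flagged keywords."""
    return [a for a in articles if is_positive(a)]
```

Each batch of articles pulled from the news API would pass through this filter before rendering, so only the positive headlines reach the page.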
## Accomplishments that we're proud of
We built a standard Bootstrap layout, a grid of rows and columns, enabling responsive design for compatibility and displaying more content on every device. We also used React functionality to enable randomized background gradients from a selection of pre-defined options, adding variety to the site's appearance.
## What we learned
We learned a lot of valuable skills surrounding the aspect of remote group work. While designing this project, we were working across multiple frameworks and environments, which meant we couldn't rely on utilizing just one location for shared work. We made combined use of Repl.it for core HTML, CSS and Bootstrap and GitHub in conjunction with Visual Studio Code for the JavaScript and React workloads. While using these environments, we made use of Discord, IM Group Chats, and Zoom to allow for constant communication and breaking out into sub groups based on how work was being split up.
## What's next for The Good News
In the future, the next major feature to be incorporated is one which we titled "Travel the World". This feature will utilize Google's Places API to incorporate an embedded Google Maps window in a pop-up modal, which will allow the user to search or navigate and drop a pin anywhere around the world. The location information from the Places API will replace those provided by the IPStack API to provide positive news from the desired location. This feature aims to allow users to experience positive news from all around the world, rather than just their local community. We also want to continue iterating over our design to maximize the user experience.
|
## Inspiration
As college students ourselves, we've been through a few lectures. Sometimes we have conflicting activities and can't make it to class. Sometimes we don't have conflicting activities and *still* don't make it to class. Or perhaps we attend class and wish we could rewind time to hear what the professor said just once more. This serves as the inspiration behind **Recap.ai**. While it is true that some lectures are recorded and uploaded by professors, many are not and those that are available do not provide closed captioning. Therefore, students that are hearing impaired are at a major disadvantage when it comes to obtaining the content from lecture. Recap.ai bridges that gap by enabling students and professors to share lecture transcripts and promote access to knowledge.
## What it does
Recap.ai provides a platform for students and professors to record lectures and upload the audio to assembly.ai, so that the transcripts can be available on our site. Users select whether they are a student or professor, which school they attend, which class they want to view, and whether they want to view existing transcriptions or record a new lecture. To record a lecture, simply select the record option and click start. When the recording is finished, click stop and the recording will automatically be uploaded to assembly.ai which generates the transcription which is then sent to a server and returned to the user. If the user opts to watch existing lectures, they will be brought to a page containing a list of previous recorded lectures, which each link to that lecture's transcription.
## How we built it
First, we prototyped our project on Figma to streamline the process of collaborative idea generation. Then we started development. This is a react.js based web application, which allows us to utilize state and build dynamic web pages. We used tailwind-css to enhance the UI, assembly.ai to provide transcriptions for the lecture audio, and firebase to store key information on the uploaded lectures.
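The upload-then-poll flow against AssemblyAI can be sketched as below. The endpoint path and status values follow AssemblyAI's public REST API as documented; treat them as assumptions to verify against the current docs, and note the network call requires a real API key.

```python
# Hedged sketch of how a recorded lecture becomes a transcript:
# request a transcript for an uploaded audio URL, then poll until the
# job settles. Endpoint paths follow AssemblyAI's public REST API;
# verify against the current documentation before relying on them.
import json
import urllib.request

API_BASE = "https://api.assemblyai.com/v2"

def transcript_payload(audio_url, language_code="en"):
    """Body for the transcript-creation request."""
    return {"audio_url": audio_url, "language_code": language_code}

def is_settled(status):
    """Polling stops once the job has completed or errored."""
    return status in {"completed", "error"}

def request_transcript(api_key, audio_url):
    """Fire the creation request (network call; requires a real API key)."""
    req = urllib.request.Request(
        f"{API_BASE}/transcript",
        data=json.dumps(transcript_payload(audio_url)).encode(),
        headers={"authorization": api_key, "content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]
```

The returned transcript id is what gets stored in Firebase alongside the lecture metadata, so the frontend can fetch the finished text later.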
## Challenges we ran into
One common theme for the challenges we encountered was us taking on lofty goals and then immediately trying to achieve them, instead of splitting up the task into more attainable subtasks. For example, we wanted to quickly develop the frontend of our application, so we could focus on the more technical aspects of the backend. One of our team members heard of an application called plasmic, [link](https://www.plasmic.app/), that would enable us to collaboratively construct the frontend of our project using drag-and-drop mechanics, and then automatically convert our design into react code. There were a few problems with this approach. One was that we lost some customizability of our design by only using prebuilt components. Another was that there was a large overhead within the generated react code, which made modifications especially tedious. After close to 2 hours of using plasmic, we had a group discussion and decided to hedge our losses. We abandoned our work on plasmic, migrated our design to Figma, and built the frontend from scratch. Another challenge we faced was using the API for assembly.ai and Firebase. Although there is ample documentation for both APIs, we struggled to place the pieces of the puzzle together when it came time to combine them. There was a lot of trial and error involved with constructing the proper flow of data among the many components. This may have been one of the most time consuming aspects of the project.
## Accomplishments that we're proud of
We built and deployed a full-stack web application in less than 36 hours. Although every group at this hackathon completed a project in the same time frame, we are especially proud of our project since two members of our group had zero hackathon experience. There was definitely a steep learning curve involved with this project, but we overcame it and left the hackathon wiser than when we entered.
## What we learned
We learned a few valuable lessons from the challenges that we faced. Firstly, we should always plan before making major decisions; the time taken to plan is always worth the time saved by not moving down the wrong path. Secondly, we should not use tools that we don't fully understand, as they can sometimes do more harm than good. It may be enticing to use the latest and greatest technology in your project, but its benefit is lost without knowledge of its workings. Lastly, don't fall into the sunk cost fallacy: it's never too late to do something the right way. Starting over is usually less of a hassle than we make it out to be, and it prevents headaches down the road.
## What's next for Recap.ai
Although we accomplished a lot in the past 36 hours, there is much more we would like to add. For example, we want to add the possibility for live transcription, which is enabled through the premium version of assembly.ai. We also want to include the audio in our stored lectures instead of just the transcriptions. Additionally, we would like to highlight the current word in the transcript when playing back the audio. Another feature we would like to add is that upon trying to record a lecture, users will receive a notification when another user is already currently recording that lecture. This would prevent multiple copies of the same lecture from appearing in the database.
|
## Inspiration
We were inspired by Pokemon Go and Duolingo. We believe that exploring the world and learning a new language pair up well together. Associating new vocabulary with real experiences helps you memorize it better.
## What it does
* Explore your surroundings to find landmarks around you.
* To conquer a landmark you will have to find and photograph objects that represent the vocabulary words we assigned to the land mark.
* As you conquer landmarks you will level up and more advanced landmarks (with more advanced vocab) will be unlocked.
* Can you conquer all the landmarks and become a Vocab Master?
## How we built it
* We used React Native for the app, react-native-maps for the map, the Expo camera for the camera, and Python for the backend.
* We used the Google Cloud Vision API for object recognition and annotated images with identified key object labels.
* We used the Google Cloud Translate API to translate the names of identified objects into the user's selected target language.
* We used the Gemini API to generate useful questions based on identified objects in the picture the user takes.
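The step from a Vision API response to vocabulary words can be sketched as picking the highest-confidence labels. This assumes the response has already been parsed into (label, score) pairs; the 0.7 threshold and limit of three are our own illustrative choices.

```python
# Sketch: turn Cloud Vision-style label annotations into vocabulary
# items. Assumes the API response is already parsed into (label, score)
# pairs; the threshold and limit are illustrative assumptions.

def pick_vocab(labels, threshold=0.7, limit=3):
    """Keep the highest-confidence labels worth turning into vocab words."""
    confident = [(name, score) for name, score in labels if score >= threshold]
    confident.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in confident[:limit]]
```

The selected labels are then what the Translate API renders into the target language and what Gemini builds its practice questions around.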
## Challenges we ran into
* Stitching together our front and back ends
* We ran into issues with Firebase deployment in particular, as with other Flask app hosting services
## Accomplishments that we're proud of
* Created a unique UI that interfaces with the camera on the user's mobile device
* Created a landmark exploration map for users to embark on vocabulary challenges
* Created a quiz feature in React Native that ensures users review their learned vocabulary regularly: every fifth photo taken, users must select the correct translations of words and phrases from previous photos
* Developed a Python backend that takes the URI of an image as input and returns the image annotated with key objects, along with translations of those objects in the selected target language and example sentences using them, based on the user's surroundings
## What we learned
* Repository branch organization is important (keep git good)
* Dividing tasks among teammates leads to more rapid progress, but integration can be challenging
* Prioritization is key; when one thing is not working, it is better to get some functionality working given the time constraints of the Hackathon
## What's next for Pocket Vocab by Purple Cow
-Polishing the UI and integration between all the services and ensuring features work seamlessly together
-Add additional features to increase social aspects of the app. Such as conquering landmarks by holding the record for largest vocabulary at a certain landmark
|
## Inspiration
We are two college students renting a house by ourselves with a high energy bill due to heating during Canada’s winter. The current solutions in the market are expensive AND permanent. We cannot make permanent changes to a house we rent, and we couldn’t afford them to begin with.
## What it does
Kafa is a thermostat enhancer with an easy installation that lets you remove it at any time with no assistance. There’s no need to get handy playing with electrical wires and screwdrivers. Just simply take Kafa out from the box, clip over your existing thermostat, and slide in the battery. If you switch apartments, offices, or dorm rooms, take Kafa with you. Simply clip off!
Kafa saves you money in installation fees, acquisition of hardware, and power bill. It keeps track of your usage patterns and even allows you to set up power saving mode.
## How we built it
The Kafa body was modelled using Fusion 360. The CAD models for the electronic components were sourced from Grab CAD. Everything else was modelled from scratch.
For the electronics we used an SG90 servo that we hacked, an analog-to-digital converter, a Raspberry Pi Zero, a buck converter, a temperature sensor, an RGB LED, a potentiometer, and a battery we took from a camera light. We 3D printed the body of Kafa so that it would hold the individual components together in a compact manner, then wired it all up.
On the software side, Kafa is built using docker containers, which makes it highly portable, modular, secure and scalable. These containers run flask web apps that serve as controllers and actuators easily accessible by any browser enabled device; to store data we use a container running a MySQL database.
## Challenges we ran into
The most challenging aspect of the physical design was staying true to the premise of “easy installation” by coming up with non-permanent methods of attachment to the thermostats at our home. We wanted to design something that didn’t use screws, bolts, glue, tape, etc. Designing the case to be compact whilst planning for cable management was also hard.
The most challenging part of the software development was the servo calibration which allows it to adapt to any thermostat dial. To accomplish this, we had to 'hack' the servo and solder a cable to the variable resistor in order to read its position.
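The calibration described above boils down to a linear map from the hacked servo's potentiometer reading (via the ADC) to a dial angle. A minimal sketch of that mapping; the function name, clamping behaviour, and 0–180° range are our assumptions, not Kafa's actual code:

```python
def adc_to_angle(raw, raw_min, raw_max, angle_min=0.0, angle_max=180.0):
    """Map a raw ADC reading from the servo's potentiometer to a dial angle.

    raw_min and raw_max are the readings captured at the thermostat dial's
    end stops during a one-time calibration sweep.
    """
    raw = min(max(raw, raw_min), raw_max)  # clamp to the calibrated range
    span = raw_max - raw_min
    return angle_min + (raw - raw_min) * (angle_max - angle_min) / span
```

Because the end-stop readings are measured per thermostat, the same mapping adapts to any dial.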
## Accomplishments that we're proud of
The most rewarding aspect of the physical design was accurately predicting the behaviour of the physical components and how they would fit inside the compact case. Foreseeing and accounting for possible manufacturing issues while still in the CAD program made the construction of our project run much more smoothly (mostly).
The accomplishment, with regards to software, that we are most proud of is that everything is containerized. This means that in order to replicate our setup you just need to run the docker images in the destination devices.
## What we learned
One of the most important lessons we learned is effectively communicating technical information to each other regarding our respective engineering disciplines (mechanical and computer). We also learned about the potential of IoT devices to be applied in the most simple and unforeseen ways.
## What's next for Kafa - Thermostat Enhancer
To improve Kafa in future iterations we would like to:
* Optimize circuitry to use low power, Wi-Fi enabled MCU so that battery life lasts months instead of hours
* Implement a learning algorithm so that Kafa can infer your active hours and save even more electricity
* Develop universal attachment mechanisms to fit any brand and shape of thermostat.
## Acknowledgments
* [docker-alpine-pigpiod - zinen](https://github.com/zinen/docker-alpine-pigpiod)
* [Nest Thermostat Control - Dal Hundal](https://codepen.io/dalhundal/pen/KpabZB)
* [Raspberry Pi Zero W board - Vittorinco](https://grabcad.com/library/raspberry-pi-zero-w-board-1)
* [USB Cable - 3D-2D CAD Design](https://grabcad.com/library/usb-cable-31)
* [Micro USB Plug - Yuri Malina](https://grabcad.com/library/micro-usb-plug-1)
* [5V-USB-Booster - Erick Robles](https://grabcad.com/library/5v-usb-booster-1)
* [Standard Through Hole Potentiometer (Vertical & Horizontal) - Abel Villanueva](https://grabcad.com/library/standard-through-hole-potentiometer-vertical-horizontal-1)
* [SG90 - Micro Servo 9g - Tower Pro - Matheus Frasson](https://grabcad.com/library/sg90-micro-servo-9g-tower-pro-1)
* [Volume Control Rotary Knobs - Kevin Yu](https://grabcad.com/library/volume-control-rotary-knobs-1)
* [Led RGB 5mm - Terrapon Théophile](https://grabcad.com/library/led-rgb-5mm)
* [Pin Headers single row - singlefonts](https://grabcad.com/library/pin-headers-single-row-1)
* [GY-ADS1115 - jalba](https://grabcad.com/library/gy-ads1115-1)
|
## Inspiration
Our inspiration for Sustain-ify came from observing the current state of our world. Despite incredible advancements in technology, science, and industry, we've created a world that's becoming increasingly unsustainable. This has a domino effect, not just on the environment, but on our own health and well-being as well. With rising environmental issues and declining mental and physical health, we asked ourselves: *How can we be part of the solution?*
We believe that the key to solving these problems lies within us—humans. If we have the power to push the world to its current state, we also have the potential to change it for the better. This belief, coupled with the idea that *small, meaningful steps taken together can lead to a big impact*, became the core principle of Sustain-ify.
## What it does
Sustain-ify is an app designed to empower people to make sustainable choices for the Earth and for themselves. It provides users with the tools to make sustainable choices in everyday life. The app focuses on dual sustainability—a future where both the Earth and its people thrive.
Key features include:
1. **Eco Shopping Assistant**: Guides users through eco-friendly shopping.
2. **DIY Assistant**: Offers DIY sustainability projects.
3. **Health Reports**: Helps users maintain a healthy lifestyle.
## How we built it
Sustain-ify was built with a range of technologies and frameworks to deliver a smooth, scalable, and user-friendly experience.
Technical Architecture:
Frontend Technologies:
* Frameworks: Flutter (Dart) and Streamlit (Python) were used for the graphical user interface (front end).
* Future services: integration with third-party services such as Twilio, Lamini, and Firebase for added functionality like messaging and real-time updates.
Backend & Web Services:
* Node.js & Express.js: For the backend API services.
* FastAPI: RESTful API pipeline used for HTTP requests and responses.
* Appwrite: Backend server for authentication and user management.
* MongoDB Atlas: For storing pre-processed data chunks into a vector index.
Data Processing & AI Models:
* ScrapeGraph.AI: LLM-powered web scraping framework used to extract structured data from online resources.
* Langchain & LlamaIndex: Used to preprocess scraped data and split it into chunks for efficient vector storage.
* BGE-Large Embedding Model: From Hugging Face, used for embedding textual content.
* Neo4j: For building a knowledge graph to improve data retrieval and structuring.
* Gemini, GPT-4o & Groq: large language models used for inference, running on LPUs (Language Processing Units) for a sustainable inference mechanism.
Additional Services:
* Serper: Provides real-time data crawling and extraction from the internet, powered by LLMs that generate queries based on the user's input.
* Firebase: Used for storing and analyzing user-uploaded medical reports to generate personalized recommendations.
Authentication & Security:
* JWT (JSON Web Tokens): For secure data transactions and user authentication.
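The preprocessing step above splits scraped data into chunks before embedding and vector storage. As an illustrative sliding-window splitter in plain Python (not the actual Langchain/LlamaIndex configuration; chunk sizes and overlap are placeholders):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks for vector storage.

    Overlap keeps context that straddles a chunk boundary retrievable
    from both neighbouring chunks.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded (e.g. with the BGE-Large model) and written to the vector index in MongoDB Atlas.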
## Challenges we ran into
Throughout the development process, we faced several challenges:
1. Ensuring data privacy and security during real-time data processing.
2. Handling large amounts of scraped data from various online sources and organizing it for efficient querying and analysis.
3. Scaling the inference mechanisms using LPUs to provide sustainable solutions without compromising performance.
## Accomplishments that we're proud of
We're proud of creating an app that:
1. Addresses both environmental sustainability and personal well-being.
2. Empowers people to make sustainable choices in their everyday lives.
3. Provides practical tools like the Eco Shopping Assistant, DIY Assistant, and Health Reports.
4. Has the potential to create a big impact through small, collective actions.
## What we learned
Through this project, we learned that:
1. Sustainability isn't just about making eco-friendly choices; it's about making *sustainable lifestyle* choices too, focusing on personal health and well-being.
2. Small, meaningful steps taken together can lead to a big impact.
3. People have the power to change the world for the better, just as they have the power to impact it negatively.
## What's next for Sustain-ify
Moving forward, we aim to:
1. Continue developing and refining our features to better serve our users.
2. Expand our user base to increase our collective impact.
3. Potentially add more features that address other aspects of sustainability.
4. Work towards our vision of creating a sustainable future where both humans and the planet can flourish.
Together, we believe we can create a sustainable future where both humans and the planet can thrive. That's the ongoing mission of Sustain-ify, and we're excited to continue bringing this vision to life!
|
## Inspiration
Sustainability is one of the core pillars of modern progress. We wanted to address this challenge by thinking about how we could allow for substantial improvement in sustainability by optimizing an existing system. That's why we landed on LLMs: their **meteoric rise in popularity** has changed the way millions of people search for and learn information. That being said, LLMs are **extremely inefficient** when it comes to the compute required for inference. With hundreds of millions of people relying on them for day-to-day searches, it is evident that we have reached a scale where **sustainability needs to be carefully considered**. We asked ourselves: how can we make LLMs more sustainable? Can we quantify that cost so users can understand how many resources they use/save? The key to the idea is that we wanted to propose a way to **dramatically improve sustainability with almost zero effort required** from the user's side. These are the principles that make our proposal both practical and most impactful.
## What it does
In essence, we leverage **vector embeddings to make LLMs more sustainable**. Every day, on ChatGPT alone, over 10 million queries are made. Even over a small period of time, query overlap is inevitable. Currently, LLMs run inference on every single query. This is unnecessary, especially for objective queries that are similar to one another. Instead of relying on inference by default, we **rely on vector-based similarity search** first. This takes approximately **1/15 of the compute** that a normal ChatGPT query would require. Now, what makes LLMs desirable is their customization of responses. We didn’t want to lose this vital component by relying solely on embedded vector search. Thus, we give users the option to request more information, which defaults to a traditional LLM query. Our approach therefore allows for sustainability that is orders of magnitude higher than before, **without compromising what people like most about LLMs**.
## How we built it
For embeddings and our vector database, we used Pinecone. Our app is created with NextJS (ReactJS, TailwindCSS, NodeJS). We utilize the OpenAI API for traditional query requests. For our similarity search we used cosine similarity, and given that a query crosses our significance threshold, we return up to the top three such cached queries for any given user input.
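A pure-Python sketch of the thresholded cosine-similarity lookup described above (in production Pinecone performs this search server-side; the function names and the 0.9 threshold here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def lookup(query_vec, index, threshold=0.9, top_k=3):
    """Return up to top_k cached answers whose similarity clears the threshold.

    index is a list of (embedding, cached_answer) pairs; if nothing clears
    the threshold, the caller falls back to a traditional LLM query.
    """
    scored = [(cosine(query_vec, vec), answer) for vec, answer in index]
    scored = [pair for pair in scored if pair[0] >= threshold]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [answer for _, answer in scored[:top_k]]
```

An empty result list is the signal to run real inference and then add the new (embedding, answer) pair to the shared index.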
## Challenges we ran into
This was our first time working with embeddings and a vector database. Thus, we had some issues with setup and adding a new embedding to the overall vector space. We wanted the space to be dynamic so that answers generated for users can be shared by all users if someone were to ask a similar query in the future. Other than that, integrating all the required APIs was a challenge as some functions were async while others weren’t which caused state-update issues. Luckily, with some debugging, we were able to sort it out.
## Accomplishments that we're proud of
Our final version is a **fully-functional prototype** of our idea. We are also astonished by the real statistics behind the resources our system can potentially save. Additionally, we took UI extremely seriously because we wanted a system that was **intuitive and appealing** for users to use. We also wanted a clear way for them to see the benefit of using our platform. We believe we have accomplished this in a simple, yet capable UI experience.
## What we learned
We learned about how to use vector embeddings for similarity search. We also learned how to tweak the confidence threshold such that the relevant responses actually match the queries we are looking for. Above all else, we learned just how many resources are used in day-to-day usage of ChatGPT. When starting this project, we had a prediction about LLM resource consumption, but we completely underestimated just how large it would be. These learnings made us realize that **our project can have even more impact** than we had anticipated.
## What's next for SustainLLM
We want to take the same processes and **apply them to other modalities** like audio and image generation. These modalities require significantly more compute than text generation, and if we could save even a small percentage of that compute, it could lead to drastic results. We are aware that creativity is a pivotal part of audio and image generation, and so we would use embeddings for lower-level things such as different pixel patterns or phonetics. That way, each generation can still be unique while consuming fewer resources.
**Let’s save the environment, one LLM query at a time :)**
|
## Inspiration
As we make our transition to the sustainable future, the sources we get our food from are going to change. Farms will be closer to our towns, and communities will cultivate gardens to supply hyperlocal food. HyperSmart Gardens aims to lead this transition by building sustainable gardens, providing educational content, and saving water with our smart irrigation system.
## What it does
We are designed to be a community organization that supports the agricultural climate of the communities we serve. Our first step is building residential gardens for those interested with sustainable materials. Then we'll target schools by offering agriculture curriculum and engineering workshops to show the backend operations of our smart gardening platform. We're designing our gardens from sustainable materials – ideally from waste products such as wooden shipping pallets and coffee bags. The smart irrigation system is powered by the sun with a 10W solar panel and 8Ah backup battery. The Raspberry Pi connects with our smartphone app to set a custom irrigation sequence. Future iterations of this platform will automatically water the plants based on plant type and weather conditions.
## How we built it
The raised garden bed was built with pine wood, weed cloth, and raised bed soil and assembled as a planter box. The iOS app communicated with our Firebase database to specify the irrigation settings of each user. Each night, the Raspberry Pi communicated with our database to update its local irrigation settings. A script wakes up the Pi when irrigation is starting and goes back to sleep after to conserve battery.
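The nightly-synced irrigation check described above might look roughly like this (a hedged sketch; the schedule format and function names are assumptions, not the actual Pi script):

```python
from datetime import datetime, time

def should_irrigate(now, schedule):
    """Check whether the pump should run right now.

    schedule: list of (start, end) time pairs, pulled from Firebase each
    night and cached locally so the Pi can sleep between windows.
    """
    t = now.time()
    return any(start <= t < end for start, end in schedule)
```

A wake-up script would call this at the start of each window, then put the Pi back to sleep to conserve the backup battery.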
## Challenges we ran into
This was our first introduction to gardening so there was a large learning curve to get started. We had to learn about irrigation, soil composition, and safe wood types to supplement our technical background to pull off this build.
## Accomplishments that we're proud of
We're proud of what we were able to learn and what we could build by being resourceful. We're also happy that our technical progress is off to a good start and sets up a sound framework for future development.
## What we learned
We learned the fundamentals of gardening, which we hope to continue expanding on to help others interested in the matter build their own sustainable gardens!
## What's next for HyperSmart Gardens
We look forward to bringing our idea to the local community, starting by building smaller gardens for those interested. From the lessons we learn at this step, we hope to bring our gardens to a wider audience and eventually to schools.
|
## Inspiration
Last year we had to go through the hassle of retrieving a physical key from a locked box in a hidden location in order to enter our AirBnB. After seeing the August locks, we thought there must be a more convenient alternative.
We thought of other situations where you would want to grant access to your locks. In many cases where you would want to only grant temporary access, such as AirBnB, escape rooms or visitors or contractors at a business, you would want the end user to sign an agreement before being granted access, so naturally we looked into the DocuSign API.
## What it does
The app has two pieces: a way for home owners to grant temporary access to their clients, and the way for the clients to access the locks.
The property owner fills out a simple form with the phone number of their client as a way to identify them, the address of the property, the end date of their stay, the details needed to access the August lock. Our server then generates a custom DocuSign Click form and waits for the client.
When the client accesses the server, they first have to agree to the DocuSign form, which is mostly our agreement but includes details about the time and location of the access granted, plus a section for the property owner to add their own details. Once they have agreed to the form, they can use our website to lock and unlock the August lock they have been granted access to via the internet, until the access period specified by the property owner ends.
## How we built it
We set up a Flask server, and made an outline of what the website would be. Then we worked on figuring out the API calls we would need to make in local python scripts. We developed the DocuSign and August pieces separately. Once the pieces were ready, we began integrating them into the Flask server. Then we worked on debugging and polishing our product.
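Server-side, the gate on lock control reduces to two conditions: the client agreed to the DocuSign form, and their stay has not ended. A minimal sketch of that check (field names are illustrative, not Unlocc's actual schema):

```python
from datetime import datetime

def access_allowed(client, now):
    """Decide whether a client may control the lock right now.

    client: record keyed by phone number, with 'signed' (DocuSign
    agreement completed) and 'end' (last moment of granted access).
    """
    return client.get("signed", False) and now < client["end"]
```

Because this check and the August API calls both run on the server, the renter never sees the lock credentials themselves.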
## Challenges we ran into
Some of the API calls were complex, and it was difficult to figure out which pieces of data were needed and how to format them in order to use the APIs properly. The hardest API piece to implement was programmatically generating DocuSign documents. Debugging was also difficult once we were working on the Flask server, but once we figured out how to use Flask debug mode, it became a lot easier.
## Accomplishments that we're proud of
We successfully implemented all the main pieces of our idea, including ensuring users signed via DocuSign, controlling the August lock, rejecting users after their access expires, and including both the property owner and client sides of the project.
We are also proud of the potential security of our system. The renter is given absolute minimal access. They are never given direct access to the lock info, removing potential security vulnerabilities. They login to our website, and both verification that they have permission to use the lock and the API calls to control the lock occur on our server.
## What we learned
We learned a lot about web development including how to use cookies, forms, and URL arguments. We also gained a lot of experience in implementing 3rd party API's.
## What's next for Unlocc
The next steps would be expanding the rudimentary account system with a more polished one, having a lawyer help us draft the legalese in the DocuSign documents, and contacting potential users such as AirBnB property owners or escape room companies.
|
## Inspiration
The current landscape of data aggregation for ML models relies heavily on centralized platforms, such as Roboflow and Kaggle. This causes an overreliance on invalidated human-volunteered data. Billions of dollars worth of information is unused, resulting in unnecessary inefficiencies and challenges in the data engineering process. With this in mind, we wanted to create a solution.
## What it does
**1. Data Contribution and Governance**
DAG operates as a decentralized and autonomous organization (DAO) governed by smart contracts and consensus mechanisms within a blockchain network. DAG also supports data annotation and enrichment activities, as users can participate in annotating and adding value to the shared datasets.
Annotation involves labeling, tagging, or categorizing data, which is increasingly valuable for machine learning, AI, and research purposes.
**2. Micropayments in Cryptocurrency**
In return for adding datasets to DAG, users receive micropayments in the form of cryptocurrency. These micropayments act as incentives for users to share their data with the community and ensure that contributors are compensated based on factors such as the quality and usefulness of their data.
**3. Data Quality Control**
The community of users actively participates in data validation and quality assessment. This can involve data curation, data cleaning, and verification processes. By identifying and reporting data quality issues or errors, our platform encourages everyone to actively participate in maintaining data integrity.
## How we built it
DAG was built using Next.js, MongoDB, Cohere, Tailwind CSS, Flow, React, Syro, and Soroban.
|
# VOICE
Voice is an app that helps people with speech disabilities communicate by recognizing their gestures and producing text and speech.
* Some of the challenges faced by people who are unable to speak:
1. A lack of communication medium leads to confusion
2. They cannot use the call feature.
3. When using ASL, interpreters are required; they are scarce and quite expensive, even after the implementation of the ADA.
4. Healthcare settings and courtrooms pose a much different and much deadlier challenge.
5. Hospitals, schools, and healthcare organizations need to include cultural competency and humility training with authentic community representation.
# Features!
* Easy to use: just use ASL gestures to convert them into text and speech
## Working
* Data was created by capturing about 1,200 frames for each action.
* A CNN was trained on the data using the Keras library.
* Each action was mapped to its corresponding text.
* A live video stream is captured using OpenCV and interpreted frame by frame.
* The recognized gesture is output as text.
* The text is sent to the Azure text-to-speech API for conversion into speech.
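Because the stream is interpreted frame by frame, per-frame CNN predictions can flicker between labels; a common stabilizer is a sliding majority vote over recent frames. A sketch of that idea (this smoothing step is our assumption, not necessarily part of Voice):

```python
from collections import Counter, deque

class GestureSmoother:
    """Stabilize per-frame gesture predictions with a sliding majority vote."""

    def __init__(self, window=15):
        self.frames = deque(maxlen=window)  # last `window` raw predictions

    def update(self, label):
        """Record one frame's raw prediction, return the smoothed label."""
        self.frames.append(label)
        return Counter(self.frames).most_common(1)[0][0]
```

The smoothed label is what would be converted to text and forwarded to the text-to-speech API, so a single misclassified frame does not change the spoken output.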
### Techstack used
* Python
* OpenCV
* Keras
* JavaScript
* Flask
* HTML, CSS, Materialize
* Git, GitHub
* Azure Cloud Services (text-to-speech API)
* DocuSign API for terms & conditions related to the use of images of the user
### ScreenShots
[Screenshot 1](screen1.jpeg)
[Screenshot 2](screen2.jpeg)
[Screenshot 3](screen3.jpeg)
[Screenshot 4](screen4.jpeg)
[Screenshot 5](screen5.jpeg)
### Future Work
-converting the application into a fully functional web application(end to end)
### Developers
>
> Abhay R Patel
> Uddhav Navneeth
> Aditya Srivastava
>
>
>
|
## What it does
Khaledifier replaces all quotes and images around the internet with pictures and quotes of DJ Khaled!
## How we built it
Chrome web app written in JS interacts with live web pages to make changes.
The app sends a quote to a server, which tokenizes words into types using NLP.
This server then makes a call to an Azure Machine Learning API that has been trained on DJ Khaled quotes to return the closest matching one.
## Challenges we ran into
Keeping the server running with older Python packages and for free proved to be a bit of a challenge
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
## Inspiration
As NFTs are exploding onto the scene in 2021, those unfamiliar with the technology behind them are left wondering why. We have been hearing about how Jack Dorsey sold an NFT of his first-ever tweet for over $2.9M or about Dapper Lab’s NBA Top Shot selling NFTs of NBA season moments for incredible amounts. According to DappRadar, which tracks sales across multiple blockchains, NFT sales volume surged to $2.5bn in the first half of 2021 and to $10.67bn in Q3. To understand the reasoning behind this novel concept, it requires a lot of research and technical understanding of blockchain to fully grasp what exactly is going on - and it can get overwhelming to decide where to even begin, with resources scattered and minimal interactive lessons available to learn about these different topics.
So we want to bring financial literacy about these relatively new fintech concepts to a broader audience and contribute to a more inclusive economy. To achieve this, we hacked together an interactive experience to help individuals learn about DeFi and NFTs.
## What it does
It helps individuals start learning about decentralized finance, blockchain, NFTs, and more through curated questions and interactive exercises designed to get them hands-on experience with varying concepts.
## How we built it
We built a web application on Ethereum using Solidity and Javascript. For our interactive exercise (shown in our demo), we walk users through easily creating a freshly minted NFT by deploying a smart contract on Ethereum and storing it on IPFS and Filecoin using nft.storage. We used nft.storage to upload an image file and the NFT’s metadata to IPFS, utilizing its content addressing and producing a content identifier. Then we show the CID and IPFS URI to users, allowing them to easily access their first NFT!
## Challenges we ran into
One of the biggest challenges we faced was our inexperience and lack of domain knowledge in DeFi, blockchain, and NFTs. Getting a grasp on the concepts and topics was tough in itself, but then trying to build on top of blockchain protocols was a whole other beast to tackle.
## Accomplishments that we're proud of
Honestly, just the amount of knowledge and information we have learned from researching blockchain, NFTs, Filecoin, Ethereum, and then being able to apply that knowledge to build a web app on Ethereum in such a short amount of time is a great accomplishment that we’re proud of.
## What we learned
We learned a lot about the intricacies of building on blockchain protocols as we have never done so before. More importantly, we were able to take this time to learn a lot about DeFi, how blockchain works, and where and how NFTs fit into the grand scheme of the evolving economy. What we have learned during the hackathon is by no means comprehensive, but it is a starting point to a better understanding. We hope that our educational project can demystify DeFi for beginners!
## What's next for DeFi Edu
Curating more resources and topics so people can learn more about different topics relating to DeFi and understand more about the evolving economy.
|
## Inspiration
In the game Death Stranding, there is a similar mechanic for tracking something ordinary people cannot see. Based on the theme of connectivity, we decided to develop a project that tracks the brightest position.
**Where there is light, there is hope, and there is connectivity.**
## What it does
Scan around with two motors (x-axis and y-axis respectively). After finding the brightest location, the device keeps pointing toward it. The remaining three motors perform periodic motions (like fingers) indicating the best location.
## How we built it
We used an Arduino, a breadboard, servo motors, and a light sensor.
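The two-axis sweep can be sketched as a grid search for the highest light reading. Here it is in Python for clarity, though the actual build runs this logic on the Arduino in C; the step counts and function names are placeholders:

```python
def scan_brightest(read_light, pan_steps=8, tilt_steps=4):
    """Sweep a pan/tilt grid and return the (pan, tilt) step with the
    highest light-sensor reading.

    read_light(pan, tilt) stands in for whatever samples the photoresistor
    after the two servos move to that grid position.
    """
    best = None
    for pan in range(pan_steps):
        for tilt in range(tilt_steps):
            reading = read_light(pan, tilt)
            if best is None or reading > best[0]:
                best = (reading, pan, tilt)
    return best[1], best[2]
```

After the sweep, the two positioning servos would hold the winning (pan, tilt) while the remaining three motors run their periodic "finger" motions.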
## Challenges we ran into
The Bluetooth shield was not working appropriately. We spent several hours trying to connect to it with our laptops so that we could communicate with it.
## Accomplishments that we're proud of
Implementing five motors and coordinating their motions, and successfully finding the brightest position.
## What we learned
Arduino debugging skills.
## What's next for LightSeeker寻光计划
If possible, develop a UI interface on the laptop that could communicate with it using Bluetooth.
|
## Inspiration
Kabeer and I (Simran!) care deeply about impact, and building cool sh\*t.
He has a background in IoT and my background is in computational bio/health technologies. When thinking about what we want to create during HTN, we decided it would be most fun to find an intersection between hardware and healthcare.
We perceive up to 80% of all impressions through our sight, yet there are 43 million people living with blindness and 295 million living with moderate-to-severe visual impairment. Our team wanted to build a product that could assist blind people in navigating their world with the most ease possible. Building a tool that could scan the person's surroundings and alert them was most important to us. Thus project ABEL began.
## What it does
When there is an object within 50 cm of the cane, the buzzer buzzes, alerting the user of obstacles in the way of their movement.
## How we built it
We connected the Arduino to ultrasonic sensors, which measure the distance to an object by timing the gap between emission and reception. Once the Arduino and ultrasonic sensors were connected, we coded it so that if the distance is less than 50 cm, a buzzer rings, alerting the person that there is an object nearby.
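The distance check reduces to converting the sensor's round-trip echo time into centimetres and comparing against the 50 cm limit. A sketch of that arithmetic in Python (the actual Arduino sketch would do the same math in C; the constant is the speed of sound, ~343 m/s):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # cm travelled per microsecond

def echo_to_cm(echo_us):
    """Convert a round-trip ultrasonic echo time (microseconds) to distance.

    Halved because the pulse travels to the object and back.
    """
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_buzz(echo_us, limit_cm=50):
    """True when an obstacle is within the alert radius of the cane."""
    return echo_to_cm(echo_us) < limit_cm
```

An echo of roughly 2,900 µs corresponds to the 50 cm boundary, so anything returning faster triggers the buzzer.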
## Challenges we ran into
Time was the biggest constraint we had this entire weekend. A matter of a couple of hours could have changed our entire next steps, if only we had access to the right hardware (Wi-Fi modules!) and time to practice and run our software.
## Accomplishments that we're proud of
We are proud of ourselves for learning to use the Arduino and its associated hardware in a short amount of time. We taught ourselves to run the IDE and worked through many small examples before diving into our main idea.
## What we learned
Kabeer has worked with hardware, IoT, and software on a couple of projects in the past, but this hackathon was an incredible learning experience for both of us. From using Arduinos and breadboards, learning to use the mini Arduino with Wi-Fi (the Particle Photon), and hard-coding our solution, to failing multiple times and getting to know more about how wiring works between breadboards, this entire weekend was an amazing learning experience. We are both glad we spent the time together working on something that sat at the intersection of our interests.
## What's next for ABEL
As we brainstormed multiple ideas in the days before the hackathon, our final idea was to combine two that made sense together: fall detection linked directly to an SMS/email integration, letting an emergency contact know that the user has fallen.
|
## The Problem 🤔
In the midst of an ecological crisis, one of Canada's largest environmental problems is being caused by one of its smallest critters - the dreaded Mountain Pine Beetle. Between 1990 and 2012, the Beetle ate its way through approximately 723 million cubic metres (53%) of all the merchantable pine in BC, and it's on the rise again today.
Technology provides an obvious solution to this problem - drones and satellites can be used to scope out large areas of forest without needing to send forest rangers out, saving timber managers time and money. The beetle can then be eradicated via targeted bursts of insecticide sprayed from drones. Yet, existing insecticide spraying drones are expensive ($18000+ CAD) and can only spray on a predefined route, which is laborious to find and program.
## The Solution 💡
Hence, we present an all-in-one system for combatting this epidemic, comprising:
* Convolutional Neural Network model for detecting standing dead pines from satellite imagery, to enable automatic targeting of regions suffering from the beetle.
* Drone attachment for available hobby drones (cheap and could be "borrowed" by national park service when not in use by local citizens) to enable spraying of trees on the periphery of an affected area with insecticide to create a "firebreak".
* Web server with 3D globe for viewing the drone's progress in eradicating the beetle
The satellite imagery dataset could easily be replaced with data collected from the drone camera itself when moving to production, and the model retrained in a few minutes, though we weren't able to create this dataset ourselves due to not living in Canada.
## How we built it 🤖
A [BBC micro:bit](https://en.wikipedia.org/wiki/Micro_Bit) is mounted to the drone and attached to a servo motor used to release the insecticide. The micro:bit uses standard 2.4GHz radio to communicate with another micro:bit on the ground, using protocol buffers as the communication medium. We use protocol buffers as they enable us to send a large amount of data efficiently, in the small amount of bytes available over the radio.
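As an illustration of the kind of wire format such a custom implementation has to produce, here is a minimal Python sketch of protobuf's base-128 varint encoding and a varint-typed field tag. This is the standard protobuf wire format, not the team's actual micro:bit code:

```python
def encode_varint(value: int) -> bytes:
    """Protobuf base-128 varint: 7 data bits per byte, MSB set as a continuation flag."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes, pos: int = 0):
    """Decode a varint starting at `pos`; return (value, next_pos)."""
    result = shift = 0
    while True:
        byte = data[pos]
        result |= (byte & 0x7F) << shift
        pos += 1
        if not byte & 0x80:
            return result, pos
        shift += 7

def encode_field(field_number: int, value: int) -> bytes:
    """Encode a varint-typed field: tag = (field_number << 3) | wire_type 0."""
    return encode_varint(field_number << 3) + encode_varint(value)
```

Keeping only this subset (no reflection, no nested messages) is what makes a protobuf-compatible encoder feasible in 16 KiB of RAM.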
The second micro:bit uses serial to communicate with the computer running the web server that displays the status of the project. A WebGL Earth 3D globe with Mapbox satellite tiles is used to depict where the drone is and the area it needs to cover. We use an API request to feed this data into the JS, after the server decodes it from protobuf.

We tested several machine learning models on satellite imagery and found that a CNN was the most effective model.
## Challenges we ran into 😬
* Trying to get protocol buffers to work on the micro:bit, which has only 16 KiB of usable RAM. To solve this, we developed a custom protocol buffer implementation covering just the fields we need to send, loaded it onto the micro:bit, and interfaced with standard protocol buffer code running on the server computer.
* Learning about different architectures in Tensorflow and fine tuning them.
* Building a hardware project remotely with only one team member having access to the hardware, and getting effective drone footage for the video!
## Accomplishments that we're proud of 🏆
* Having the confidence to try a challenging project using technology we hadn't used before in a hackathon, given that we weren't sure it was going to work.
* Integrating protocol buffers into the micro:bit via a custom library!
* Achieving 89% accuracy on the binary classifier model used to detect standing dead trees.
## What we learned 📚
* Protocol Buffers serialisation format, use of proto files, use of `buf lint` and `protoc`.
The buf CLI was invaluable in ensuring that our proto schema was correct, efficient and designed with backwards compatibility in mind.
* How to collect a good dataset for machine learning and tune hyperparameters to get good results (full pipeline for a real world scenario, when you haven't been given a pre-built dataset).
* Controlling a servo motor with a micro:bit.
## What's next for Pine Protection 🔮
To be scaled up the project would need to be supported by forest owners. The model could easily be retrained to deal with drone photos to ensure that the precision is better than you would get from satellites, then the product would need to be mass-manufactured as a small PCB which could be added onto commercially available hobby drones.
|
losing
|
## Inspiration
We wanted to impress everyone with an amazing project: a revolutionary tool for image identification!
## What it does
It will identify any pictures that are uploaded and describe them.
## How we built it
We built this project with tons of sweat and tears. We used the Google Vision API, Bootstrap, CSS, JavaScript, and HTML.
## Challenges we ran into
We couldn't find a way to use the API key. We couldn't link our HTML files with the stylesheet and the JavaScript file. We didn't know how to add drag-and-drop functionality. We couldn't figure out how to use the API in our backend. We also had to edit the video with a new video editing app and watch a lot of tutorials.
## Accomplishments that we're proud of
The whole program works (backend and frontend). We're glad that we'll be able to make a change to the world!
## What we learned
We learned that Bootstrap 5 doesn't use jQuery anymore (the hard way). :'(
## What's next for Scanspect
The drag-and-drop function for uploading images!
|
## Inspiration
Ever wish you didn’t need to purchase a stylus to handwrite your digital notes? Everyone has, at some point, not had a free hand to reach their keyboard. Whether you are a student learning to type or a parent juggling many tasks, sometimes a keyboard and stylus are not accessible. We believe the future of technology won’t require touching anything at all in order to take notes. HoverTouch utilizes touchless drawings and converts your (finger)written notes to typed text! We also have a text-to-speech function similar to Google's.
## What it does
Using your index finger as a touchless stylus, you can write new words and undo previous strokes, similar to features on popular note-taking apps like Goodnotes and OneNote. As a result, users can eat a slice of pizza or hold another device in hand while achieving their goal. HoverTouch tackles efficiency, convenience, and retention all in one.
## How we built it
Our pre-trained model from MediaPipe works in tandem with an Arduino Nano, flex sensors, and resistors to track your index finger’s drawings. Once complete, you can tap your pinky to your thumb and HoverTouch captures a screenshot of your notes as a JPG. Afterward, the JPG undergoes a masking process where it is converted to a black and white picture: the blue ink (from the user’s pen strokes) becomes black, and all other components of the screenshot, such as the background, become white. With our game-changing Google Cloud Vision API, custom ML model, and Vertex AI Vision, the handwriting is read and converted to text displayed in our web browser application.
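The masking step might look something like the following pure-Python sketch (the blue-dominance test and the `margin` threshold are our assumptions; the real pipeline presumably operates on the JPG with an image library):

```python
def mask_blue_ink(pixels, margin=40):
    """pixels: a 2-D list of (r, g, b) tuples. Returns a 2-D list where
    blue-ink pixels become 0 (black) and everything else becomes 255 (white).
    `margin` (an assumed threshold) controls how strongly blue must dominate
    red and green for a pixel to count as ink."""
    mask = []
    for row in pixels:
        mask.append([0 if (b > r + margin and b > g + margin) else 255
                     for (r, g, b) in row])
    return mask
```

The resulting black-on-white image is the kind of high-contrast input OCR models handle best.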
## Challenges we ran into
Given that this was our first hackathon, we had to make many decisions regarding feasibility of our ideas and researching ways to implement them. In addition, this entire event has been an ongoing learning process where we have felt so many emotions — confusion, frustration, and excitement. This truly tested our grit but we persevered by uplifting one another’s spirits, recognizing our strengths, and helping each other out wherever we could.
One challenge we faced was importing the Google Cloud Vision API. For example, we learned that we were misusing the terminal and our disorganized downloads made it difficult to integrate the software with our backend components. Secondly, while developing the hand tracking system, we struggled with producing functional Python lists. We wanted to make line strokes when the index finger traced thin air, but we eventually transitioned to using dots instead to achieve the same outcome.
## Accomplishments that we're proud of
Ultimately, we are proud to have a working prototype that combines high-level knowledge and a solution with significance to the real world. Imagine how many students, parents, friends, in settings like your home, classroom, and workplace could benefit from HoverTouch's hands free writing technology.
This was the first hackathon for ¾ of our team, so we are thrilled to have undergone a time-bounded competition and all the stages of software development (ideation, designing, prototyping, etc.) toward a final product. We worked with many cutting-edge software and hardware tools despite having zero experience before the hackathon.
In terms of technicals, we were able to develop varying thickness of the pen strokes based on the pressure of the index finger. This means you could write in a calligraphy style and it would be translated from image to text in the same manner.
## What we learned
This past weekend we learned that our **collaborative** efforts led to the best outcomes as our teamwork motivated us to preserve even in the face of adversity. Our continued **curiosity** led to novel ideas and encouraged new ways of thinking given our vastly different skill sets.
## What's next for HoverTouch
In the short term, we would like to develop shape recognition. This is similar to Goodnotes feature where a hand-drawn square or circle automatically corrects to perfection.
In the long term, we want to integrate our software into web-conferencing applications like Zoom. We initially tried to do this using WebRTC, something we were unfamiliar with, but the Zoom SDK had many complexities that were beyond our scope of knowledge and exceeded the amount of time we could spend on this stage.
### [HoverTouch Website](hoverpoggers.tech)
|
## Inspiration
Our team’s mission is to build a tool that alleviates stress on Canadians during the hefty tax season.
With Canadians spending over 7 hours to complete their tax returns and over $5 billion to cover personal income compliance costs, we decided to come up with a solution to help Canadians save time and money. We created TaxEasy as a web application that uses machine learning to generate a tax return file based on your tax slips! With TaxEasy, Canadians don’t need to understand the complications involved with taxes to file their tax returns. All they need to do is upload their tax slips and TaxEasy will do the rest.
While filing taxes only occurs once a year, it is a gruelling task that takes up time and money. We built TaxEasy in hopes of making Canadians’ lives easier so that they can use their saved time to explore their interests and spend time with their loved ones.
## What it does
TaxEasy is a web application that simplifies the process of completing a tax return as it generates a tax return file for Canadians by taking the information given on tax slips.
Using optical character recognition (OCR), TaxEasy recognizes specific categories in the uploaded tax slips and fills out the tax return form accordingly. For instance, when scanning the T4 form, TaxEasy looks for the “Employment Income” box and inserts the given value into the tax return form’s section for Employment Income. This is all done with a simple click of a button. Users only need to upload their tax slips for this process to occur.
## How we built it
We used Microsoft Azure’s Optical Character Recognition (OCR) API for our machine learning implementation. This API was used to train 6 models to recognize the distinct categories present in the following tax slips: T4, T4A, T4A(OAS), T4AP, T1032, and T4E. During the training process, we used supervised learning by creating a labelled training set. We assigned labels based on the information needed on a tax return form. For instance, a tax return form requires an individual’s Employment Income on their T4, so we trained our model to identify where that is on a T4 based on our labels. Moreover, we used Pandas, a Python library, to store the tax return data into a CSV file, which was then used to fill in a blank tax return form. For our front-end, we used HTML, CSS, Bootstrap, and Python Flask to ensure responsiveness and the smooth integration between our front-end and back-end.
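A sketch of the label-to-field mapping step might look like this (the box labels and field names here are hypothetical, and the standard `csv` module stands in for the project's Pandas step):

```python
import csv

# Hypothetical mapping from OCR-detected T4 box labels to tax-return field names.
T4_FIELD_MAP = {
    "Employment Income": "employment_income",
    "Income Tax Deducted": "total_tax_deducted",
    "CPP Contributions": "cpp_contributions",
}

def slip_to_return_row(ocr_results):
    """Map OCR label/value pairs onto tax-return field names, skipping unknown labels."""
    return {T4_FIELD_MAP[label]: value
            for label, value in ocr_results.items()
            if label in T4_FIELD_MAP}

def save_return(rows, path):
    """Write the mapped rows out as a CSV file for the form-filling step."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(T4_FIELD_MAP.values()))
        writer.writeheader()
        writer.writerows(rows)
```

Each supported slip type would carry its own mapping table, trained labels on one side and tax-return line items on the other.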
## Challenges we ran into
The biggest challenge was the learning curve for us. Having never used Python Flask and Microsoft Azure’s APIs, we spent the majority of our first day understanding the basics of each technology. This meant diving deep into YouTube videos and documentation reading. Once we gained an understanding of the technologies, we were ready to start our project! However, we were faced with the challenge of obtaining a dataset of tax slips. To overcome this, we decided to create our own tax slips using the files provided by the CRA. In order to maintain consistency with realistic tax slips, we used our own tax slips as reference. Overall, the challenges we had were overcome with persistence and creativity which were powered by our desire to learn.
## Accomplishments that we're proud of
Starting the project, we were not confident that we could complete it within the timeframe since we were both going out of our comfort zones to learn new concepts. Thus, completing the project is an accomplishment in itself because it demonstrates our passion for learning new things. Moreover, we are proud to have created an application that can have an impact on Canadians. With time being more precious than ever, we’ve enabled Canadians to spend more of that time towards their own wellbeing. Overall, we’re extremely proud that we were able to learn new skills and make an impact.
## What we learned
With no experience with APIs, we learned how to use Microsoft Azure’s OCR and Storage APIs in order to create a machine learning implementation that recognizes the different structures given in tax slips. During this process, we got first-hand experience with supervised learning by having to label our data to increase our model accuracy. Moreover, we learned how to use Python to convert data into a CSV file in order to fill out a blank PDF file. On the front-end, we learned how to use Flask by leveraging its HTTP methods to allow for a smooth integration with our backend.
## What's next for TaxEasy
For the future, we plan to implement a questionnaire feature that will allow users to input information that cannot be gathered from tax slips, such as email and birthdate. Moreover, we want to enhance our machine learning model by training it on a larger set of tax slips. We decided to only train our models over 6 tax slips due to the limited timeframe and the need to deliver a working product.
|
winning
|
## Inspiration
Epidemiology is critical in figuring out how to stop the spread of diseases before it’s too late.
## What it does
ClassiFly uses image data to classify individuals with known disease symptoms. For demonstration purposes, we selected yellow fever, methicillin-resistant Staphylococcus aureus, and elephantiasis.
## How we built it
The app was developed in Swift, and the classification model was trained using a split data classifier method, which leveraged Apple's native CreateMLUI framework to build an image classifier model with 89% accuracy.
## Challenges we ran into
We initially planned on building an autonomous tracking drone that could be used to identify certain key epidemiological characteristics in a medically unsafe and infected region, effectively increasing accessibility to remote areas that are susceptible to infection. However, there was no clear way to interface with the drone via an API, so we decided to build a classification app that lets you take a drone image from a contaminated area and derive certain key epidemiological insights.
## Accomplishments that we're proud of
We are proud that we were able to work together efficiently to build an image classification app.
## What we learned
We learned how we navigate managing development projects as a team, as well as how to leverage really powerful computer vision capabilities with CoreML.
## What's next for ClassiFly
What if we could have an army of medical detectives in the sky, able to reach the most remote populations? Briefly: navigate to remote areas, collect image data of the populace, and use machine learning to classify afflictions based on visible symptoms. This paints a better picture of the disease landscape much faster than any human observation.
|
## Inspiration
Our inspiration for this project was the technological and communication gap between healthcare professionals and patients, restricted access to both one’s own health data and physicians, misdiagnosis due to lack of historical information, as well as rising demand in distance-healthcare due to the lack of physicians in rural areas and increasing patient medical home practices. Time is of the essence in the field of medicine, and we hope to save time, energy, money and empower self-care for both healthcare professionals and patients by automating standard vitals measurement, providing simple data visualization and communication channel.
## What it does
What eVital does is that it gets up-to-date daily data about our vitals from wearable technology and mobile health and sends that data to our family doctors, practitioners or caregivers so that they can monitor our health. eVital also allows for seamless communication and monitoring by allowing doctors to assign tasks and prescriptions and to monitor these through the app.
## How we built it
We built the app on iOS using data from the HealthKit API, which leverages data from the Apple Watch and the Health app. The languages and technologies that we used to create this are MongoDB Atlas, React Native, Node.js, Azure, TensorFlow, and Python (for a bit of machine learning).
## Challenges we ran into
The challenges we ran into are the following:
1) We had difficulty narrowing down the scope of our idea due to constraints like data-privacy laws, and the vast possibilities of the healthcare field.
2) Deploying using Azure
3) Having to use Vanilla React Native installation
## Accomplishments that we're proud of
We are very proud of the fact that we were able to bring our vision to life, even though in hindsight the scope of our project is very large. We are really happy with how much work we were able to complete given the scope and the time that we have. We are also proud that our idea is not only cool but it actually solves a real-life problem that we can work on in the long-term.
## What we learned
We learned how to manage time (or how to do it better next time). We learned a lot about the health care industry and what are the missing gaps in terms of pain points and possible technological intervention. We learned how to improve our cross-functional teamwork, since we are a team of 1 Designer, 1 Product Manager, 1 Back-End developer, 1 Front-End developer, and 1 Machine Learning Specialist.
## What's next for eVital
Our next steps are the following:
1) We want to be able to implement real-time updates for both doctors and patients.
2) We want to be able to integrate machine learning into the app for automated medical alerts.
3) Add more data visualization and data analytics.
4) Adding a functional log-in
5) Adding functionality for different user types aside from doctors and patients (caregivers, parents, etc.)
6) We want to put push notifications for patients' tasks for better monitoring.
|
## Inspiration
Recognizing the disastrous effects of the auto industry on the environment, our team wanted to find a way to help the average consumer mitigate the effects of automobiles on global climate change. We felt that there was an untapped potential to create a tool that helps people visualize cars' eco-friendliness, and also helps them pick a vehicle that is right for them.
## What it does
CarChart is an eco-focused consumer tool designed to allow a consumer to make an informed decision when it comes to purchasing a car, while also measuring the environmental impact that the purchase would incur. With this tool, a customer can make an auto purchase that works both for them and for the environment. The tool allows you to search by any combination of ranges, including year, price, seats, engine power, CO2 emissions, body type, and fuel type. In addition, it provides a nice visualization so that the consumer can compare the pros and cons of two different variables on a graph.
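The range search described above could be sketched as follows (the field names and the inclusive-range convention are our assumptions; the real app queries MySQL):

```python
def matches(car, filters):
    """`filters` maps a field name to an inclusive (low, high) range.
    A car missing a filtered field is excluded."""
    return all(field in car and low <= car[field] <= high
               for field, (low, high) in filters.items())

def search(cars, filters):
    """Return every car whose attributes fall inside all requested ranges."""
    return [car for car in cars if matches(car, filters)]
```

Any combination of ranges composes naturally, since each extra filter just adds one more condition to the conjunction.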
## How we built it
We started out by webscraping to gather and sanitize all of the datapoints needed for our visualization. This scraping was done in Python and we stored our data in a Google Cloud-hosted MySQL database. Our web app is built on the Django web framework, with Javascript and P5.js (along with CSS) powering the graphics. The Django site is also hosted in Google Cloud.
## Challenges we ran into
Collectively, the team ran into many problems throughout the weekend. Finding and scraping data proved to be much more difficult than expected since we could not find an appropriate API for our needs, and it took an extremely long time to correctly sanitize and save all of the data in our database, which also led to problems along the way.
Another large issue that we ran into was getting our App Engine to talk with our own database. Unfortunately, since our database requires a white-listed IP, and we were using Google's App Engine (which does not allow static IPs), we spent a lot of time with the Google Cloud engineers debugging our code.
The last challenge that we ran into was getting our front-end to play nicely with our backend code.
## Accomplishments that we're proud of
We're proud of the fact that we were able to host a comprehensive database on the Google Cloud platform, in spite of the fact that no one in our group had Google Cloud experience. We are also proud of the fact that we were able to accomplish 90+% the goal we set out to do without the use of any APIs.
## What we learned
Our collaboration on this project necessitated a comprehensive review of git and the shared pain of having to integrate many moving parts into the same project. We learned how to utilize Google's App Engine and utilize Google's MySQL server.
## What's next for CarChart
We would like to expand the front-end to have even more functionality. Some of the features that we would like to include are:
* Letting users pick lists of cars that they are interested in and compare them
* Displaying each datapoint with an image of the car
* Adding even more dimensions that the user is allowed to search by
## Check the Project out here!!
<https://pennapps-xx-252216.appspot.com/>
|
partial
|
## Inspiration
We love cooking and watching food videos. From the Great British Baking Show to Instagram reels, we are foodies in every way. However, with the 119 billion pounds of food that is wasted annually in the United States, we wanted to create a simple way to reduce waste and try out new recipes.
## What it does
lettuce enables users to create a food inventory using a mobile receipt scanner. It then alerts users when a product approaches its expiration date and prompts them to notify their network if they possess excess food they won't consume in time. In such cases, lettuce notifies fellow users in their network that they can collect the surplus item. Moreover, lettuce offers recipe exploration and automatically checks your pantry and any other food shared by your network's users before suggesting new purchases.
## How we built it
lettuce uses React and Bootstrap for its frontend and uses Firebase for the database, which stores information on all the different foods users have stored in their pantry. We also use a pre-trained image-to-text neural network that enables users to inventory their food by scanning their grocery receipts. We also developed an algorithm to parse receipt text to extract just the food from the receipt.
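The receipt-parsing step might be sketched like this (the abbreviation table is hypothetical, and real receipts vary by store):

```python
# Hypothetical abbreviation table; a production version would be backed by
# the food databases mentioned below.
ABBREVIATIONS = {
    "ORG BAN": "organic bananas",
    "WHL MLK": "whole milk",
    "CHKN BRST": "chicken breast",
}

def parse_receipt(lines):
    """Keep only receipt lines that map to a known food item,
    ignoring prices, totals, and other non-food lines."""
    foods = []
    for line in lines:
        # Drop trailing numeric tokens like "3.99" before the lookup.
        name = " ".join(tok for tok in line.upper().split()
                        if not tok.replace(".", "", 1).isdigit())
        if name in ABBREVIATIONS:
            foods.append(ABBREVIATIONS[name])
    return foods
```

Lines like `SUBTOTAL` fall through the lookup, which is how non-food rows get filtered out.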
## Challenges we ran into
One big challenge was finding a way to map the receipt food text to the actual food item. Receipts often use annoying abbreviations for food, and we had to find databases that allow us to map the receipt item to the food item.
## Accomplishments that we're proud of
lettuce has a lot of work ahead of it, but we are proud of our idea and teamwork to create an initial prototype of an app that may contribute to something meaningful to us and the world at large.
## What we learned
We learned that there are many things to account for when it comes to sustainability, as we must balance accessibility and convenience with efficiency and efficacy. Not having food waste would be great, but it's not easy to finish everything in your pantry, and we hope that our app can help find a balance between the two.
## What's next for lettuce
We hope to improve our recipe suggestion algorithm as well as the estimates for when food expires. For example, a green banana will have a different expiration date compared to a ripe banana, and our scanner has a universal deadline for all bananas.
|
## Inspiration
In the work from home era, many are missing the social aspect of in-person work. And what time of the workday most provided that social interaction? The lunch break. culina aims to bring back the social aspect to work from home lunches. Furthermore, it helps users reduce their food waste by encouraging the use of food that could otherwise be discarded and diversifying their palette by exposing them to international cuisine (that uses food they already have on hand)!
## What it does
First, users input the groceries they have on hand. When another user is found with a similar pantry, the two are matched up and displayed a list of healthy, quick recipes that make use of their mutual ingredients. Then, they can use our built-in chat feature to choose a recipe and coordinate the means by which they want to remotely enjoy their meal together.
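One plausible way to score "similar pantries" is Jaccard similarity over ingredient sets — the actual matching logic isn't described, so this sketch (including the threshold) is purely illustrative:

```python
def pantry_similarity(a, b):
    """Jaccard similarity of two ingredient sets: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def best_match(user_pantry, candidates, threshold=0.3):
    """Return the candidate user id with the most similar pantry,
    or None if no candidate clears the (assumed) threshold."""
    scored = [(pantry_similarity(user_pantry, pantry), uid)
              for uid, pantry in candidates.items()]
    score, uid = max(scored, default=(0.0, None))
    return uid if score >= threshold else None
```

The mutual ingredients (`a & b`) are also exactly the set the recipe search would be run against.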
## How we built it
The frontend was built using React.js, with all CSS styling, icons, and animation made entirely by us. The backend is a Flask server. Both a RESTful API (for user creation) and WebSockets (for matching and chatting) are used to communicate between the client and server. Users are stored in MongoDB. The full app is hosted on a Google App Engine flex instance and our database is hosted on MongoDB Atlas also through Google Cloud. We created our own recipe dataset by filtering and cleaning an existing one using Pandas, as well as scraping the image URLs that correspond to each recipe.
## Challenges we ran into
We found it challenging to implement the matching system, especially coordinating client state using WebSockets. It was also difficult to scrape a set of images for the dataset. Some of our team members also ran into technical roadblocks on their machines, so they had to think outside the box for solutions.
## Accomplishments that we're proud of
We are proud to have a working demo of such a complex application with many moving parts – and one that has impacts across many areas. We are also particularly proud of the design and branding of our project (the landing page is gorgeous 😍 props to David!). Furthermore, we are proud of the novel dataset that we created for our application.
## What we learned
Each member of the team was exposed to new things throughout the development of culina. Yu Lu was very unfamiliar with anything web-dev related, so this hack allowed her to learn some basics of frontend, as well as explore image crawling techniques. For Camilla and David, React was a new skill for them to learn and this hackathon improved their styling techniques using CSS. David also learned more about how to make beautiful animations. Josh had never implemented a chat feature before, and gained experience teaching web development and managing full-stack application development with multiple collaborators.
## What's next for culina
Future plans for the website include adding a video chat component so users don't need to leave our platform. To revolutionize the dating world, we would also like to allow users to decide if they are interested in using culina as a virtual dating app to find love while cooking. We would also be interested in implementing organization-level management to make it easier for companies to provide this as a service to their employees only. Lastly, the ability to decline a match would be a nice quality-of-life addition.
|
## Inspiration
1 in 8 women will be diagnosed with breast cancer, but 90% who are diagnosed early survive over 5 years. However, breast cancer victims often realize they have breast cancer when it is too late, costing them thousands. About 40% of diagnosed breast cancers are self-detected in its early stages with a routine self-exam, and Norma will help add to that number.
## What it does
Norma is an interactive chat bot aimed at helping to detect breast cancer at an early stage through guidance and education. Norma guides users through a self breast examination, allows users to keep track of their progression with a journal, and provides contact information to a doctor.
## How we built it
* Designed using Sketch
* Used Houndify to create the chat bot
* Used Google Firebase to store journal logs
## Challenges we ran into
* Understanding API Documentation
* Storyboard merge issues
## Accomplishments that we're proud of
* Design
* User experience
* Color palette
## What we learned
* Learned how to design with Sketch
* Work as a team
* Learned more about auto-layout and layers
## What's next for Norma
* Continuing development after Treehacks
* Incorporate real doctors
* Integrating patient data with medical records
|
winning
|
## Inspiration
There are two types of pets wandering unsupervised in the streets - ones that are lost and ones that don't have a home to go to. Pet's Palace portable mini-shelters services these animals and connects them to necessary services while leveraging the power of IoT.
## What it does
The units are placed in streets and alleyways. As an animal approaches the unit, an ultrasonic sensor triggers the door to open and dispenses a pellet of food. Once inside, a live stream of the interior of the unit is sent to local animal shelters which they can then analyze and dispatch representatives accordingly. Backend analysis of the footage provides information on breed identification, custom trained data, and uploads an image to a lost and found database. A chatbot is implemented between the unit and a responder at the animal shelter.
## How we built it
Several Arduino microcontrollers distribute the hardware tasks within the unit with the aid of a wifi chip coded in Python. IBM Watson powers machine learning analysis of the video content generated in the interior of the unit. The adoption agency views the live stream and related data from a web interface coded with Javascript.
## Challenges we ran into
Integrating the various technologies/endpoints with one Firebase backend.
## Accomplishments that we're proud of
A fully functional prototype!
|
## Inspiration
After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness and we wanted to create a project that would encourage others to take better care of their plants.
## What it does
Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players’ plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants.
## How we built it
### Back-end:
The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for WebSockets so that we could support multiplayer; this tripped us up for hours and hours until we finally got it working. Aside from this, we have an Arduino reading the moisture of the soil and the brightness of the surroundings, as well as taking a picture of the plant, where we leveraged computer vision to identify what the plant is. Finally, using LangChain, we developed an agent to relay all of the Arduino info to the front end and manage state, and for storage we used MongoDB to hold all of the data needed.
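The sensor readings could feed a plant-strength score along these lines (the weighting and ideal ranges are entirely our assumptions, not the project's actual formula):

```python
def plant_health(moisture, light):
    """Combine normalized sensor readings (each 0-1) into a 0-100 health score.
    Assumptions: ideal moisture sits mid-range, and extra light helps only up
    to a saturation point."""
    moisture_score = 1.0 - abs(moisture - 0.5) * 2   # best at 0.5, worst at 0 or 1
    light_score = min(light / 0.8, 1.0)              # saturates at a reading of 0.8
    return round(100 * (0.6 * moisture_score + 0.4 * light_score), 1)
```

A score like this is what would let "stronger plants have stronger abilities" in the battle system.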
### Front-end:
The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old pokémon games, which we thought might evoke nostalgia for many players.
## Challenges we ran into
We had a lot of difficulty setting up Socket.IO and connecting it and the API to the front end and the database.
## Accomplishments that we're proud of
We are incredibly proud of integrating our web sockets between frontend and backend and using arduino data from the sensors.
## What's next for Poképlants
* Since the game was designed with a multiplayer experience in mind, we want to have more social capabilities by creating a friends list and leaderboard
* Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help
* Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
|
## Inspiration
For any hospital, the key to running smoothly is following its set, orderly system. But, as too many patients know, there are times when a should-be orderly hospital is thrown into chaos. Patients often need to ask medical staff several times for their medical documents and require assistance getting them in order, which often results in hospital stays hours longer than necessary just to complete the paperwork.
The burden of answering patients' and their families' requests often falls to the nurses, an essential but often overlooked role in the hospital workflow. Interruption of the nursing staff is an issue that is not given priority, though it can have disastrous consequences for the running of the hospital.
Interruptions have adverse effects on a nurse's memory and take focus away from the current task. Frequent interruptions tax a nurse's cognitive load, raising the risk of human error in critical medical procedures.
Many communication interruptions could be mitigated through more accessible information systems between patients and their medical staff, which shows the importance of managing and reducing non-urgent communication as much as possible.
@sclepius seeks to bridge the communication barrier between medical professionals, patients, and their families by streamlining the transfer of medical information securely and in a way that gives patients 100% control over who has access to their medical information and how much.
## What it does
@sclepius is a medical application to be used by both patients and doctors that allows for the transfer of vital data between the two. The app keeps track of and organizes all of your medical documents that a doctor or medical professional may need. The medical professional can then send recent medical test results and forms such as prescriptions or clinical notes directly to your app for your convenience. If authorized by the patient, then select information can also be sent to family accounts to view.
## How we built it
Our application is a Flutter app coded in Dart. The Flutter UI allows the user to access the data in their profile, while the @protocol is used to request access to, transfer, and receive documents and medical information. The communication flow is summarized in the image in the header.
## Challenges we ran into
* We had to switch projects midway through; initially, we wanted to create an open-source drug development platform.
* Setting up the environment for Flutter, alongside an Android phone emulator to test the app.
* Being pioneers in working with the @protocol API.
## Accomplishments that we're proud of
* Managed to put the product together in the short span of the hackathon
* We were able to create an outstanding project despite the limitations of our meetings
* We are proud of adapting to so many unfamiliar technologies (see the challenges section) in such a short period of time.
## What we learned
To properly utilize the @protocol API, we also needed to learn to code flutter apps using the Dart programming language. Given how new the technology of the @protocol API we all needed to learn it on the fly. Finally, once we conceptualized the idea, we needed to do in-depth research into hospital-patient communication issues to pinpoint where our app could help solve the communication gaps.
## What's next for @sclepius
* More granularity for access levels of medical information
* Adding features to ease the input of information, such as scanning paper documents
* Testing the functionality and usage of the app in a real hospital setting
|
winning
|
# Babble: PennAppsXVIII
PennApps Project Fall 2018
## Babble: Offline, Self-Propagating Messaging for Low-Connectivity Areas
Babble is the world's first and only chat platform that can be installed, set up, and used 100% offline. This platform has a wide variety of use cases, such as communities with limited internet access like North Korea, Cuba, and Somalia. Additionally, it can maintain communications in disaster situations where internet infrastructure is damaged or sabotaged, e.g., warzones and natural disasters.
### Demo Video
See our project in action here: <http://bit.ly/BabbleDemo>
[](http://www.youtube.com/watch?v=M5dz9_pf2pU)
## Offline Install & Setup
Babble (a zipped APK) is able to be sent from one user to another via Android Beam. From there it is able to be installed. This allows any user to install the app just by tapping their phone to that of another user. This can be done 100% offline.
## Offline Send
All Babble users connect to all nearby devices via a localized mesh network created using the Android Nearby Connections API. This allows messages to be sent directly from device to device over m-to-n peer-to-peer connections, as well as daisy-chained from peer to peer to ... to peer.
Each Babble user's device keeps a localized ledger of all messages that it has sent and received, as well as an amalgamation of all of the ledgers of every device that this instance of Babble has been connected directly to via Android Nearby.
The combination of the Android Nearby Connections API with this decentralized, distributed ledger allows for messages to propagate across mesh networks and move between isolated networks as users leave one mesh network and join another.
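The ledger amalgamation above can be sketched in a few lines. This is a minimal illustration of the merge idea, assuming messages are keyed by a unique ID (the schema is our assumption, not Babble's actual format):

```python
def merge_ledgers(local, peer):
    """Union two ledgers keyed by unique message ID.

    Keying by ID makes the merge idempotent: re-syncing with the
    same peer (or re-joining the same mesh) never duplicates a
    message, which is what lets messages propagate safely as users
    hop between isolated networks.
    """
    merged = dict(local)
    for msg_id, msg in peer.items():
        merged.setdefault(msg_id, msg)
    return merged


# Example: two devices meet via Android Nearby and exchange ledgers.
ledger_a = {"m1": {"from": "alice", "text": "hi"}}
ledger_b = {"m2": {"from": "bob", "text": "hello"}}
ledger_a = merge_ledgers(ledger_a, ledger_b)
print(sorted(ledger_a))  # → ['m1', 'm2']
```

The same function covers the cloud-sync case: the global ledger is just one more peer to merge with.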
## Cloud Sync when Online
Whenever an instance of Babble gains internet access, it uploads a copy of its ledger to a MongoDB Atlas cluster running on Google Cloud. There the local ledger is amalgamated with the global ledger, which contains all messages sent worldwide. From there the local copy of the ledger is updated from the global copy to contain messages for nearby users.
## Use Cases
### Internet Infrastructure Failure: Natural Disaster
Imagine a natural disaster situation where large-scale internet infrastructure is destroyed or otherwise not working correctly. Even a small number of existing users could distribute the app to all those affected by the outage, allowing them to communicate with loved ones and emergency services. Additionally, this would provide a platform by which emergency services could issue public alerts to the entire mesh network.
### Untraceable and Unrestrictable Communication in North Korea
One of the future directions we would like to take would be an Ethereum-esque blockchain-based ledger. This would allow for 100% secure, private, and untraceable messaging. Additionally, the Android Nearby Connections API is able to communicate between devices via cellular network, Wi-Fi, Bluetooth, NFC, and ultrasound, which makes our messages relatively immune to jamming. With the mesh network, it would be difficult to block messaging on a large scale.
As a result of this feature set, Babble would be a perfect app for open, uncensored, and otherwise unrestricted communication inside a country with heavily restricted internet access like North Korea.
### Allowing Cubans to Communicate with Family and Friends in the US
Take a use case of a Cuba-wide rollout. There would be a limited number of users in large cities like Havana or Santiago de Cuba with regular internet access, as well as a number of users distributed across the country with occasional internet access. Through both offline send and cloud sync, 100% offline users in Cuba would be able to communicate with family stateside.
## Future Goals and Directions
Our future goals are to build better stability and more features, such as image and file sharing, emergency messaging, integration with emergency services and the 911 decision tree, end-to-end encryption, better ledger management, and conversion of the ledger to an Ethereum-esque anonymized blockchain to allow for 100% secure, private, and untraceable messaging.
Ultimately, the most insane use of our platform would be as a method for rolling out low bandwidth internet to the offline world.
Name creds go to Chris Choi
|
**In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.**
## Inspiration
Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and further contemplated how we could create a phone application that would directly help with disaster relief.
## What it does
**Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include but are not limited to shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter so that this need is discoverable by any volunteers able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer’s home to receive shelter.
## How we built it
We used Android Studio to build the Android app. We deployed an Azure server to handle our backend (Python). We used the Google Maps API in our app. We are currently working on using Twilio for communication and the IBM Watson API to prioritize help requests in a community.
## Challenges we ran into
Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including a blood donation as one of the resources would require some correspondence with an organization such as the Red Cross in order to ensure the donation would be legal. Thus, we decided to add a blood donation to our future aspirations for this project due to the time constraint of the hackathon.
## Accomplishments that we're proud of
We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java (and Google Maps' API) during the past 24 hours. We had huge aspirations, and we ultimately created an app that can potentially save people's lives.
## What we learned
We learned how to integrate the Google Maps API into our app. We learned how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs.
## What's next for Stronger Together
We have high hopes for the future of this app. The goal is to add an AI based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of the disaster by alerting volunteers and locals in advance. We also may include some more resources such as blood donations.
|
## Inspiration
We believe that every person should be able to experience an urban lifestyle to their heart's content without worrying for their safety, so we built ListenHere to empower this vision.
## What It Does
ListenHere is a mobile-compatible web application that listens for intervals of sounds and predicts, based on a machine learning model, what sounds are being heard. This way, the user can in real time identify important sounds for **safety** (car horns, gunshot), **ambiance** (construction, street music), and **engagement** (dog bark, kids playing).
## How We Built It
ListenHere is based on a **Support Vector Machine (SVM)** model trained on an open-source urban sounds dataset. We started by conducting feature extraction on our dataset. To understand the critical features of our audio waveform input, we used the Librosa library for Python to compute **Mel-Frequency Cepstral Coefficients (MFCCs)**. This is essentially looking at the "spectrum of the spectrum" of the audio, which yields the relevant features for classifying sounds. Then, we normalized our data and conducted **Principal Component Analysis (PCA)** to reduce our feature space to only the most distinguishable dimensions; this is effectively how we strip background noise from our audio data. Finally, we use a support vector machine to classify files by sound.
After training our model, we built our backend in Python's flask library, as well as our front-end in JS. The flask instance accesses the trained model as well as pre-fit standardization measures for the scaling and PCA to prepare each audio file for classification.
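The scale → PCA → SVM pipeline described above can be sketched with scikit-learn. This is an illustrative sketch, not the project's actual training code: real feature vectors would come from `librosa.feature.mfcc` on recorded clips, so we substitute synthetic 40-dimensional "MFCC-style" vectors to keep the example self-contained.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two fake sound classes, 50 clips each, 40 coefficients per clip.
X = np.vstack([rng.normal(0, 1, (50, 40)), rng.normal(3, 1, (50, 40))])
y = np.array([0] * 50 + [1] * 50)

# Normalize, reduce the feature space, then classify with an SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
model.fit(X, y)
print(model.score(X, y))  # training accuracy on the synthetic data
```

In the deployed app, the Flask endpoint would run the same fitted pipeline (including the pre-fit scaler and PCA) on each incoming audio interval.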
## Challenges We Ran Into
We worked on several different classification approaches before settling on SVMs. It was tough to implement each of these because they all relied on different types of input data and so not only did we have to write different models but we went through different means of feature extraction. Moreover, we did not have much experience working with front-end and thus went through various attempts ranging from iOS development to React before finding a system that worked well with our concept and existing back-end in Flask.
## Accomplishments That We're Proud Of
**96.4% accuracy** on validation set. Our cutting-edge artificial intelligence is what makes our platform viable.
## What We Learned
We learned a lot about manipulating audio data and how sounds can be distinguished from one another using properties of the waveform. Also, we learned how principal component analysis can be used to eliminate background noise and increase the efficiency of our model. Finally, we developed some front-end chops and learned how to take AI models outside of a scientific context and apply them with a user interface.
## What's Next For ListenHere
Eventually, we'd like to train our model with more types of sounds, but first, we want to build a WatchOS application that uses our model to serve as an even more convenient urban experience platform. This would focus specifically on the safety aspects of notifying users when certain sounds are identified, such as a siren, horn, or gunshot. We've developed our back-end with this in mind so that any adjustments to the model and user interface can be incorporated into the platform in a streamlined fashion.
|
winning
|
## Inspiration
Fall lab and design bay cleanout leads to some pretty interesting things being put out at the free tables. In this case, we were drawn in by a motorized Audi Spyder car. And then, we saw the Neurosity Crown headsets, and an idea was born. A single late night call among team members, excited about the possibility of using a kiddy car for something bigger was all it took. Why can't we learn about cool tech and have fun while we're at it?
Spyder is a way we can control cars with our minds. Use cases include remote rescue, accessibility for non-able-bodied individuals, warehouse operations, and being extremely cool.
## What it does
Spyder uses the Neurosity Crown to take the brainwaves of an individual, train an AI model to detect and identify certain brainwave patterns, and output them as a recognizable output to humans. It's a dry brain-computer interface (BCI), which means electrodes are placed against the scalp to read the brain's electrical activity. Taking advantage of this non-invasive method of reading electrical impulses allows for greater accessibility to neural technology.
Collecting these impulses, we are then able to forward these commands to our Viam interface. Viam is a software platform that allows you to easily put together smart machines and robotic projects. It completely changed the way we coded this hackathon. We used it to integrate every single piece of hardware on the car. More about this below! :)
## How we built it
### Mechanical
The manual steering had to be converted to automatic. We did this in SolidWorks by creating a custom 3D printed rack and pinion steering mechanism with a motor mount that was mounted to the existing steering bracket. Custom gear sizing was used for the rack and pinion due to load-bearing constraints. This allows us to command it with a DC motor via Viam and turn the wheel of the car, while maintaining the aesthetics of the steering wheel.
### Hardware
A 12V battery is connected to a custom soldered power distribution board. This powers the car, the boards, and the steering motor. For the DC motors, they are connected to a Cytron motor controller that supplies 10A to both the drive and steering motors via pulse-width modulation (PWM).
A custom LED controller and buck-converter PCB stepped the voltage down from 12V to 5V for the LED underglow lights and the Raspberry Pi 4. The Raspberry Pi 4 runs the Viam SDK (which controls all peripherals) and connects to the Neurosity Crown software to control the motors. All the wiring is custom soldered, and many parts are custom-made to fit our needs.
### Software
Viam was an integral part of our software development and hardware bring-up. It significantly reduced the amount of code, testing, and general pain we'd normally go through creating smart-machine or robotics projects. Viam was instrumental in debugging and testing to see whether our system was even viable and in quickly checking for bugs. The ability to test features without writing drivers or custom code saved us a lot of time. An exciting feature was how we could take code from Viam and merge it with a Go backend, which is normally very difficult to do. Being able to integrate with Go was very cool - usually you have to use Python (Flask + SDK). With Go, we get extra backend benefits without the headache of integration!
Additional software that we used was python for the keyboard control client, testing, and validation of mechanical and electrical hardware. We also used JavaScript and node to access the Neurosity Crown, Neurosity SDK and Kinesis API to grab trained AI signals from the console. We then used websockets to port them over to the Raspberry Pi to be used in driving the car.
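The bridge from trained brain signals to drive commands can be illustrated as follows. The label names, probability threshold, and command format here are our assumptions for illustration only, not the actual Neurosity Kinesis or Viam APIs:

```python
# Map a trained-signal label from the Crown to motor power levels
# for the Pi. Anything below the confidence threshold, or any
# unknown label, fails safe to a full stop.
COMMANDS = {
    "push": {"drive": 1.0, "steer": 0.0},    # think "push" -> forward
    "left": {"drive": 0.5, "steer": -1.0},
    "right": {"drive": 0.5, "steer": 1.0},
}

STOP = {"drive": 0.0, "steer": 0.0}


def to_command(prediction):
    """Turn a {label, probability} prediction into a motor command."""
    if prediction["probability"] < 0.8:  # ignore low-confidence reads
        return dict(STOP)
    return COMMANDS.get(prediction["label"], dict(STOP))


print(to_command({"label": "left", "probability": 0.93}))
```

In the real system a command like this would be forwarded over the websocket to the Pi, which applies it through Viam's motor components as PWM duty cycles.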
## Challenges we ran into
Using the Neurosity Crown was the most challenging. Training the AI model to recognize a user's brainwaves and associate them with actions didn't always work. In addition, grabbing this data for more than one action per session was not possible which made controlling the car difficult as we couldn't fully realise our dream.
Additionally, it only caught fire once - which we consider to be a personal best. If anything, we created the world's fastest smoke machine.
## Accomplishments that we're proud of
We are proud of being able to complete a full mechatronics system within our 32 hours. We iterated through the engineering design process several times, pivoting multiple times to best suit our hardware availabilities and quickly making decisions to make sure we'd finish everything on time. It's a technically challenging project - diving into learning about neurotechnology and combining it with a new platform - Viam, to create something fun and useful.
## What we learned
Cars are really cool! Turns out we can do more than we thought with a simple kid car.
Viam is really cool! We learned through their workshop that we can easily attach peripherals to boards, use and train computer vision models, and even use SLAM! We spend so much time in class writing drivers, interfaces, and code for peripherals in robotics projects, but Viam has it covered. We were really excited to have had the chance to try it out!
Neurotech is really cool! Being able to try out technology that normally isn’t available or difficult to acquire and learn something completely new was a great experience.
## What's next for Spyder
* Backflipping car + wheelies
* Fully integrating the Viam CV for human safety concerning reaction time
* Integrating Adhawk glasses and other sensors to help determine user focus and control
|
## Inspiration
Ethiscan was inspired by a fellow member of our Computer Science club here at Chapman who was looking for a way to drive social change and promote ethical consumerism.
## What it does
Ethiscan reads a barcode from a product and looks up the manufacturer and information about the company to provide consumers with information about the product they are buying and how the company impacts the environment and society as a whole. The information includes the parent company of the product, general information about the parent company, articles related to the company, and an Ethics Score between 0 and 100 giving a general idea of the nature of the company. This Ethics Score is created by using Sentiment Analysis on Web Scraped news articles, social media posts, and general information relating to the ethical nature of the company.
## How we built it
Our program has two parts. We built an Android application using Android Studio which takes images of a barcode on a product and sends them to our server. Our server processes the UPC (Universal Product Code) unique to each barcode and uses a sentiment-analysis neural network and web scraping to populate the Android client with relevant information about the product's parent company and its ethical record.
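One plausible way to collapse per-article sentiment into the 0-100 Ethics Score described above is a simple mean rescaled from the usual [-1, 1] polarity range. The averaging and scaling here are our assumptions; the project's exact formula isn't given:

```python
def ethics_score(sentiments):
    """Map sentiment polarities in [-1, 1] to a 0-100 Ethics Score.

    -1 (uniformly negative coverage) maps to 0, +1 to 100, and an
    empty result set falls back to a neutral 50 rather than
    penalizing companies with no news coverage.
    """
    if not sentiments:
        return 50
    mean = sum(sentiments) / len(sentiments)
    return round((mean + 1) * 50)


# Scores from three scraped articles about one parent company.
print(ethics_score([0.8, 0.2, -0.1]))  # → 65
```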
## Challenges we ran into
Android apps are significantly harder to develop than expected, especially when nobody on your team has any experience. Alongside this we ran into significant issues finding databases of product codes, parent/subsidiary relations, and relevant sentiment data.
The Android App development process was significantly more challenging than we anticipated. It took a lot of time and effort to create functioning parts of our application. Along with that, web scraping and sentiment analysis are precise and diligent tasks to accomplish. Given the time restraint, the accuracy of the Ethics Score is not as accurate as possible. Finally, not all barcodes will return accurate results simply due to the lack of relevant information online about the ethical actions of companies related to products.
## Accomplishments that we're proud of
We managed to load the computer vision into our original android app to read barcodes on a Pixel 6, proving we had a successful proof of concept app. While our scope was ambitious, we were able to successfully show that the server-side sentiment analysis and web scraping was a legitimate approach to solving our problem, as we've completed the production of a REST API which receives a barcode UPC and returns relevant information about the company of the product. We're also proud of how we were able to quickly turn around and change out full development stack in a few hours.
## What we learned
We have learned a great deal about the fullstack development process. There is a lot of work that needs to go into making a working Android application as well as a full REST API to deliver information from the server side. These are extremely valuable skills that can surely be put to use in the future.
## What's next for Ethiscan
We hope to transition from the web service to a full android app and possibly iOS app as well. We also hope to vastly improve the way we lookup companies and gather consumer scores alongside how we present the information.
|
## Inspiration
We were inspired by popular language-learning platforms like Duolingo and wanted to explore how an unconventional form of negative reinforcement could impact the learning process. The idea of introducing mild physical stimuli, such as a shock, stemmed from a curiosity about how different motivators can influence human behavior and retention.
## What it does
This app takes the standard question-and-answer format of language learning to the next level by incorporating a shock mechanism. If a user answers a question incorrectly, they receive a small shock, creating an intense reinforcement mechanism intended to encourage greater focus and memory retention.
## How we built it
We used React and Next.js for the front end. The hardware component involves a dog shock collar that delivers the shock, controlled via a Flipper Zero that transmits the signal from the app. We utilized the OpenAI API to generate questions, options, and answers, and used a simple API to communicate between the learning platform and the wristband, synchronizing the user's responses with the feedback.
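The answer-to-feedback loop can be sketched as below. The function name and result format are illustrative assumptions, not the app's real interface to the Flipper Zero:

```python
def grade(answer, correct):
    """Grade one quiz response and decide whether to fire the shock.

    Comparison is case- and whitespace-insensitive so a learner
    isn't shocked for typing "perro" instead of "Perro".
    """
    if answer.strip().lower() == correct.strip().lower():
        return {"shock": False, "message": "Correct!"}
    return {"shock": True, "message": f"Incorrect - the answer was {correct}."}


print(grade("perro", "Perro"))  # → {'shock': False, 'message': 'Correct!'}
```

A result with `"shock": True` would be the trigger the app sends over its API for the Flipper Zero to relay to the collar.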
## Challenges we ran into
One of the main challenges was ensuring the shock intensity was both safe and effective. Finding a balance between an attention-getting stimulus and something that was not overly painful was tricky.
## Accomplishments that we're proud of
We're proud of creating a fully functional prototype that integrates hardware with software, something that required interdisciplinary work between software engineering and hardware design. We are also proud of our front end with 3D rendering.
## What we learned
We learned a lot about the psychology of learning, specifically how negative reinforcement can be both a motivator and a deterrent. On the technical side, we improved our skills in hardware-software integration, ensuring user safety when dealing with physical stimuli.
## What's next for Brainstorm
For the future, we plan to conduct research studies to better understand the effects of this learning method. We also want to refine the hardware to make it more comfortable and potentially less intimidating. Additionally, we are exploring other forms of feedback, to provide users with a variety of reinforcement options.
|
winning
|
## Inspiration
My cousin recently had children, and as they enter their toddler years, I saw how they use their toys. They buy them, use them for a couple weeks, but then get bored. To be honest, however, I can't blame them. Currently, toys are used in one way, with no real interaction coming from both the child and the toy. What I really wanted to do was bring Toy Story to real life, and allow children to talk and learn from their toys, maximizing their happiness and education.
## What it does
We use Hume's API to let you talk with your toys and have full-blown conversations with them. We utilize prompt engineering to embed math lessons within choose-your-own-adventure stories.
## How we built it
We embedded a Raspberry Pi, speaker, and microphone in the animal, which hosts a web app through which it can speak.
## Challenges we ran into
The Hume API was down at times, which was tough to navigate and halted our ability to make a self-improving prompt (tracking the progress of kids' lessons). We also had a broken Raspberry Pi for the first 12 hours of the hackathon, and Hyperbolic randomly stopped working for us, so our interactive story pictures stopped working.
## Accomplishments that we're proud of
Learning about how to prompt image generation (feeding the text transcription into Hyperbolic)
## What we learned
Image generation, prompt engineering, function calling
## What's next for Teddy.AI
Memory and lesson progress reports
|
## Inspiration
Everyone on our team comes from a family of newcomers and just as it is difficult to come into a new country, we had to adapt very quickly to the Canadian system. Our team took this challenge as an opportunity to create something that our communities could deeply benefit from when they arrive in Canada. A product that adapts to them, instead of the other way around. With some insight from our parents, we were inspired to create this product that would help newcomers to Canada, Indigenous peoples, and modest income families. Wealthguide will be a helping hand for many people and for the future.
## What it does
A finance program portal that provides interactive and accessible financial literacy lessons to customers in marginalized communities, improving their financial intelligence and discipline and, overall, the Canadian economy 🪙. Along with daily tips, users have access to brief video explanations of each tip, with the ability to view them in multiple languages with subtitles. There are short, quick, easy plans to inform users with limited knowledge of the Canadian financial system or of existing programs for marginalized communities. Marginalized groups can earn benefits from the program by completing plans and attempting short quiz assessments. Users can earn reward points ✨ that can be converted to ca$h credits for more support with their financial needs!
## How we built it
The front end was built using React Native, an open-source UI software framework in combination with Expo to run the app on our mobile devices and present our demo. The programs were written in JavaScript to create the UI/UX interface/dynamics and CSS3 to style and customize the aesthetics. Figma, Canva and Notion were tools used in the ideation stages to create graphics, record brainstorms and document content.
## Challenges we ran into
Designing and developing a product that can simplify the large topics under financial literacy, tools, and benefits for users and customers while making such information easy to digest and understand | We ran into the challenge of installing npm packages and libraries on our operating systems. However, with a lot of research and dedication, we as a team resolved the ‘Execution Policy’ error that prevented Expo from being installed on Windows OS | Trying to use the Modal function to enable pop-ups on the screen. There were YouTube videos about it online, but they were very difficult to follow, especially for a beginner | Small errors and merge errors prevented the app from running properly, which delayed our demo completion.
## Accomplishments that we're proud of
**Kemi** 😆 I am proud to have successfully implemented new UI/UX elements such as expandable and collapsible content and vertical and horizontal scrolling. **Tireni** 😎 One accomplishment I’m proud of is that despite being new to React Native, I was able to learn enough about it to make one of the pages on our app. **Ayesha** 😁 I used Figma to design some graphics of the product bringing the aesthetic to life!
## What we learned
**Kemi** 😆 I learned the importance of financial literacy and responsibility and that FinTech is a powerful tool that can help improve financial struggles people may face, especially those in marginalized communities. **Tireni** 😎 I learned how to resolve the ‘Execution Policy’ error that prevented Expo from being installed in VS Code. **Ayesha** 😁 I learned how to use tools in Figma and applied them in the development of the UI/UX interface.
## What's next for Wealthguide
Newsletter Subscription 📰: Up to date information on current and today’s finance news. Opportunity for Wealthsimple product promotion as well as partnering with Wealthsimple companies, sponsors and organizations. Wealthsimple Channels & Tutorials 🎥: Knowledge is key. Learn more and have access to guided tutorials on how to properly file taxes, obtain a credit card with benefits, open up savings account, apply for mortgages, learn how to budget and more. Finance Calendar 📆: Get updates on programs, benefits, loans and new stocks including when they open during the year and the application deadlines. E.g OSAP Applications.
|
## Inspiration
Having Stanford CS teachers like Chris Piech has truly changed the way we learn about CS and AI. Yet the new wave of interest in AI sparked by innovations like ChatGPT has arrived without the technical infrastructure and visualizations to help beginner coders understand algorithms. Rohan and I both personally struggled with understanding a specific algorithm, breadth-first search, at Stanford and sought a way to explain it better to ourselves and others.
## What it does
Breadth-First-Learn is a simple visual interface for beginning coders of all ages to learn specifically about the Breadth First Search algorithm through creating their own personal node structures and visual graphs.
## How we built it
Using a combination of front-end CSS and HTML with JavaScript on the backend, we built a visual system that executes the breadth-first search algorithm.
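The project's JavaScript backend steps through the classic queue-based traversal that the site visualizes. A minimal Python sketch of that algorithm (function and variable names are ours, not the project's):

```python
from collections import deque

def bfs_order(graph, start):
    """Return the order in which breadth-first search visits nodes,
    exploring all neighbours at the current depth before going deeper."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()        # FIFO queue is what makes it breadth-first
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order
```

A visualizer like Breadth-First-Learn essentially animates each iteration of this loop, highlighting the node just popped from the queue.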
## Challenges we ran into
As we brainstormed this challenge, we struggled to figure out the best way to implement this project. Would users benefit most from a mobile application or a Chrome extension? We ultimately decided that an easy-to-use website that teachers could use as a demo and students could bookmark would be the most useful.
## Accomplishments that we're proud of
We are very proud of not only the process we took to gear our website to beginner user ease but also our end product. We incorporated a series of colorful visuals and took the time to learn new CSS stylings so that the algorithm would be clearly shown.
## What we learned
We learned not only a lot about our partnership as a team but also the hidden complexity of building simple systems and visuals. It would have been easy to create a janky platform to "show" the algorithm, yet instead we took the time to build a polished visualization that actually teaches it.
## What's next for Breadth-First-Learn!
Breadth-First-Learn is our first step in learning how to best visualize CS algorithms to create engaging experiences for beginner coders. We hope to incorporate similar codes and systems to visualize new and different algorithms in the future.
|
## Inspiration
Let's say you're an employee heading home after work. It's a hard day's night, and you really need someone to talk to, or someone to share the joy of radio music with. You consider becoming a driver on a ride-sharing platform, but you don't have the time to leave your route and pick people up somewhere else. You wonder if you could simply grab some pleasant people to talk to in the car and earn yourself a free cup of coffee. You wonder what app can help, and you find REBU.
Sometimes you're in a hurry to get to the other side of the city, but there is a terrible traffic jam. You don't want to travel on the crowded and sweaty bus, and it's too expensive to pay for a ride-sharing platform. You might be nervous about a presentation in the next meeting, or you need a creative idea. Therefore, you might want to talk to people or spend time thinking in silence. Without a doubt, you open REBU, select a driver that fits your needs, walk to their place, and have a great transportation experience.
## What it does
REBU is a driver-focused transportation platform that lets travelers profit by giving others a ride. REBU helps drivers submit offers and provides passengers with the best offer available. For most people, commuting is boring and rarely a comfortable experience. Choose to spend time with another soul in the city while earning a free cup of coffee, or get a cheap ride!
## How we built it
Rebu is constructed using a MongoDB backend and NodeJS. This powerful combination allowed us to create a website where users can log in, then create and join rides.
## Challenges we ran into
Implementing features such as real-time matching algorithms, and user-friendly interfaces required overcoming technical challenges and ensuring scalability, reliability, and security.
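The real-time matching mentioned above could take many forms; as an illustration only (REBU's actual algorithm is not described), here is a naive greedy matcher that pairs each passenger with the available driver sharing the most interests, in the spirit of the co-rider preferences the project describes:

```python
def match_rides(drivers, passengers):
    """Greedily pair each passenger with the available driver whose
    interest tags overlap theirs the most. A toy stand-in for a
    real-time matching algorithm; names and tags are illustrative."""
    matches = {}
    available = dict(drivers)  # driver name -> set of interest tags
    for passenger, interests in passengers.items():
        if not available:
            break  # no drivers left to assign
        best = max(available, key=lambda d: len(available[d] & interests))
        matches[passenger] = best
        del available[best]    # each driver takes one passenger here
    return matches
```

A production system would also weigh routes, timing, and ratings, and would re-run continuously as offers arrive.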
## Accomplishments that we're proud of
Some members went into the project with zero knowledge of web development. Despite this, being able to learn and contribute to this project was a great accomplishment.
## What we learned
Centering divs was a great challenge.
## What's next for Rebu
Rebu aims to expand with its own subscription service, which entails:
Subscribe to unlock the REBU match function and remove ads! The upcoming feature gives recommendations to both drivers and passengers, making it easy for users to find co-riders with the same interests, such as whether they prefer talking in the car, and personal interests and talking topics like sports, food, music, and movies! If you're satisfied with the co-riding experience, set your rider/passenger as your regular REBU match to ride together regularly!
|
## Inspiration
With the lack of accessible student housing in recent years, demand for public transportation has skyrocketed—including GO Transit buses that depart from McMaster University and head towards destinations in Mississauga and Brampton. Students wait in line for hours on end as buses fill up before they can board, entailing the dreaded wait for the next bus. And so, we were inspired to address this problem with time.ly, a web application that allows passengers to reserve their seats ahead of time, avoid physically standing in long lines, and ultimately encourage use of public transportation by improving its experience.
## What it does
time.ly is a web application that lets users select their bus and time of arrival, and reserve a seat on that bus if a spot is free. By eliminating the uncertainty of whether they will fit on the bus once they arrive, time.ly saves commuters time and removes the hassle of physically waiting in line for an indefinite amount of time.
## How we built it
Initial wireframes incorporating UX/UI design principles were created using Figma. The frontend was built with ReactJS, while the backend was written with Python and deployed with Flask.
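At its core, the Flask backend has to enforce one rule: a reservation succeeds only while seats remain. A minimal Python sketch of that logic, separated from any web framework (class and names are ours, capacity is illustrative):

```python
class Bus:
    """Toy model of time.ly's reservation rule: each rider may hold at
    most one seat, and reservations fail once capacity is reached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = set()  # rider IDs holding a seat

    def reserve(self, rider):
        """Return True if the seat was granted, False otherwise."""
        if rider in self.reserved or len(self.reserved) >= self.capacity:
            return False
        self.reserved.add(rider)
        return True

    def seats_left(self):
        return self.capacity - len(self.reserved)
```

In the real app this check would live behind a Flask endpoint and persist reservations in a database rather than in memory.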
## Challenges we ran into
The biggest challenge we faced was implementing the backend in a way that would complement the frontend. Since none of us had prior experience in this area, a lot of time was dedicated to researching and deciding on technologies to use (or not use). We ended up adding a backend component to our web application using Flask, and learning this new skill within hours of the submission deadline was a big endeavour.
## Accomplishments that we're proud of
In addition to creating a polished, functional frontend that fulfilled the initial design constraints we set out with, we're proud of being able to involve a working backend component to our web application, as it was a new feat for us.
## What we learned
We've learned a lot about full stack development and the challenges of getting a frontend and backend to communicate with each other via Flask in an effective and efficient manner. With the original ambition to incorporate real-life data into the project, we also learned about the difficulties of training a machine learning backend—though our efforts were continuous, the result ultimately could not be realized. We further developed skills in problem-solving, whether that be in determining how to best define user needs, breaking down bigger solutions into step-by-step chunks, or writing algorithms centred around optimization.
## What's next for time.ly
The main feature we aim to implement in time.ly is a machine learning backend that pulls data from cities and routes across the region to predict optimal boarding times for passengers. Another possibility is using a cloud service to deliver live updates and real-time route suggestions. On the frontend, we could add further functionality such as collecting and storing user inputs, delivering reminder notifications via SMS or email APIs, and visual representations of bus capacity and seat allocation. Because of time.ly's flexible design, another potential development is integration with students' existing PRESTO cards and transit services.
|
## Inspiration
As university students, we and our peers have found that our garbage and recycling have not been taken by the garbage truck for some unknown reason. They give us papers or stickers with warnings, but these get lost in the wind, chewed up by animals, or destroyed because of the weather. For homeowners or residents, the lack of communication is frustrating because we want our garbage to be taken away and we don't know why it wasn't. For garbage disposal workers, the lack of communication is detrimental because residents do not know what to fix for the next time.
## What it does
This app allows garbage disposal employees to communicate to residents about what was incorrect with how the garbage and recycling are set out on the street. Through a checklist format, employees can select the various wrongs, which are then compiled into an email and sent to the house's residents.
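The checklist-to-email flow described above is straightforward to sketch in Python; the specific violation texts below are our own illustrative stand-ins, not Waste Notify's actual wording:

```python
# Hypothetical checklist items an employee could tick off.
VIOLATIONS = {
    "overfull": "Bin lid could not close fully.",
    "sorting": "Recyclables were mixed with general waste.",
    "placement": "Bins were not placed at the curb by collection time.",
}

def compose_notice(address, selected):
    """Compile an employee's checklist selections into the email body
    sent to the household at the given address."""
    lines = [f"Collection notice for {address}:"]
    lines += [f"- {VIOLATIONS[key]}" for key in selected]
    lines.append("Please correct the above before the next pickup.")
    return "\n".join(lines)
```

The app would then hand this body to an email service addressed to the residents on file for that house.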
## How we built it
The team built this by using a Python package called **Kivy** that allowed us to create a GUI interface that can then be packaged into an iOS or Android app.
## Challenges we ran into
The greatest challenge we faced was the learning curve that arrived when beginning to code the app. All team members had never worked on creating an app, or with back-end and front-end coding. However, it was an excellent day of learning.
## Accomplishments that we're proud of
The team is proud of having a working user interface to present. We are also proud of our interactive and aesthetic UI/UX design.
## What we learned
We learned skills in front-end and back-end coding. We also furthered our skills in Python by using a new library, Kivy. We gained skills in teamwork and collaboration.
## What's next for Waste Notify
Further steps for Waste Notify would likely involve collecting data from Utilities Kingston and the city. It would also require more back-end coding to set up these databases and ensure that data is secure. Our target area was the University District in Kingston, though a further application of this could be expanding the geographical coverage. However, the biggest next step is adding a few APIs for weather, maps and schedules.
|
## Inspiration
Today, Generative AI offers an impressive number of innovative solutions to previously unsolvable problems. However, the interactions between users and generative AI continue to feel, well, artificial. Generative AI has the potential to take on roles that people have previously had to fill and that, as population decline affects many communities, will be left naturally vacant. To ensure a smoother transition to a world in which Generative AI is commonplace, we need to explore how users and Generative AI interact. To solve this problem, we devised the Generative Visual Novel (GenVN for short), a tool in which users and Generative AI models interact to form a visual narrative.
## What it does
GenVN is designed to offer a “choose your own adventure” style narrative that provides visuals and related dialogue for an educational and engaging experience. Tightly packaged together are an image theater, a dialogue box, and a button to submit inputs. From start to finish, the user is able to type an input to kickstart a story. GenVN then works wonders by combining Stable Diffusion XL and Llama 2 to dually generate response text meant to advance the narrative according to the user’s decisions and represent the scene in color. From there, the back-and-forth, a harmony between machine and man, can continue forever. The only limitation is the user’s appetite for engagement.
## How we built it
For the web app, the base of our application, we utilized Reflex to build our frontend and backend experiences in Python. We brought in components from Reflex’s library, while also folding in our CSS flair, to create our frontend display and constructed our backend in Python. In the next layer, we incorporated Monster API’s Stable Diffusion XL model for image generation and Llama 2’s 7B parameter model for effective narrative text generation. What really makes the project our own, though, has been our unique prompt engineering, carefully designed and tested to hone in on the narrative experience. Through cultivating a number of back-and-forth conversations with these models and utilizing user feedback at each step, we’ve been able to refine the cooperative element of the program.
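The "unique prompt engineering" behind the narrative loop is not spelled out, but the shape of such a prompt builder can be sketched. The template wording below is our guess at the technique, not GenVN's actual prompts:

```python
def build_story_prompt(history, user_choice, max_turns=6):
    """Assemble the next text-generation prompt from recent story turns
    and the user's latest decision. Keeping only the last few turns
    bounds prompt length as the back-and-forth continues."""
    recent = history[-max_turns:]
    context = "\n".join(recent)
    return (
        "Continue this interactive story in 2-3 sentences, "
        "ending at a decision point for the reader.\n"
        f"Story so far:\n{context}\n"
        f"The reader chose: {user_choice}"
    )
```

The same turn text (or a condensed form of it) would then be reused as the image prompt for Stable Diffusion XL, keeping the two models in sync.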
## Challenges we ran into
When working on this project, problems appeared one after another. From troubleshooting technologies the team had never seen before to struggling to obtain sufficient user feedback, we had to stop, rethink, and rebuild again and again. While we eventually overcame the many small issues we encountered, the greatest challenge we found was our own ambition. We had so many interests and ideas to explore that we never had the time to realize that, even as we finished our project, we were forced to consider what more we could have made. If nothing else, the greatest reconciliation is that, even after Treehacks ends, the chance to make a more perfect GenVN is always ahead of us.
## Accomplishments that we're proud of
We made an entire website from scratch in effectively less than 24 hours. On top of that, it was primarily built upon an API that no member of the team had used before along with a framework that no member of the team had used before. However, most of all, we made a project that captured a piece of our original vision. It may not have been everything we wanted, but it has a piece of the soul of human-AI collaboration that we hoped to better understand. We were able to interact with Generative AI in a new medium that we never had before and share that experience with other students who, like us, want to know what lies on our shared horizon.
## What we learned
Learning new frameworks from scratch and under time constraints is a fun challenge—definitely easier than learning an entirely new language (which was the mistake of some of our members last year)—but still a daunting one. Additionally, finding diverse user feedback from a user base that has a genuine interest in your work is incredibly useful and should be sought out as soon as a minimally viable product exists. Finally, bringing people together who share a vision is what distinguishes an ordinary project from a great one. After all, almost anyone can just work together, but it takes a common heart and mind to endure the real lows and challenges that an ambitious project entails.
## What's Next for Generative Visual Novel (GenVN)
In a word, soul. While cut out of necessity in our demo, we want adaptive character portraits/models (which change based on the tone of the AI’s reply) to be integrated into our narrative experience to represent the characters that the user talks to. We can only really improve further human-AI cooperation if we can put a face to each mechanical voice. Our first step would be to generate base character designs with Stable Diffusion XL and then modify these base models with PhotoMaker (a Monster API model that modifies images based on text) to breathe life into the generated personalities. We also want to implement a streaming stylization to text generation, printing the model’s response to the user one character at a time to give a more human feel to the model and improve the connection between the user and the machine. Finally, we want to implement a chat history system in which the user can easily swap between any moment in the conversation with the models to better enable specific interactions and outcomes in the narrative process.
|
## Inspiration
We loved picture books as children and making up our own stories. Now, it's easier than ever to write these stories into a book using AI!
## What it does
* Helps children write their own stories
* Illustrates stories for children
* Collects a child's story into a picture book, sharable to their friends and family
* Uses the emotion of your voice to guide the story
## How we built it
We used
* React for the UI (display and state management)
* hume.ai to facilitate the end-to-end conversation
* DALL-E to illustrate stories
* Firebase for saving stories
## Challenges we ran into
### 1. Expiring Image URLs
The format of the OpenAI DALL-E API's response is an image URL. We encountered two challenges with this URL: latency and expiration. First, the response took up to five seconds to load the image. Second, the images expired after a set number of hours, becoming inaccessible and broken on our site.
To solve this challenge, we downloaded the image and re-uploaded it to Firebase storage, replacing the stored image URL with the Firebase URL. This was not possible on our existing frontend due to CORS, so we wrote a node backend to perform this processing.
### 2. Sensitive Diffusion Model Prompts
Initially, we directly used the generated story text of each page as the prompt to DALL-E, the diffusion model we used for illustrations. The generated images were extremely low quality and oftentimes did not match the prompt at all.
We suspect the reason is that diffusion models are trained quite differently from transformers. While transformers are pre-trained on next-token prediction over very long text sequences, diffusion models are trained on shorter prompts with more modifying attributes.
Therefore, we added a preprocessing step that extracts a five-word summary of the prompt and a list of five attributes. This step dramatically improved the quality of output illustrations.
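The project's preprocessing step presumably used a language model to produce the summary and attributes; as a rough illustration of the input/output shape only, here is a crude keyword-based stand-in:

```python
def summarize_for_diffusion(page_text, n_words=5, n_attrs=5):
    """Naive stand-in for the preprocessing step: keep the first few
    content words as a short summary and append fixed style attributes.
    (StoryBook AI's real version derived both with a language model;
    the attribute list here is purely illustrative.)"""
    stopwords = {"the", "a", "an", "and", "of", "to", "in", "was", "is"}
    words = [w.strip(".,!?").lower() for w in page_text.split()]
    summary = [w for w in words if w and w not in stopwords][:n_words]
    attributes = ["children's book", "watercolor", "soft light",
                  "bright colors", "storybook style"][:n_attrs]
    return " ".join(summary) + ", " + ", ".join(attributes)
```

Short summary plus comma-separated modifiers matches the prompt shape diffusion models handle best, which is why this step improved image quality so much.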
## Accomplishments that we're proud of
* An end-to-end loop to create a story book by speaking the story aloud
* An aesthetic interface for viewing finished story books
## What we learned
* AI prompts are very sensitive (esp. prompts for diffusion models). Even the difference of a single word can drastically change the output!
## What's next for StoryBook AI
To focus on the core functionality of the app, we omitted several things that we would want to build after the hackathon:
1. User accounts. Users should be able to create accounts, possibly with linked parental accounts also for parents to view stories their children made.
2. Difficulty settings. Our goal is to improve the creativity and storytelling abilities of children. Younger children may need more assistance with difficulty while older children may focus on more complex literary elements such as plot and character development. We would like to tailor the questions the AI raises to each child's ability.
3. Customization. Users should be able to customize the look and feel of their own stories including themes and special styles.
|
## Inspiration
It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy, store, and collect dust inspired us to develop LendIt. This product aims to slow the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with RaspberryPi3 (64bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses Facial Recognition powered by OpenCV and object detection with Google's Cloud Vision API.
For our App, we've used Flutter/ Dart and interfaced with Firebase. To ensure *trust* - which is core to borrowing and lending, we've experimented with Ripple's API to create an Escrow system.
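The escrow idea at the heart of that trust system can be sketched independently of Ripple: a borrower's deposit is held, then refunded or forfeited when the item comes back. This is our simplification of the flow, not LendIt's actual implementation:

```python
class Escrow:
    """Minimal escrow for a lending transaction: the deposit is held
    until the item is returned, then refunded to the borrower if the
    item is intact or forfeited to the lender if it is not."""

    def __init__(self, deposit):
        self.deposit = deposit
        self.state = "held"

    def item_returned(self, intact):
        """Settle the escrow; returns the amount refunded to the borrower."""
        if self.state != "held":
            raise RuntimeError("escrow already settled")
        self.state = "refunded" if intact else "forfeited"
        return self.deposit if intact else 0
```

On a ledger like XRP, the hold and conditional release would be enforced by the ledger's own escrow primitives rather than by application code.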
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our app developer had previously worked only on web apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system, so writing queries for our read operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet connection that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like raspberry pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users, is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy.
|
## Inspiration 💥
Let's be honest... Presentations can be super boring to watch—*and* to present.
But, what if you could bring your biggest ideas to life in a VR world that literally puts you *in* the PowerPoint? Step beyond slides and into the future with SuperStage!
## What it does 🌟
SuperStage works in 3 simple steps:
1. Export any slideshow from PowerPoint, Google Slides, etc. as a series of images and import them into SuperStage.
2. Join your work/team/school meeting from your everyday video conferencing software (Zoom, Google Meet, etc.).
3. Instead of screen-sharing your PowerPoint window, screen-share your SuperStage window!
And just like that, your audience can watch your presentation as if you were Tim Cook in an Apple Keynote. You see a VR environment that feels exactly like standing up and presenting in real life, and the audience sees a 2-dimensional, front-row seat video of you on stage. It’s simple and only requires the presenter to own a VR headset.
Intuition was our goal when designing SuperStage: instead of using a physical laser pointer and remote, we used full-hand tracking to allow you to be the wizard that you are, pointing out content and flicking through your slides like magic. You can even use your hands to trigger special events to spice up your presentation! Make a fist with one hand to switch between 3D and 2D presenting modes, and make two thumbs-up to summon an epic fireworks display. Welcome to the next dimension of presentations!
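The gesture triggers above amount to a mapping from detected hand poses to presentation events. A toy Python dispatch table capturing that idea (action names are ours; SuperStage does this inside Unity's hand-tracking callbacks):

```python
# Detected poses map to presentation events; a two-handed pose is
# represented as a tuple of per-hand poses.
GESTURE_ACTIONS = {
    "fist": "toggle_2d_3d",                    # one fist: switch 3D/2D modes
    ("thumbs_up", "thumbs_up"): "fireworks",   # two thumbs up: fireworks
    "point": "laser_pointer",                  # pointing replaces the laser
}

def handle_gesture(gesture):
    """Return the presentation event for a detected pose, or 'none'
    so that unrecognized poses are safely ignored."""
    return GESTURE_ACTIONS.get(gesture, "none")
```

Keeping the mapping in one table makes it easy to add new gestures without touching the pose-detection code.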
## How we built it 🛠️
SuperStage was built using Unity 2022.3 and the C# programming language. A Meta Quest 2 headset was the hardware portion of the hack—we used the 4 external cameras on the front to capture hand movements and poses. We built our UI/UX using ray interactables in Unity to be able to flick through slides from a distance.
## Challenges we ran into 🌀
* 2-camera system. SuperStage is unique since we have to present 2 different views—one for the presenter and one for the audience. Some objects and UI in our scene must be occluded from view depending on the camera.
* Dynamic, automatic camera movement, which locked onto the player when not standing in front of a slide and balanced both slide + player when they were in front of a slide.
To build these features, we used multiple rendering layers in Unity where we could hide objects from one camera and make them visible to the other. We also wrote scripting to smoothly interpolate the camera between points and track the Quest position at all times.
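The smooth camera interpolation they describe is usually a per-frame lerp: each frame the camera moves a fixed fraction of the remaining distance to its target. A Python sketch of the math (the project does this in Unity C#; the smoothing constant is illustrative):

```python
def step_camera(position, target, smoothing=0.2):
    """Advance the camera one frame toward the target by moving a
    fraction of the remaining distance, giving an ease-out feel:
    fast when far from the target, slowing as it arrives."""
    return tuple(p + (t - p) * smoothing for p, t in zip(position, target))
```

Run every frame, this converges on the target without ever overshooting, which is what keeps the locked-on camera from feeling jerky.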
## Accomplishments that we're proud of 🎊
* We’re super proud of our hand pose detection and gestures: it really feels so cool to “pull” the camera in with your hands to fullscreen your slides.
* We’re also proud of how SuperStage uses the extra dimension of VR to let you do things that aren’t possible on a laptop: showing and manipulating 3D models with your hands, and immersing the audience in a different 3D environment depending on the slide. These things add so much to the watching experience and we hope you find them cool!
## What we learned 🧠
Justin: I found learning about hand pose detection so interesting. Reading documentation and even anatomy diagrams about terms like finger abduction, opposition, etc. was like doing a science fair project.
Lily: The camera system! Learning how to run two non-conflicting cameras at the same time was super cool. The moment that we first made the switch from 3D -> 2D using a hand gesture was insane to see actually working.
Carolyn: I had a fun time learning to make cool 3D visuals!! I learned so much from building the background environment and figuring out how to create an awesome firework animation—especially because this was my first time working with Unity and C#! I also grew an even deeper appreciation for the power of caffeine… but let’s not talk about that part :)
## What's next for SuperStage ➡️
* Dynamically generating presentation boards to spawn as the presenter paces the room
* Providing customizable avatars to add a more personal touch to SuperStage
* Adding a lip-sync feature that takes volume metrics from the Oculus headset to generate mouth animations
|
## 💡 Inspiration💡
Our team is saddened by the fact that so many people think that COVID-19 is obsolete when the virus is still very much relevant and impactful to us. We recognize that there are still a lot of people around the world that are quarantining—which can be a very depressing situation to be in. We wanted to create some way for people in quarantine, now or in the future, to help them stay healthy both physically and mentally; and to do so in a fun way!
## ⚙️ What it does ⚙️
We have a full range of features. Users are welcomed by our virtual avatar, Pompy! Pompy is meant to be a virtual friend for users during quarantine. Users can view Pompy in 3D to see it with them in real time and interact with it. Users can also view a live map of recent data that shows the continued relevance of COVID-19. Users can take a photo of their food to see the number of calories they are eating and stay healthy during quarantine. Users can also escape their reality by entering a different landscape in 3D. Lastly, users can view a roadmap of next steps in their journey to get through quarantine, and speak to Pompy.
## 🏗️ How we built it 🏗️
### 🟣 Echo3D 🟣
We used Echo3D to store the 3D models we render. Each rendering of Pompy in 3D and each landscape is a different animation that our team created in a 3D rendering software, Cinema 4D. We realized that, as the app progresses, we can find difficulty in storing all the 3D models locally. By using Echo3D, we download only the 3D models that we need, thus optimizing memory and smooth runtime. We can see Echo3D being much more useful as the animations that we create increase.
### 🔴 An Augmented Metaverse in Swift 🔴
We used Swift as the main component of our app, and used it to power our Augmented Reality views (ARViewControllers), our photo views (UIPickerControllers), and our speech recognition models (AVFoundation). To bring our 3D models to Augmented Reality, we used ARKit and RealityKit in code to create entities in the 3D space, as well as listeners that allow us to interact with 3D models, like with Pompy.
### ⚫ Data, ML, and Visualizations ⚫
There are two main components of our app that use data in a meaningful way. The first and most important is using data to train ML algorithms that are able to identify a type of food from an image and to predict the number of calories of that food. We used OpenCV and TensorFlow to create the algorithms, which are called in a Python Flask server. We also used data to show a choropleth map that shows the active COVID-19 cases by region, which helps people in quarantine to see how relevant COVID-19 still is (which it is still very much so)!
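Once the OpenCV/TensorFlow model has labelled the food, the final step of the pipeline is a portion-scaled calorie estimate. A sketch of that last step only (the table values are approximate and the label set is illustrative):

```python
# Approximate calories per 100 g for a few labels the classifier might emit.
CALORIES_PER_100G = {"apple": 52, "pizza": 266, "salad": 17}

def estimate_calories(food_label, grams):
    """Scale a per-100g calorie table by the estimated portion size;
    the label comes from the image classifier upstream."""
    return CALORIES_PER_100G[food_label] * grams / 100
```

The Flask server would wrap the classifier and this lookup behind a single endpoint that the Swift app calls with the photo.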
## 🚩 Challenges we ran into
We wanted a way for users to communicate with Pompy through words and not just tap gestures. We planned to use voice recognition in AssemblyAI to receive the main point of the user and create a response to the user, but found a challenge when dabbling in audio files with the AssemblyAI API in Swift. Instead, we overcame this challenge by using a Swift-native Speech library, namely AVFoundation and AVAudioPlayer, to get responses to the user!
## 🥇 Accomplishments that we're proud of
We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for while interacting with it, virtually traveling places, talking with it, and getting through quarantine happily and healthily.
## 📚 What we learned
For the last 36 hours, we learned a lot of new things from each other and how to collaborate to make a project.
## ⏳ What's next for Pompy?
We can use Pompy to help diagnose the user’s conditions in the future; asking users questions about their symptoms and their inner thoughts which they would otherwise be uncomfortable sharing can be more easily shared with a character like Pompy. While our team has set out for Pompy to be used in a Quarantine situation, we envision many other relevant use cases where Pompy will be able to better support one's companionship in hard times for factors such as anxiety and loneliness. Furthermore, we envisage the Pompy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene, exercise tips and even lifestyle advice, Pompy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery.
\*\*We had to use separate GitHub workspaces due to conflicts.
|
## Inspiration
As students, we want to travel to experience other cultures and to see the world outside of our own bubbles. However, the abundance of information available can make it challenging to decide on a specific destination or experience.
## What it does
Xplore is an all-in-one mobile application designed for indecisive students who want to travel, but struggle to decide where to go and how to get there. As a team of university students ourselves, we know the feeling of wanting to get away from the endless pile of assignments and midterms, but also the feeling of uncertainty and paralysis when actually faced with the thought of planning a trip. "I want to go backpacking in Europe, but where should I start? How should I get there?" The goal of Xplore is to reduce the information overload experienced by students and to leverage a custom recommendation engine trained on travel data to present end-to-end trip itineraries for users based on their personal preferences/interests.
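Xplore's recommendation engine is still being trained, but the core ranking idea can be illustrated with a toy interest-overlap scorer. This is our stand-in, not the project's actual model:

```python
def score_destinations(user_interests, destinations):
    """Rank destinations by how many of the user's interest tags each
    one matches; a naive sketch of preference-based recommendation."""
    scored = [
        (len(set(tags) & set(user_interests)), name)
        for name, tags in destinations.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored]
```

A trained engine would replace the overlap count with a learned score from travel data, but would serve results through the same "rank and return" interface.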
## How we built it
For our frontend, we implemented Java classes to host the UI layouts in XML in Android Studio. For our backend, we implemented REST APIs using Express.js in VS Code and wrote our database models and controllers in Node.js. All user profile information was stored in our MySQL database. We tested our REST API endpoints using Postman to ensure that we can store the JSON objects from our frontend in our database.
## Challenges we ran into
Integration between the frontend and backend. We faced challenges being able to send JSON objects from our frontend to our REST API.
## Accomplishments that we're proud of
Under a time constraint, we are proud of developing a Minimally Viable Product (MVP) that integrated frontend and backend functionalities that tackled an important problem.
## What we learned
We learned how to develop a full stack mobile application using Android Studio, XML, Java, Express.js, Node.js, with a complete MySQL database tested using Postman. We also learned how to keep our scope minimal when designing for users and their needs.
## What's next for Xplore
Continue training our machine learning model and integrate the recommendation engine with our backend. Refine frontend and backend integration.
|
winning
|
## Inspiration
3-D Printing. It has been around for decades, yet the printing process is often too complex to navigate, labour intensive and time consuming. Although the technology exists, it is only used by those who are trained in the field because of the technical skills required to operate the machine. We want to change all that. We want to make 3-D printing simpler, faster, and accessible for everyone. By leveraging the power of IoT and Augmented Reality, we created a solution to bridge that gap.
## What it does
Printology revolutionizes the process of 3-D printing by allowing users to select, view and print files with a touch of a button. Printology is the first application that allows users to interact with 3-D files in augmented reality while simultaneously printing it wirelessly. This is groundbreaking because it allows children, students, healthcare educators and hobbyists to view, create and print effortlessly from the comfort of their mobile devices. For manufacturers and 3-D Farms, it can save millions of dollars because of the drastically increased productivity.
The product is composed of a hardware and a software component. Users can download the iOS app on their devices and browse a catalogue of .STL files. They can drag and view each of these items in augmented reality and print it to their 3-D printer directly from the app. Printology is compatible with all models of printers on the market because of the external Raspberry Pi that generates a custom profile for each unique 3-D printer. Combined, the two pieces allow users to print easily and wirelessly.
## How I built it
We built an application in XCode that uses Apple’s AR Kit and converts STL models to USDZ models, enabling the user to view 3-D printable models in augmented reality. This had never been done before, so we had to write our own bash script to convert these models. Then we stored these models in a local server using node.js. We integrated functions into the local servers which are called by our application in Swift.
In order to print directly from the app, we connected a Raspberry Pi running Octoprint (a web based software to initialize the 3-D printer). We also integrated functions into our local server using node.js to call functions and interact with Octoprint. Our end product is a multifunctional application capable of previewing 3-D printable models in augmented reality and printing them in real time.
## Challenges I ran into
We created something that had never been done before hence we did not have a lot of documentation to follow. Everything was built from scratch. In other words this project needed to be incredibly well planned and executed in order to achieve a successful end product. We faced many barriers and each time we pushed through. Here were some major issues we faced.
1. No one on our team had done iOS development before, and we learned a lot through online resources and trial and error. Altogether we watched more than 12 hours of YouTube tutorials on Swift and XCode - it was quite a learning curve. Ultimately, with insane persistence, a full all-nighter, and the generous help of the Deltahacks mentors, we troubleshot errors and found new ways of getting around problems.
2. No one on our team had experience in bash or node.js. We learned everything from Google and our mentors. It was exhausting and sometimes downright frustrating. Learning the connection between our javascript server and our Swift UI was extremely difficult, and we went through loads of troubleshooting for our networks and IP addresses.
## Accomplishments that I'm proud of and what I've Learned
We're most proud of learning the integration of multiple languages, APIs and devices into one synchronized system. It was the first time that this had been done before and most of the software was made in house. We learned command line functions and figured out how to centralize several applications to provide a solution. It was so rewarding to learn an entirely new language and create something valuable in 24 hours.
## What's next for Print.ology
We are working on a scan feature in the app that lets users take a 3-D scan of any object with their phone and produce a 3-D printable STL file from the photos. This has also never been accomplished before, and it would allow for major advancements in rapid prototyping. We look forward to integrating machine learning techniques to analyze a 3-D model and generate settings that reduce the number of support structures needed, which would reduce the waste involved in 3-D printing. A future step would be to migrate our STL files to a cloud-based service where users can upload their 3-D models.
|
## What it does
Using Blender's API and a whole lot of math, we've created a service that allows you to customize and perfectly fit 3D models to your unique dimensions. No more painstaking adjustments and wasted 3D prints necessary, simply select your print, enter your sizes, and download your fitted prop within a few fast seconds. We take in specific wrist, forearm, and length measurements and dynamically resize preset .OBJ files without any unsavory warping. Once the transformations are complete, we export it right back to you ready to send off to the printers.
## Inspiration
There's nothing cooler than seeing your favorite iconic characters coming to life, and we wanted to help bring that magic to 3D printing enthusiasts! Just starting off as a beginner with 3D modeling can be a daunting task -- trust us, most of the team are in the same boat with you. By building up these tools and automation scripts we hope to pave a smoother road for people interested in innovating their hobbies and getting out cool customized prints out fast.
## Next Steps
With a little bit of preprocessing, we can let any 3D modeler upload their models to our web service and have them dynamically fitted in no time! We hope to grow our collection of available models and make 3D printing much easier and more accessible for everyone. As it grows, we hope to make it a common tool in every 3D artist's arsenal.
*Special shoutout to Pepsi for the Dew*
|
# ThoughtWheels
The rise of Artificial Intelligence and technological advancements has significantly boosted productivity for many. However, there's a noticeable gap in applying these innovations to aid the disabled and elderly. Addressing this, we created a transformative solution: a **smart wheelchair prototype**. This wheelchair is uniquely controlled by **EEG signals**, enabling mobility for those with paralysis, and offering older individuals the ability to move freely and continue their daily tasks without physical constraints. This project isn't just about mobility; it's about restoring independence and quality of life.
## Challenges Faced
One of the primary technical challenges we faced was developing a reliable method for interpreting EEG signals into precise commands for the wheelchair. Capturing the electrical activity of the brain with the **Muse 2** headset and translating it into actionable inputs required sophisticated signal processing algorithms. We had to ensure the system could accurately differentiate between intentional commands and involuntary brain activity. Additionally, integrating this technology into a wheelchair in a way that was both safe and effective presented its own set of engineering hurdles, including optimizing the system to reduce latency as much as possible.
## Our Mission
Our mission extends beyond mobility. We aim to harness EEG signals as a bridge between machines and the human body. With additional funding, we plan to expand our technology to monitor stress levels and other vital metrics, utilizing EEG data to enhance mental health. This innovation will empower individuals to understand and manage their stress, paving the way for a healthier, more connected future.
## Technology and Innovation
To achieve our goals, we utilized the **Muse 2** headset to record EEG signals. This innovative approach allowed us to capture the electrical activity of the brain with precision. We then developed a system to interpret these signals, focusing on micro-movements like blinks and slight head movements as inputs to control the movement of the wheelchair. This method of control is not only intuitive but also enables users with severe mobility restrictions to command the wheelchair effortlessly, showcasing our commitment to enhancing accessibility and independence through technology.
Our smart wheelchair prototype is more than a mobility aid; it's a step towards a future where technology bridges the gap between disability and independence, enabling everyone to live their lives to the fullest. We believe in the power of innovation to change lives, and with the right support, we can make this vision a reality.
|
winning
|
### Saturday 11AM: Starting Out
>
> *A journey of a thousand miles begins with a single step*
>
>
>
BusBuddy is pulling the curtain back on school buses. Students and parents should have equal access to information to know when and where their buses are arriving, how long it will take to get to school, and be up-to-date on any changes in routes. When we came onboard the project, our highest priorities were efficiency, access, and sustainability.
With our modern version of a solution to the traveling salesman problem, we hope to give students and parents some peace of mind when it comes to school transportation. Not only will BusBuddy make the experience more comfortable, but having reliable information means more parents will opt to save on gas and send their kids by bus.
### Saturday 3PM: Roadblocks, Missteps, Obstacles
>
> *I would walk a thousand miles just to fall down at your door*
>
>
>
No road is without its potholes; our road was no exception to this. Alongside learning curves and getting to know each other, we faced issues with finicky APIs that disagreed with our input data, temperamental CSS margins that refused to anchor where we wanted them, and missing lines of code that we swear we put in. With enough time and bubble tea, we found our critical errors and began to build our vision.
### Saturday 9PM: Finding Our Way
>
> *Just keep swimming, just keep swimming, just keep swimming, swimming, swimming…*
>
>
>
We conceptualized in Figma with asset libraries; we built our front-end in VS Code with HTML, CSS, and Jinja2; we used Flask, Python, SQL databases, and a Google Maps API, alongside the Affinity Propagation Clustering algorithm, to cluster home addresses; and finally, we ran a recursive DFS on a directed weighted graph to optimize a route for bus pickup of all students.
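The route-optimization step above can be sketched in a few lines. This is an illustrative toy, not BusBuddy's production code: the graph shape and function names are assumptions, and a real deployment would prune the search. It runs a recursive DFS over a directed weighted graph of stops and keeps the cheapest route that visits every stop once.

```python
# Hypothetical sketch of the route-optimization step: a recursive DFS over a
# directed weighted graph of bus stops, keeping the cheapest route that
# visits every stop exactly once.

def best_route(graph, start):
    """Return (cost, route) of the cheapest path visiting all nodes once."""
    best = (float("inf"), [])

    def dfs(node, visited, cost, path):
        nonlocal best
        if len(visited) == len(graph):  # every stop covered
            if cost < best[0]:
                best = (cost, path[:])
            return
        for nxt, weight in graph[node].items():
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt, visited, cost + weight, path)
                path.pop()            # backtrack
                visited.remove(nxt)

    dfs(start, {start}, 0, [start])
    return best

# Tiny example: two pickup stops plus the school.
graph = {
    "depot": {"A": 2, "B": 5},
    "A": {"B": 1, "school": 6},
    "B": {"A": 1, "school": 2},
    "school": {},
}
cost, route = best_route(graph, "depot")
```

Exhaustive DFS is exponential in the number of stops, which is why the clustering step matters: each bus only searches over its own cluster of addresses.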
### Sunday 7AM: Summiting the Peak
>
> *Planting a flag at the top*
>
>
>
We achieved our minimum viable product! Given that our expectations were not low, it was no easy feat to climb this mountain.
### Sunday 11AM: Journey’s End
>
> *The journey matters more than the destination*
>
>
>
With a team composed of an 11th grader, a 12th grader, a UWaterloo first year, and a Mac second year, we certainly did not lack in range of experiences to bring to the table. Our biggest asset was having each other as sounding boards to bounce ideas off of. Getting to collaborate with each other certainly broadened our worldviews, especially with each others’ anecdotes about school pre-, during, and post-COVID.
### Sunday Onward
>
> *New Horizons*
>
>
>
So what’s next for us? And what’s next for BusBuddy?
Well, we’ll be doing some sleeping. As for BusBuddy, we hope to scale up and turn our application into something that BusBuddy’s students can use for years to come.
|
## Inspiration
We were inspired by the numerous Facebook posts, Slack messages, WeChat messages, emails, and even Google Sheets that students at Stanford create in order to coordinate Ubers/Lyfts to the airport as holiday breaks approach. This was mainly for two reasons, one being the safety of sharing a ride with other trusted Stanford students (often at late/early hours), and the other being cost reduction. We quickly realized that this idea of coordinating rides could also be used not just for ride sharing to the airport, but simply transportation to anywhere!
## What it does
Students can access our website with their .edu accounts and add "trips" that they would like to be matched with other users for. Our site will create these pairings using a matching algorithm and automatically connect students with their matches through email and a live chatroom in the site.
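One minimal way to sketch the pairing logic (our real matching runs inside Wix Code, so the data shapes and names below are illustrative assumptions): greedily pair trips to the same destination whose departure times fall within a shared window.

```python
# Illustrative sketch of a pairing pass: group trips to the same destination
# whose departure times fall within a shared time window.

from datetime import datetime, timedelta

def match_trips(trips, window=timedelta(minutes=30)):
    """Greedily pair trips with the same destination departing within `window`."""
    pairs, unmatched = [], []
    pending_by_dest = {}
    for trip in sorted(trips, key=lambda t: t["departs"]):
        pending = pending_by_dest.get(trip["dest"])
        if pending and trip["departs"] - pending["departs"] <= window:
            pairs.append((pending["user"], trip["user"]))
            pending_by_dest[trip["dest"]] = None
        else:
            if pending:  # window expired for the earlier trip
                unmatched.append(pending["user"])
            pending_by_dest[trip["dest"]] = trip
    unmatched += [t["user"] for t in pending_by_dest.values() if t]
    return pairs, unmatched

trips = [
    {"user": "alice", "dest": "SFO", "departs": datetime(2024, 1, 1, 9, 0)},
    {"user": "carol", "dest": "SJC", "departs": datetime(2024, 1, 1, 9, 5)},
    {"user": "bob",   "dest": "SFO", "departs": datetime(2024, 1, 1, 9, 15)},
]
pairs, unmatched = match_trips(trips)
```

A production matcher would also weight pickup location and group size, but the same sort-then-scan structure applies.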
## How we built it
We utilized Wix Code to build the site and took advantage of many features including Wix Users, Members, Forms, Databases, etc. We also integrated SendGrid API for automatic email notifications for matches.
## Challenges we ran into
## Accomplishments that we're proud of
Most of us are new to Wix Code, JavaScript, and web development, and we are proud of ourselves for being able to build this project from scratch in a short amount of time.
## What we learned
## What's next for Runway
|
Track: Social Good
[Disaster Relief]
[Best Data Visualization Hack]
[Best Social Impact Hack]
[Best Hack for Building Community]
## Inspiration
Do you ever worry about what is in your water? While some of us live in the luxury of getting clean water for granted, some of us do not. In Flint, MI, another main water pipeline broke. Again. Under water boil advisory, citizens are subject to government inaction and lack of communication. Our goal is to empower communities with what is in their water using data analytics and crowd sourced reports of pollutants in tap water found in people's homes.
## What it does
Water Crusader is an app that uses two categories of data to give communities an informed assessment of their water quality: publicly available government data and crowd sourced tap water assessments taken from people's homes. Firstly, it takes available government data of blood lead levels tested in children, records of home age to indicate the age of utilities and materials used in pipes, and income levels as an indicator of maintenance in a home. With the programs we have built, governments can expand our models by giving us more data to use. Secondly, users are supplied with a cost-effective, open source IOT sensor that uploads water quality measurements to the app. This empowers citizens to participate in knowing the water quality of their communities. As a network of sensors is deployed, real-time, crowd-sourced data can greatly improve our risk assessment models. By fusing critical information from these two components, we use regression models to give our users a risk measurement of lead poisoning from their water pipelines. Armed with this knowledge, citizens are empowered with being able to make more informed health decisions and call their governments to act on the problem of lead in drinking water.
## How we built it
Hardware:
To simulate the sensor system with the hardware materials available at HackMIT, we used an ESP32 and a DHT11 temperature and humidity sensor. The ESP32 takes temperature and humidity data read from the DHT11. Data is received in Node-RED as JSON by specifying an HTTP request and sending the actual POST from the Arduino IDE.
Data Analytics:
Using IBM's Watson Studio and AI development tools, we cleaned data from government sources and used it to model lead-poisoning risk. Blood lead levels tested in children, from the Centers for Disease Control, served as the target we wanted to predict; house age and poverty levels taken from the U.S. Census served as the predictors.
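As a rough sketch of the regression idea (reduced to a single predictor, house age, and fit with ordinary least squares; the data points below are synthetic stand-ins, not the CDC/Census data we used in Watson Studio):

```python
# Simplified one-predictor version of the risk model: least-squares fit of
# elevated-blood-lead rate against house age. Data values are synthetic.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = intercept + slope * x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y - slope * mean_x, slope

# Synthetic: (house age in years, % of tested children with elevated blood lead)
house_age = [20, 45, 70, 90]
elevated_rate = [0.5, 1.8, 4.2, 5.5]
intercept, slope = fit_line(house_age, elevated_rate)

def risk_score(age):
    """Predicted elevated-blood-lead rate for homes of a given age."""
    return intercept + slope * age
```

The full model adds poverty rate as a second feature and, as sensors come online, crowd-sourced water readings as a third.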
## Challenges we ran into
1. We are limited by the amount of hardware available. We tried our best to create the best simulation of the sensor system as possible.
2. It was hard to retrieve and clean government data, especially with the need to make data requests.
3. Not all of us are familiar with Node-RED js and there was a lot to learn!
## Accomplishments that we're proud of
1. Learning new software!
2. Working in a super diverse team!!
## What's next for Water Crusader
We will continue to do research for different water sensors that can be added to our system. From our research, we learned that there is a lead sensor in development.
|
partial
|
## What it does
Blink is a communication tool for those who cannot speak or move, while being significantly more affordable and accurate than current technologies on the market. [The ALS Association](http://www.alsa.org/als-care/augmentative-communication/communication-guide.html) recommends a $10,000 communication device to solve this problem—but Blink costs less than $20 to build.
You communicate using Blink through a modified version of **Morse code**. Blink out letters and characters to spell out words, and in real time from any device, your caretakers can see what you need. No complicated EEG pads or camera setup—just a small, unobtrusive sensor can be placed to read blinks!
The Blink service integrates with [GIPHY](https://giphy.com) for GIF search, [Earth Networks API](https://www.earthnetworks.com) for weather data, and [News API](https://newsapi.org) for news.
## Inspiration
Our inspiration for this project came from [a paper](http://www.wearabletechnologyinsights.com/articles/11443/powering-devices-through-blinking) published on an accurate method of detecting blinks, but it uses complicated, expensive, and less-accurate hardware like cameras—so we made our own **accurate, low-cost blink detector**.
## How we built it
The backend consists of the sensor and a Python server. We used a capacitive touch sensor on a custom 3D-printed mounting arm to detect blinks. This hardware interfaces with an Arduino, which sends the data to a Python/Flask backend, where the blink durations are converted to Morse code and then matched to English characters.
The frontend is written in React with [Next.js](https://github.com/zeit/next.js) and [`styled-components`](https://styled-components.com). In real time, it fetches data from the backend and renders the in-progress character and characters recorded. You can pull up this web app from multiple devices—like an iPad in the patient’s lap, and the caretaker’s phone. The page also displays weather, news, and GIFs for easy access.
**Live demo: [blink.now.sh](https://blink.now.sh)**
## Challenges we ran into
One of the biggest technical challenges building Blink was decoding blink durations into short and long blinks, then Morse code sequences, then standard characters. Without any libraries, we created our own real-time decoding process of Morse code from scratch.
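The core of that decoding process can be sketched as follows (thresholds and the partial Morse table here are illustrative, not our exact values): classify each blink duration as a dot or a dash, then look the sequence up.

```python
# Sketch of the blink-to-character decoding step: durations become dots or
# dashes, and the resulting Morse string maps to a character.

MORSE = {  # partial table for illustration
    ".-": "A", "-...": "B", "-.-.": "C", ".": "E", "..": "I",
    ".-..": "L", "-.": "N", "---": "O", "...": "S", "-": "T",
}

def decode_blinks(durations, dash_threshold=0.35):
    """Convert blink durations (seconds) into a Morse string, then a letter."""
    code = "".join("-" if d >= dash_threshold else "." for d in durations)
    return MORSE.get(code, "?")
```

In the real pipeline the Arduino streams raw sensor values, the Flask backend segments them into blink durations, and this lookup runs on each pause between letters.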
Another challenge was physically mounting the sensor in a way that would be secure but easy to place. We settled on using a hat with our own 3D-printed mounting arm to hold the sensor. We iterated on several designs for the arm and methods for connecting the wires to the sensor (such as aluminum foil).
## Accomplishments that we're proud of
The main point of PennApps is to **build a better future**, and we are proud of the fact that we solved a real-world problem applicable to a lot of people who aren't able to communicate.
## What we learned
Through rapid prototyping, we learned to tackle difficult problems with new ways of thinking. We learned how to efficiently work in a group with limited resources and several moving parts (hardware, a backend server, a frontend website), and were able to get a working prototype ready quickly.
## What's next for Blink
In the future, we want to simplify the physical installation, streamline the hardware, and allow multiple users and login on the website. Instead of using an Arduino and breadboard, we want to create glasses that would provide a less obtrusive mounting method. In essence, we want to perfect the design so it can easily be used anywhere.
Thank you!
|
## Inspiration
Only a small percentage of Americans use ASL as their main form of daily communication. Hence, no one notices when ASL-first speakers are left out of using FaceTime, Zoom, or even iMessage voice memos. This is a terrible inconvenience for ASL-first speakers attempting to communicate with their loved ones, colleagues, and friends.
There is a clear barrier to communication between those who are deaf or hard of hearing and those who are fully-abled.
We created Hello as a solution to this problem for those experiencing similar situations and to lay the ground work for future seamless communication.
On a personal level, Brandon's grandma is hard of hearing, which makes it very difficult to communicate. In the future this tool may be their only chance at clear communication.
## What it does
Expectedly, there are two sides to the video call: a fully-abled person and a deaf or hard of hearing person.
For the fully-abled person:
* Their speech gets automatically transcribed in real-time and displayed to the end user
* Their facial expressions and speech get analyzed for sentiment detection
For the deaf/hard of hearing person:
* Their hand signs are detected and translated into English in real-time
* The translations are then cleaned up by an LLM and displayed to the end user in text and audio
* Their facial expressions are analyzed for emotion detection
## How we built it
Our frontend is a simple React and Vite project. On the backend, websockets are used for real-time inferencing. For the fully-abled person, their speech is first transcribed via Deepgram, then their emotion is detected using HumeAI. For the deaf/hard of hearing person, their hand signs are first translated using a custom ML model powered via Hyperbolic, then these translations are cleaned using both Google Gemini and Hyperbolic. Hume AI is used similarly on this end as well. Additionally, the translations are communicated back via text-to-speech using Cartesia/Deepgram.
## Challenges we ran into
* Custom ML models are very hard to deploy (Credits to <https://github.com/hoyso48/Google---American-Sign-Language-Fingerspelling-Recognition-2nd-place-solution>)
* Websockets are easier said than done
* Spotty wifi
## Accomplishments that we're proud of
* Learned websockets from scratch
* Implemented custom ML model inferencing and workflows
* More experience in systems design
## What's next for Hello
Faster, more accurate ASL model. More scalability and maintainability for the codebase.
|
## Inspiration
We wanted to build an app that reduces stress and helps users in dire situations such as that of being injured.
## What it does
The app allows users to take or upload a picture of an injury then identifies the injury and provides immediate treatment solutions.
## How I built it
The app was built with Javascript on Expo.io which allows the app to run on both iOS and Android. The Google Cloud Platform Cloud Vision API was used to recognize the injuries in the photos provided by the users.
## Challenges I ran into
The team was unable to import the Google Vision API as a node module. To overcome this issue, the team used an AJAX request to send the image data to the API in the cloud.
## Accomplishments that I'm proud of
Coming up with a cool name!
## What I learned
The team learned to use Expo API and Google Cloud API.
## What's next for BruiseClues
Bruise Clues aims to let users give feedback about the details of their injuries so that diagnoses become more specific and accurate. Furthermore, the team wishes to integrate Bruise Clues into health service technologies to improve the quality and promptness of health services.
|
partial
|
## Inspiration
Lyft's round up and donate system really inspired us here.
We wanted to find a way to benefit both users and help society. We all want to give back somehow, but don't know how sometimes or maybe we want to donate, but don't really know how much to give back or if we could afford it.
We wanted an easy way incorporated into our lives and spending habits.
This would allow us to reach a wider amount of users and utilize the power of the consumer society.
## What it does
With a chrome extension like "Heart of Gold", every purchase's round-up to the nearest dollar (for example: a purchase of $9.50 rounds up to $10, so $0.50 gets tracked as the "round up") is accumulated for the user. The user gets to choose when they want to donate and which organization gets the money.
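The round-up rule from the example above is simple, but doing it in integer cents avoids floating-point drift when many small amounts accumulate (a sketch of the idea, not the extension's actual JavaScript):

```python
# Round-up computation in integer cents, so $9.50 -> 50 cents donated.

def round_up_cents(amount_cents):
    """Return the donation, in cents, topping the purchase up to the next dollar."""
    remainder = amount_cents % 100
    return 0 if remainder == 0 else 100 - remainder
```

The extension would add each purchase's result to the running total stored in Firebase, and the PayPal call later drains that total.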
## How I built it
We built a web app/chrome extension using Javascript/JQuery, HTML/CSS.
Firebase javascript sdk library helped us store the calculations of the accumulation of round up's.
We make an AJAX call to the Paypal API, so it took care of payment for us.
## Challenges I ran into
For all of the team, it was our first time creating a chrome app extension. For most of the team, it was our first time heavily working with javascript let alone using technologies like Firebase and the Paypal API.
Choosing which technology/platform would make the most sense was tough, but the chrome extension allows for more relevance since a lot of people make more online purchases nowadays, and an extension can run in the background and remain ever-present.
So we picked up the javascript language to start creating the extension. Lisa Lu integrated the PayPal API to handle donations and used HTML/CSS/JavaScript to create the extension pop-up. She also styled the user interface.
Firebase was also completely new to us, but we chose it because it didn't require a two-step process: a server (like Flask) plus a database (like MySQL or MongoDB). It also helped that we had a mentor guide us through. We learned a lot about the JavaScript language (mostly that we haven't even really scratched the surface of it) and the importance of avoiding race conditions. We also learned a lot about how to strategically structure our code, such as having a background.js run Firebase database updates.
## Accomplishments that I'm proud of
Veni, vidi, vici.
We came, we saw, we conquered.
## What I learned
We all learned that there are multiple ways to create a product to solve a problem.
## What's next for Heart of Gold
Heart of Gold has a lot of possibilities: partnering with companies that want to advertise to users and social good organizations, making recommendations to users on charities as well as places to shop, game-ify the experience, expanding capabilities of what a user could do with the round up money they accumulate. Before those big dreams, cleaning up the infrastructure would be very important too.
|
## Inspiration
After reading a study that stated $62 million in pennies are accidentally thrown out each year, we were curious about how this money could have instead been used to benefit society. As such, we decided to create an app that allows users to use their leftover change to make an actual change for others.
## What it does
change4change promotes charitable contributions through each of your purchases. All your transactions are rounded to the nearest dollar, with the difference going towards a charity of your choice. In case you’re uncertain of which charity to support, the app has built-in search capabilities that allow you to either search for charities by name or by category.
## How we built it
Through Android Studio, Google Firebase, and the CapitalOne Hackathon API, we created our functional mobile app. Firebase provided a secure database for storing user credentials, while the CapitalOne Hackathon API enabled easy access to banking simulations to power our features. Android's handy native UI elements allowed us to create a sleek front-end to present to the user.
## Challenges we ran into
As first-time developers on Android, we spent some time learning to work within the platform’s limitations. Obtaining a database of charities was also a challenge which we solved by scraping websites and processing data using custom python scripts to generate the database. Another challenge was configuring Firebase for Android to allow for authorization and data-storing.
## Accomplishments that we're proud of
We are proud of our app’s functionalities since they are modularized and well-designed; as such, user experience is streamlined and simple. Additionally, we are satisfied with having developed a pragmatic charity search functionality by applying data science concepts and overcoming certain Android limitations. Additionally, we are happy with our ability to develop a sleek interface design that is appealing to users.
## What we learned
Since many of us were new to Android development, we learnt the fundamentals of Android Studio and Java. Additionally, we had to learn how to use Firebase to authenticate user credentials and store user information securely.
## What's next for change4change
We plan on implementing functionality for real banking institutions and potentially releasing this product to the Google Play Store. Additionally, we are looking into possibly rebuilding the app to be more scalable for larger operations.
|
## Inspiration
We wanted a new type of biometric authentication that could potentially be fun to utilize.
## What it does
Records the gait of the person wearing the sock and tracks the gait over time to match it with the gait of the respective user, allowing for authentication.
## How we built it
We used an Arduino and Raspberry Pi with sensors taped to various areas of the foot on the sock. With Python and the Arduino, we gather data, graph it using NumPy, and apply Fourier transforms to compare the graphs and match gaits for authentication. The backend and frontend are developed in node.js, with storage and authentication handled by Firebase.
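The spectral comparison can be sketched in pure Python (our pipeline uses NumPy's FFT; this stand-in just keeps the idea visible): take the magnitude spectrum of each gait recording and compare spectra with cosine similarity.

```python
# Compare two gait recordings by the cosine similarity of their DFT
# magnitude spectra. Pure-Python stand-in for the NumPy FFT pipeline.

import cmath
import math

def dft_magnitudes(signal):
    """Magnitudes of the first half of the discrete Fourier transform."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def gait_similarity(a, b):
    """Cosine similarity between the magnitude spectra of two recordings."""
    fa, fb = dft_magnitudes(a), dft_magnitudes(b)
    dot = sum(x * y for x, y in zip(fa, fb))
    norm = (sum(x * x for x in fa) ** 0.5) * (sum(x * x for x in fb) ** 0.5)
    return dot / norm
```

Comparing magnitude spectra rather than raw samples makes the match insensitive to where in the stride each recording starts, which matters when a user puts the sock on mid-step.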
|
partial
|
## Inspiration
## What it does
The leap motion controller tracks hand gestures and movements like what an actual DJ would do (raise/lower volume, cross-fade, increase/decrease BPM, etc.) which translate into the equivalent in the VirtualDJ software. Allowing the user to mix and be a DJ without touching a mouse or keyboard. Added to this is a synth pad for the DJ to use.
## How we built it
We used Python to interpret gestures from the Leap Motion, translating each one into the keyboard and mouse action a VirtualDJ user would perform. For the synth pad, we used an Arduino wired to 6 aluminum "pads" that make sounds when touched.
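The gesture-to-action mapping can be sketched like this (the Leap Motion SDK calls and the actual VirtualDJ key bindings are omitted; the names, units, and dead-zone value are illustrative assumptions):

```python
# Map a vertical palm movement between frames to a VirtualDJ volume action.
# A dead zone keeps small hand tremors from triggering spurious keypresses.

def gesture_to_key(prev_palm_y, palm_y, dead_zone=10.0):
    """Map a vertical palm movement (mm) to a hypothetical volume shortcut."""
    delta = palm_y - prev_palm_y
    if delta > dead_zone:
        return "volume_up"
    if delta < -dead_zone:
        return "volume_down"
    return None
```

In the real script, each returned action would be sent to VirtualDJ as a keystroke; tuning the dead zone per gesture is what kept the motions from overlapping.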
## Challenges we ran into
Creating all of the motions and making sure they did not overlap was a big challenge. The synth pad was also challenging to create because of lag problems that we had to fix by optimizing the C program.
## Accomplishments that we're proud of
Actually changing the volume in the VirtualDJ using leap motion. That was the first one we made work.
## What we learned
We learned how to use the Leap Motion and how to wire an Arduino to create a MIDI synthesizer.
## What's next for Tracktive
Sell to DJ Khaled! Another one.
|
## Inspiration
(<http://televisedrevolution.com/wp-content/uploads/2015/08/mr_robot.jpg>)
If you watch Mr. Robot, then you know that the main character, Elliot, deals with some pretty serious mental health issues. One of his therapeutic techniques is to write his thoughts in a private journal. Journals are great: they get your feelings out and act as a point of reference to look back to in the future.
We took the best parts of what makes a diary/journal great, and made it just a little bit better - with Indico. In short, we help track your mental health similar to how FitBit tracks your physical health. By writing journal entries on our app, we automagically parse through the journal entries, record your emotional state at that point in time, and keep an archive of the post to aggregate a clear mental profile.
## What it does
This is a FitBit for your brain. As you record entries about your life in the private journal, the app anonymously sends the data to Indico and parses it for personality, emotional state, keywords, and overall sentiment. It requires zero effort on the user's part, and over time, we can generate an accurate picture of your overall mental state.
Each post automatically embeds its strongest emotional state, so you can easily find and re-read posts that evoke a certain feeling (joy, sadness, anger, fear, surprise). We also have an analytics dashboard that further analyzes the person's long-term emotional state.
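The tagging step is simple once the per-entry scores are in hand. The `emotions` dictionary below is a hypothetical shape, not Indico's actual response format:

```python
def strongest_emotion(scores):
    """Return the dominant emotion label for one journal entry."""
    return max(scores, key=scores.get)

def tag_entries(entries):
    """Embed each post's strongest emotion so posts can be browsed by feeling."""
    return [dict(entry, tag=strongest_emotion(entry["emotions"]))
            for entry in entries]
```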
We believe being cognizant of one's own mental health is much harder than, and just as important as, staying on top of one's physical health. A long-term view of their emotional state can help users detect sudden changes in their baseline, or seek out help & support long before the situation becomes dire.
## How we built it
The backend is built on a simple Express server on top of Node.js. We chose React and Redux for the client, due to its strong unidirectional data flow capabilities, as well as the component based architecture (we're big fans of css-modules). Additionally, the strong suite of redux middlewares such as sagas (for side-effects), ImmutableJS, and reselect, helped us scaffold out a solid, stable application in just one day.
## Challenges we ran into
Functional programming is hard. It doesn't have any of the magic that two-way data-binding frameworks come with, such as MeteorJS or AngularJS. Of course, we made the decision to use React/Redux being aware of this. When you're hacking away - code can become messy. Functional programming can at least prevent some common mistakes that often make a hackathon project completely unusable post-hackathon.
Another challenge was the persistence layer for our application. Originally, we wanted to use MongoDB, due to our familiarity with its setup process. However, to speed things up, we decided to use Firebase. In hindsight, it may have caused us more trouble, since none of us had ever used Firebase before. However, learning is always part of the process, and we're very glad to have learned even the prototyping basics of Firebase.
## Accomplishments that we're proud of
* Fully persistent data with Firebase
* A REAL, WORKING app (not a mockup or just the UI): we got CRUD fully working, as well as the logic for processing the various data charts in analytics.
* A sweet UI with some snazzy animations
* Being able to do all this while having a TON of fun.
## What we learned
* Indico is actually really cool and easy to use (not just trying to win points here). It's not always 100% accurate, but building something like this without Indico would be extremely difficult, and the similar APIs we've tried aren't close to being as easy to integrate.
* React, Redux, Node. A few members of the team learned the expansive stack in just a few days. They're not experts by any means, but they definitely were able to grasp concepts very fast due to the fact that we didn't stop pushing code to Github.
## What's next for Reflect: Journal + Indico to track your Mental Health
Our goal is to make the backend algorithms a bit more rigorous, add a simple authentication algorithm, and to launch this app, consumer facing. We think there's a lot of potential in this app, and there's very little (actually, no one that we could find) competition in this space.
|
## Inspiration
We were inspired by the story of the large and growing problem of stray, homeless, and missing pets, and the ways in which technology could be leveraged to solve it, by raising awareness, adding incentive, and exploiting data.
## What it does
Pet Detective is first and foremost a chatbot, integrated into a Facebook page via Messenger. The chatbot serves two user groups: pet owners who have recently lost their pets, and good Samaritans who would like to help by reporting sightings. Moreover, Pet Detective provides monetary incentive for such people by collecting donations from happily served users. Pet Detective offers the most convenient and hassle-free user experience to both user bases. A simple virtual button generated by the chatbot allows the reporter to let the bot collect location data. In addition, the bot asks for a photo of the pet, and runs computer vision algorithms to determine several attributes and match factors. The bot then places a track on the dog, and continues to alert the owner about potential matches by sending images. In the case of a match, the service sets up a rendezvous with a trusted animal care partner. Finally, Pet Detective collects data on these transactions and reports, and provides a data analytics platform to pet care partners.
## How we built it
We used messenger developer integration to build the chatbot. We incorporated OpenCV to provide image segmentation in order to separate the dog from the background photo, and then used Google Cloud Vision service in order to extract features from the image. Our backends were built using Flask and Node.js, hosted on Google App Engine and Heroku, configured as microservices. For the data visualization, we used D3.js.
## Challenges we ran into
Finding the right DB for our needs was challenging, as was setting up and using the cloud platform. Getting the chatbot to be reliable was also challenging.
## Accomplishments that we're proud of
We are proud of a product that has real potential to do positive change, as well as the look and feel of the analytics platform (although we still need to add much more there). We are proud of balancing 4 services efficiently, and like our clever name/logo.
## What we learned
We learned a few new technologies and algorithms, including image segmentation, and some Google Cloud Platform services. We also learned that NoSQL databases are the way to go for hackathons and rapid prototyping.
## What's next for Pet Detective
We want to expand the capabilities of our analytics platform and partner with pet and animal businesses and providers in order to integrate the bot service into many different Facebook pages and websites.
|
winning
|
## Inspiration
The current landscape of data aggregation for ML models relies heavily on centralized platforms, such as Roboflow and Kaggle. This causes an overreliance on invalidated human-volunteered data. Billions of dollars worth of information is unused, resulting in unnecessary inefficiencies and challenges in the data engineering process. With this in mind, we wanted to create a solution.
## What it does
**1. Data Contribution and Governance**
DAG operates as a decentralized autonomous organization (DAO) governed by smart contracts and consensus mechanisms within a blockchain network. DAG also supports data annotation and enrichment activities, as users can participate in annotating and adding value to the shared datasets.
Annotation involves labeling, tagging, or categorizing data, which is increasingly valuable for machine learning, AI, and research purposes.
**2. Micropayments in Cryptocurrency**
In return for adding datasets to DAG, users receive micropayments in the form of cryptocurrency. These micropayments act as incentives for users to share their data with the community and ensure that contributors are compensated based on factors such as the quality and usefulness of their data.
**3. Data Quality Control**
The community of users actively participates in data validation and quality assessment. This can involve data curation, data cleaning, and verification processes. By identifying and reporting data quality issues or errors, our platform encourages everyone to actively participate in maintaining data integrity.
## How we built it
DAG was built using Next.js, MongoDB, Cohere, Tailwind CSS, Flow, React, Syro, and Soroban.
|
## Inspiration
81% of Americans today feel a lack of control over their own data. Customers today can’t trust that companies won’t store or use their data maliciously, nor can they trust data communication channels. However, many companies need access to user data to offer extensive proprietary analyses. This results in seemingly clashing objectives: how can we get access to state-of-the-art proprietary models without having to fork over our data to companies?
What if we can have our cake and eat it too?
## What it does
We use end-to-end encryption offered by The @ Company to encrypt user data and include a digital signature before we send it over an insecure communication channel. Using public-key cryptography, the server can verify the digital signature and decrypt user data after receiving it. This prevents any malicious third party from accessing the data through exploiting communications between IoT devices.
In order to prevent companies from having access to user data, we use a method known as Fully Homomorphic Encryption (FHE) to perform operations on encrypted user data without needing to decrypt it first. Therefore, we encrypt data on the user-side, send over the encrypted data to the server, the server performs operations on the encrypted data without decrypting it, then sends the result in ciphertext back to the user. Thus, we achieve access to state-of-the-art models saved on cloud servers without having to give up user data in plaintext to potentially untrustworthy companies.
## How we built it
We used Flutter as our frontend, and trained two different machine learning algorithms in the backend. From our frontend, we encrypt our user data with FHE, and send it over an encrypted channel using services provided by The @ Company. Our backend runs a simple convolutional neural network (CNN) over the encrypted data *without* decrypting it, and sends the encrypted result back. Finally, the application decrypts the result, and displays it to the user.
## Challenges we ran into
Currently, existing FHE schemes can only provide linear operators. However, nonlinearities are crucial to implementing any sort of convolutional neural networks (CNN’s). After doing [some research](https://arxiv.org/pdf/2106.07229.pdf), we initially decided to use polynomial approximations of nonlinear functions, but at the cost of greatly increasing inference error (on CIFAR-10, the approximation models produced an accuracy of 77.5% vs. ~95% on state-of-the-art models).
Again, can we have our cake and eat it too? Yes we can! We employ a scheme inspired by [the latest FHE research](https://priml-workshop.github.io/priml2019/papers/PriML2019_paper_28.pdf) that compresses high-resolution images and leverages Taylor series expansion of smooth ReLU with novel min-max normalization to bound error to approximately 3% off state-of-the-art, with cleverly formulated low-rank approximations.
This scheme solves the dual problem of heavily reducing communication overhead in other FHE schemes when faced with number of pixels growing quadratically as edge length increases.
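The polynomial-approximation idea can be seen in a few lines. The degree and interval below are illustrative choices, not the values from the papers cited above:

```python
import numpy as np

# FHE schemes can evaluate polynomials but not the piecewise-linear ReLU
# itself, so we fit a low-degree polynomial to ReLU on a bounded interval.
x = np.linspace(-4.0, 4.0, 2001)
relu = np.maximum(x, 0.0)

coeffs = np.polyfit(x, relu, deg=4)        # least-squares polynomial fit
approx = np.polyval(coeffs, x)

max_err = float(np.max(np.abs(approx - relu)))  # worst case on [-4, 4]
```

Raising the degree shrinks the error but costs more multiplicative depth in the encrypted domain, which is exactly the trade-off the normalization trick helps with.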
## Accomplishments that we're proud of
Not only did we get a demo up and running in a day, we also learned encryption algorithms from scratch and were able to implement them in our project. In addition to the technical application details, we had to read different research papers, test out different models and algorithms, and integrate them into our project.
## What we learned
### Homomorphic Functions
Homomorphic functions are a structure-preserving map between two algebraic structures of the same type. This means that given a function on the plaintext space, we can find a function on the encrypted ciphertext space that would result in the same respective outputs.
For example, if we are given two messages m1 and m2, then homomorphism in addition states that encrypting m1 and m2 and then adding them together is the same as encrypting the sum of m1 and m2.
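That additive property is easy to demonstrate with a toy Paillier cryptosystem (additively homomorphic only, not fully homomorphic, and using toy-sized primes; real deployments use far larger keys, and FHE uses lattice-based schemes):

```python
import math
import random

# Toy Paillier keypair with tiny primes (insecure; for illustration only).
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # this simple form of mu works because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def add_encrypted(c1, c2):
    # Multiplying ciphertexts adds the plaintexts underneath.
    return (c1 * c2) % n2
```

Decrypting `add_encrypted(encrypt(m1), encrypt(m2))` recovers `m1 + m2` without the server ever seeing either plaintext.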
### Fully Homomorphic Encryption
Fully homomorphic encryption uses homomorphic functions to ensure that data is stored in a trustworthy space. The user first encrypts their data, sends the encrypted data to a server, then the server performs computations on the encrypted data using homomorphic functions, and sends the result back to the user. Once decrypted by the user, the user will have their desired result.
The benefit of FHE is that there is no need for trusted third parties to get proprietary data analysis. The user doesn't have to trust the server with their data to get their results. In addition, it eliminates the tradeoff between data usability and privacy, as there is no need to mask or drop any features to preserve data privacy.
## What's next for CypherAI
In our project, we demonstrated two use cases for our product in different company settings: home monitoring for security cameras, and healthcare systems. In the future, we look forward to expanding this to other companies!
|
## ✨ Inspiration
Quarantining is hard, and during the pandemic, symptoms of anxiety and depression are shown to be at their peak 😔[[source]](https://www.kff.org/coronavirus-covid-19/issue-brief/the-implications-of-covid-19-for-mental-health-and-substance-use/). To combat the negative effects of isolation and social anxiety [[source]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7306546/), we wanted to provide a platform for people to seek out others with similar interests. To reduce any friction between new users (who may experience anxiety or just be shy!), we developed an AI recommendation system that can suggest virtual, quarantine-safe activities, such as Spotify listening parties🎵, food delivery suggestions 🍔, or movie streaming 🎥 at the comfort of one’s own home.
## 🧐 What is Friendle?
Quarantining alone is hard😥. Choosing fun things to do together is even harder 😰.
After signing up for Friendle, users can create a deck showing their interests in food, games, movies, and music. Friendle matches similar users together and puts together some hangout ideas for those matched. 🤝💖
## 🧑💻 How we built Friendle?
To start off, our designer created a low-fidelity mockup in Figma to get a good sense of what the app would look like. We wanted it to have a friendly and inviting look to it, with simple actions as well. Our designer also created all the vector illustrations to give the app a cohesive appearance. Later on, our designer created a high-fidelity mockup for the front-end developer to follow.
The frontend was built using React Native.

We split our backend tasks into two main parts: 1) API development for DB accesses and 3rd-party API support and 2) similarity computation, storage, and matchmaking. Both the APIs and the batch computation app use Firestore to persist data.
### ☁️ Google Cloud
For the API development, we used Google Cloud Platform Cloud Functions with the API Gateway to manage our APIs. The serverless architecture allows our service to automatically scale up to handle high load and scale down when there is little load to save costs. Our Cloud Functions run on Python 3, and access the Spotify, Yelp, and TMDB APIs for recommendation queries. We also have a NoSQL schema to store our users' data in Firebase.
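For illustration, a GCP HTTP Cloud Function receives a Flask-style request object. The handler below is a hypothetical sketch of our `/rec` endpoint; the field names mirror our API, but the recommendation lookups against Spotify, Yelp, and TMDB are stubbed out:

```python
# Hypothetical sketch of one HTTP Cloud Function (the real handler queries
# the Spotify, Yelp, and TMDB APIs with these interests before responding).
def rec(request):
    body = request.get_json(silent=True) or {}
    username = body.get("username", "anonymous")
    interests = {key: body.get(key, []) for key in ("music", "movies", "food")}
    # ...look up recommendations for each interest category here...
    return {"user": username, "interests": interests}
```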
### 🖥 Distributed Computer
The similarity computation and matching algorithm is powered by a Node.js app which leverages the Distributed Computer for parallel computing. We encode the user's preferences and Myers-Briggs type into a feature vector, then compare users using cosine similarity. The cosine similarity algorithm is a good candidate for parallelizing since each computation is independent of the results of the others.
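The similarity step itself is tiny (shown here in Python for readability; our worker code is Node.js, and the exact feature encoding of interests and MBTI type is an assumption):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two users' preference feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

Because each pairwise score depends only on its own two vectors, the full user-by-user matrix can be sliced into independent jobs for the Distributed Computer.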
We experimented with different strategies to batch up our data prior to slicing & job creation, to balance the trade-off between individual job compute speed and scheduling delays. By selecting a proper batch size, we were able to reduce our overall computation time by around 70% (varies based on the status of the DC network, distribution scheduling, etc.).
## 😢 Challenges we ran into
* We had to be flexible with modifying our API contracts as we discovered more about 3rd-party APIs and our front-end designs became more fleshed out.
* We spent a lot of time designing for features and scalability problems that we would not necessarily face in a Hackathon setting. We also faced some challenges with deploying our service to the cloud.
* Parallelizing load with DCP
## 🏆 Accomplishments that we're proud of
* Creating a platform where people can connect with one another, alleviating the stress of quarantine and social isolation
* Smooth and fluid UI with slick transitions
* Learning about and implementing a serverless back-end allowed for quick setup and fast iteration.
* Designing and Creating a functional REST API from scratch - You can make a POST request to our test endpoint (with your own interests) to get recommended quarantine activities anywhere, anytime 😊
e.g.
`curl -d '{"username":"turbo","location":"toronto,ca","mbti":"entp","music":["kpop"],"movies":["action"],"food":["sushi"]}' -H 'Content-Type: application/json' ' https://recgate-1g9rdgr6.uc.gateway.dev/rec'`
## 🚀 What we learned
* Balancing the trade-off between computational cost and scheduling delay for parallel computing can be a fun problem :)
* Moving server-based architecture (Flask) to Serverless in the cloud ☁
* How to design and deploy APIs and structure good schema for our developers and users
## ⏩ What's next for Friendle
* Make a web-app for desktop users 😎
* Improve matching algorithms and architecture
* Adding a messaging component to the app
|
partial
|
Worldwide, there have been over a million cases of animal cruelty over the past decade, and with people stuck at home bottling up their frustrations during COVID, these numbers aren't going down anytime soon. To tackle these issues, we built this game, which has features like:
* Reminiscent of city-building simulators like Sim-City.
* Build and manage your animal shelter.
* Learn to manage finances, take care of animals, set up adoption drives.
* Grow and expand your shelter by taking in homeless animals, and giving them a life.
The game is built with the Unity game engine and runs on WebGL, utilizing the power of Wix Velo, which allows us to quickly host and distribute the game across platforms from a single code base.
|
## Introduction
[Best Friends Animal Society](http://bestfriends.org)'s mission is to **bring about a time when there are No More Homeless Pets**
They have an ambitious goal of **reducing the death of homeless pets by 4 million/year**
(they are doing some amazing work in our local communities and definitely deserve more support from us)
## How this project fits in
Originally, I was only focusing on a very specific feature (adoption helper).
But after conversations with awesome folks at Best Friends came a realization that **bots can fit into a much bigger picture in how the organization is being run** to not only **save resources**, but also **increase engagement level** and **lower the barrier of entry points** for strangers to discover and become involved with the organization (volunteering, donating, etc.)
This "design hack" comprises of seven different features and use cases for integrating Facebook Messenger Bot to address Best Friends's organizational and operational needs with full mockups and animated demos:
1. Streamline volunteer sign-up process
2. Save human resource with FAQ bot
3. Lower the barrier for pet adoption
4. Easier donations
5. Increase visibility and drive engagement
6. Increase local event awareness
7. Realtime pet lost-and-found network
I also "designed" ~~(this is a design hack right)~~ the backend service architecture, which I'm happy to have discussions about too!
## How I built it
```
def design_hack():
    s = get_sketch()
    m = s.make_awesome_mockups()
    k = get_apple_keynote()
    return k.make_beautiful_presentation(m)
```
## Challenges I ran into
* Coming up with a meaningful set of features that can organically fit into the existing organization
* ~~Resisting the urge to write code~~
## What I learned
* Unique organizational and operational challenges that Best Friends is facing
* How to use Sketch
* How to create ~~quasi-~~prototypes with Keynote
## What's next for Messenger Bots' Best Friends
* Refine features and code :D
|
## Inspiration
I love sky.
## What it does
## How I built it
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for MySky Virtual Reality Planetarium
|
winning
|