## TLDR
We are a developer toolkit providing a decentralised database as a service: developers can use our endpoints directly, or use our no-code tool to upload their data in a decentralised manner.
## Inspiration
👋🏻 With the advent of Web3, there are now decentralized **ways to store data that are a hundred times cheaper than Web2 solutions**. Such methods are equally, if not more secure, dependable and scalable.
😩 However, **using them remains vastly more technically challenging** and time consuming than traditional alternatives.
🤓 As Web3 developers, even as we've sought to push the envelope with frontier blockchain technology, we've **run into limitations** with Web2 solutions *time and time again*.
## What it does
🫡 ControlDB’s mission is to **bridge this gap between cost and performance**, allowing developers to use exponentially cheaper IPFS distributed file storage, while **circumventing much of the complexity and limitations** that normally come with it.
🚀 IPFS, the InterPlanetary File System, is a peer-to-peer file sharing network. Files are sharded across multiple nodes. Building on this protocol allows ControlDB to fundamentally **ensure user data remains decentralized, eliminating any single point of failure thanks to IPFS' distributed nature.**
❌ A limitation of IPFS, however, is that shards of files are made public.
✅ To overcome this, ControlDB adds an additional layer of encryption to further increase user privacy. In the event that a user's IPFS hash is intercepted, an attacker will still have to circumvent the additional layer of advanced encryption, keeping users' data safe.
❌ IPFS also has no inherent read or write controls, necessary for handling sensitive data across multiple agents, for example in healthcare.
✅ Here ControlDB introduces a permission layer onto files, allowing admins to designate different levels of file access across multiple stakeholders with varied roles.
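The permission layer described above can be sketched roughly as role-based access control. This is a minimal illustrative sketch, not ControlDB's actual API; all names here are assumptions:

```python
# Hypothetical sketch of a per-file permission layer: admins assign roles,
# and every action is checked against the role's capabilities before the
# storage layer is ever touched. Role names and actions are illustrative.
ROLE_CAPABILITIES = {
    "admin":  {"read", "write", "grant"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

class FilePermissions:
    def __init__(self):
        self._roles = {}  # user_id -> role name

    def grant(self, user_id: str, role: str) -> None:
        if role not in ROLE_CAPABILITIES:
            raise ValueError(f"unknown role: {role}")
        self._roles[user_id] = role

    def allows(self, user_id: str, action: str) -> bool:
        role = self._roles.get(user_id)
        return role is not None and action in ROLE_CAPABILITIES[role]

# Example: in a healthcare setting, a clinician may read a record
# while only the records admin can modify it or grant access.
acl = FilePermissions()
acl.grant("dr_smith", "viewer")
acl.grant("records_admin", "admin")
```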
❌ IPFS is also accessed through a command-line interface, making it hard to debug and understand.
✅ In contrast, ControlDB adds a user-friendly GUI on top of IPFS' command-line interface, allowing non-technical users to access its features. A simplified API makes it quick and easy for developers to add ControlDB file storage to their programs, sparing them from building a new IPFS pipeline and reducing the barrier to entry for smaller and/or less experienced teams.
✅ Our architecture is designed to be plug-and-play: switching between storage endpoints such as IPFS or Estuary only requires adding a configuration file. This gives you the flexibility to use the storage backend of your choice while still taking advantage of our platform's powerful features.
🔥🔥🔥 Altogether ControlDB will **save developers significant amounts of time, money and technical headache,** allowing them to scale usage from simple to complex use cases as their software grows.
😌😌😌 It will also **improve the availability of files, reduce privacy concerns, and grant users increased control over their data.**
## Demo
[Demo video](http://www.youtube.com/watch?v=WxG2gmMB57M)
## How we built it
🛠️ ControlDB was built with TypeScript and Golang. Its MVP consists of a **fully functional backend with full encryption, decryption and sharding across multiple nodes implemented, as well as a fully functional frontend with login, file upload and retrieval.** We even built a working user interface.
🤝🏻 Yes, everything is *fully working*. 🫡🫡🫡
### Technical Architecture
**Overall Architecture**


We have an API server that interacts with the Node Cluster, a wrapper around any decentralised storage system. We spun up 3 IPFS nodes locally to serve as the decentralised storage engine.
**No code frontend tool**
Our front end allows users to easily upload their data onto decentralised file storage quickly without any hassle or code.
**API Server**
API Server: this layer accepts user requests from our frontend as well as direct HTTP API requests. Permissions are extracted from the payload the user sent, and the remaining data is passed on to the next stage of the pipeline.
**Middleware Node Cluster**
Middleware Node: this layer implements a search engine built on B+ trees for fast data retrieval and lookup. As data is received from the user, we encrypt it with AES-256 and generate the encryption key. A unique ID is also generated in this node and used as the key for insertion into the B+ tree. A package of the permissions, encryption key and IPFS hash is then inserted into the B+ tree.
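The middleware's bookkeeping can be sketched like this. This is a simplified illustration, not ControlDB's implementation: a sorted key list stands in for the B+ tree, and the AES-256 encryption itself is elided (only the 256-bit key generation is shown):

```python
import bisect
import os
import uuid

class MiddlewareIndex:
    """Stand-in for the B+ tree: a sorted key list plus a dict, giving
    ordered lookup at sketch scale. All names are illustrative."""
    def __init__(self):
        self._keys = []
        self._records = {}

    def insert(self, permissions: dict, ipfs_hash: str) -> str:
        file_id = str(uuid.uuid4())   # unique id used as the tree key
        aes_key = os.urandom(32)      # 256-bit key for AES-256
                                      # (actual encryption elided here)
        bisect.insort(self._keys, file_id)
        # The "package" of permissions, encryption key and IPFS hash
        # that gets inserted under the unique id.
        self._records[file_id] = {
            "permissions": permissions,
            "encryption_key": aes_key,
            "ipfs_hash": ipfs_hash,
        }
        return file_id

    def lookup(self, file_id: str):
        i = bisect.bisect_left(self._keys, file_id)
        if i < len(self._keys) and self._keys[i] == file_id:
            return self._records[file_id]
        return None
```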
**IPFS Nodes**
We started 3 IPFS nodes that interact with our middleware node.
Firstly, IPFS is installed on the operating system by following the official IPFS documentation. This step provides the necessary software to create and manage the IPFS cluster.
Next, IPFS is initialized on each node using the `ipfs init` command, which creates the configuration files that `ipfs-cluster-ctl` uses to interact with the nodes. This step is essential to ensure the IPFS nodes are ready to be managed by the cluster.
A configuration file is created using the default docker-compose.yml file, or a custom configuration file can be used to set up the cluster. The configuration file specifies how many IPFS cluster nodes and Kubo nodes are to be created, which determines the cluster's capacity and the method of interaction with the nodes.
The Kubo nodes sit on top of the IPFS cluster nodes and provide a way for users to interact with the cluster via HTTP API or IPFS client. This interaction enables users to manage and control the IPFS cluster by creating endpoints that allow various actions to be performed on the nodes.
Once the cluster is set up, users can create endpoints to perform various actions on the cluster nodes. These endpoints can be migrated as necessary, enabling the cluster to be flexible and adaptable to changing requirements.
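A minimal configuration sketch of the setup described above might look like the following. This is an illustrative fragment, not our actual file: the `ipfs/kubo` and `ipfs/ipfs-cluster` images are the official ones, but ports, names, and environment values are assumptions, and a real cluster would repeat these two services per node:

```yaml
version: "3.4"
services:
  ipfs0:
    image: ipfs/kubo:latest
    ports:
      - "5001:5001"        # Kubo HTTP API
  cluster0:
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs0
    environment:
      CLUSTER_PEERNAME: cluster0
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs0/tcp/5001
    ports:
      - "9094:9094"        # cluster REST API endpoint
```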
## Challenges we ran into
🧐 At first, the team struggled with ideation, considering a broad spectrum of potential pathways and struggling to find consensus. We also had little to no experience with IPFS, and no idea how to architect the system or scope it down. We overcame this with much discussion and research.
Challenges we ran into:
1. Developing the ACL and integrating it with the IPFS.
2. Starting the local IPFS architecture.
3. While running the local IPFS server, we faced CORS issues on the IPFS nodes, which prevented access from different origins.
4. Developing B+ search tree as per our use case of adding multiple values.
5. Integration of different components we built.
6. Integration with Estuary APIs for using them as the storage engine along with IPFS.
## Accomplishments that we're proud of
🙏🏻 We are incredibly proud to achieve a fully functional MVP within 36 hours. Integrating IPFS from zero experience, as well as permission and encryption layers, ensuring everything runs stably, has been a huge accomplishment within the short time we've had.
## What we learned
💯 This was a great deep dive into the world of decentralized data storage, IPFS and building permission layers. We also learnt to build synergy as a team, leveraging each individual's unique strengths.
## What's next for ControlDB
🥳 We'd like to change encryption keys over time, reducing vulnerabilities to attacks.
|
## Inspiration
In traditional finance, banks often swap cash flows from their assets for a fixed period of time. They do this because they want to hold onto their assets long-term, but believe their counter-party's assets will outperform their own in the short-term. We decided to port this over to DeFi, specifically Uniswap.
## What it does
Our platform allows for the lending and renting of Uniswap v3 liquidity positions. Liquidity providers can lend out their positions for a short amount of time to renters, who are able to collect fees from the position for the duration of the rental. Lenders are able to both hold their positions long term AND receive short term cash flow in the form of a lump sum ETH which is paid upfront by the renter. Our platform handles the listing, selling and transferring of these NFTs, and uses a smart contract to encode the lease agreements.
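The lease mechanics above can be sketched in a language-agnostic way. This is a conceptual illustration only (the real implementation is a Solidity smart contract; every name and field here is an assumption):

```python
from dataclasses import dataclass

@dataclass
class LiquidityLease:
    """Sketch of the lease agreement the smart contract encodes:
    the lender keeps the Uniswap v3 position long-term, the renter
    pays an upfront lump sum and collects fees for the duration."""
    position_id: int     # Uniswap v3 position NFT id
    lender: str
    upfront_eth: float   # lump sum paid by the renter at lease start
    start_time: int      # unix timestamp
    duration: int        # seconds the renter may collect fees

    def fee_recipient(self, now: int, renter: str) -> str:
        # Fees accrue to the renter while the lease is active, then
        # revert to the lender, who has held the position throughout.
        if self.start_time <= now < self.start_time + self.duration:
            return renter
        return self.lender
```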
## How we built it
We used solidity and hardhat to develop and deploy the smart contract to the Rinkeby testnet. The frontend was done using web3.js and Angular.
## Challenges we ran into
It was very difficult to lower our gas fees. We had to condense our smart contract and optimize our backend code for memory efficiency. Debugging was difficult as well, because EVM Error messages are less than clear. In order to test our code, we had to figure out how to deploy our contracts successfully, as well as how to interface with existing contracts on the network. This proved to be very challenging.
## Accomplishments that we're proud of
We are proud that in the end after 16 hours of coding, we created a working application with a functional end-to-end full-stack renting experience. We allow users to connect their MetaMask wallet, list their assets for rent, remove unrented listings, rent assets from others, and collect fees from rented assets. To achieve this, we had to power through many bugs and unclear docs.
## What we learned
We learned that Solidity is very hard. No wonder blockchain developers are in high demand.
## What's next for UniLend
We hope to use funding from the Uniswap grants to accelerate product development and add more features in the future. These features would allow liquidity providers to swap yields from liquidity positions directly in addition to our current model of liquidity for lump-sums of ETH as well as a bidding system where listings can become auctions and lenders rent their liquidity to the highest bidder. We want to add different variable-yield assets to the renting platform. We also want to further optimize our code and increase security so that we can eventually go live on Ethereum Mainnet. We also want to map NFTs to real-world assets and enable the swapping and lending of those assets on our platform.
|
## 🪐 Inspiration 🪐
As four female developers, we wanted to create a hack that could make a noticeable difference in the world and still be enticing enough to use. We all felt drawn to the concept of helping women gain financial independence, so Plan It Girlboss was born.
Specifically, we wanted to target younger women who might be curious about starting their entrepreneurship journey. Throughout the project we’ve used ‘young people’ terms, with our most widely used term being the word ‘Girlboss’. The term ‘Girlboss’ has been an empowering term for female CEOs in the past and has recently become re-popularized with the millennial and Gen Z generations.
Our name, “Plan It Girlboss”, is a play on the words ‘planet’ and ‘plan it’; with planet/space being our theme and ‘plan it’ referencing planning your future business or financial plans.
We were heavily inspired by the following statistics:
* [Women live paycheck to paycheck five times more often than men](https://www.cnbc.com/2019/10/14/women-live-paycheck-to-paycheck-roughly-5-times-as-often-as-men.html). We want to give women the opportunity to learn how to make money for themselves and create a solid budget to help reduce this statistic.
* [In 2020, women made 80.4% of what men earned, and have NEVER earned more than 83%](https://www.fool.com/the-ascent/research/gender-pay-gap-statistics/). It's time for women to overcome the very real wage gap. We want resources to be available to women who are ready to take the step towards profitable entrepreneurship in their financial journey.
* [Companies that have women in executive leadership roles are statistically more profitable](https://hbr.org/2016/02/study-firms-with-more-women-in-the-c-suite-are-more-profitable). Women **can be** and **should be** leaders. We want to give women the opportunity to put themselves in these positions by starting their own companies.
## 🌟 What it does 🌟
Plan It Girlboss is a women-targeted website that has 3 main focuses: budgeting, connecting, and learning, specifically for young entrepreneurs. Additionally, enjoy a randomly generated quote or question on the home page to inspire you to keep progressing towards your goals.
###### **BUDGETING BABE**
With the budgeting portion of the website, you currently can select one of 3 years (2021, 2020, and 2019). Once selected, you’ll be taken to the physical budget where you can input your budget for the year and your expenses and can also remove previous expenses if needed. It also clearly demonstrates what you’ve spent out of the budget you’ve set and how much money you have remaining to spend.
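The budget page's core logic amounts to a small ledger. A minimal sketch (illustrative names, not the actual app code) might look like:

```python
class YearBudget:
    """Sketch of the budgeting page: set a yearly budget, add or
    remove expenses, and report what has been spent and what remains."""
    def __init__(self, year: int, budget: float):
        self.year = year
        self.budget = budget
        self.expenses = {}  # name -> amount

    def add_expense(self, name: str, amount: float) -> None:
        self.expenses[name] = amount

    def remove_expense(self, name: str) -> None:
        self.expenses.pop(name, None)  # removing is forgiving

    @property
    def spent(self) -> float:
        return sum(self.expenses.values())

    @property
    def remaining(self) -> float:
        return self.budget - self.spent
```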
###### **CONNECTION QUEEN**
With the connection portion of the website, you currently can input a specific city/location/business into the search bar and have the map instantly take you there. This is to help you seamlessly connect with fellow Girlbosses in your area.
###### **LEARNING LADY**
With the learning portion of the website, you currently can select one of four articles written by our development team to read to inform yourself about relevant topics that could impact your business, your day to day life, or are socially relevant. The articles are all tagged to make it easy to visually filter through what you’d like to read today so that you’re making the most out of your day and your time.
## 🌎 How we built it 🌎
This application was built with React, HTML, CSS, JavaScript using Glitch. All code obtained from public open-source materials has been cited within the project's README.
All of the art assets were created using Canva.
## 💫 Challenges we ran into 💫
* It was difficult for us to use React at first as it is a relatively new framework for all of us.
* A lot of time was spent on persisting the data while navigating between different routes.
* Because of the 36-hour time limit, we were unable to implement more complex features, such as allowing users to log in and save their budgeting summaries, or connect with actual mentors.
* Styling elements proved to be a challenge as most members are unfamiliar with CSS.
## 🌗 Accomplishments that we're proud of 🌗
* This was most of our developer’s first hackathon! We’re happy we accomplished it within the timeframe and submitted something we were proud of.
* We got the budget page to work with different years!
* We had fun and got to learn more about each other, as well as have many laughs along the way.
## 🌠 What we learned 🌠
* How to use the State Hook to change the state of components.
* How to use props to allow data persistence between different routes.
* How to install and import npm packages that allow us to program with more ease.
* How to fetch data from an API.
* How to work together in a virtual hackathon.
## 🌌 What's next for Plan It Girlboss 🌌
We are excited for the future! There are quite a few features that could be added, but the main ones would be:
* A working login page
* A month-to-month budget system
* A chatting system to chat with fellow Girlbosses
* More articles in the ‘Learn’ section
If any investors are interested in this project, there are quite a few features that could be implemented so that you can make money:
* Top-quality, paid lessons for the learn page
* Paid mentors Girlbosses can connect with
* A monthly or annually billed subscription service to access all paid features at once
Thank you for reviewing our hack. We hope you have a great rest of your day!
|
## 💡 Inspiration 💡
So many people around the world are fatally injured and require admission to multiple hospitals in order to receive life-changing surgery/procedures. When patients are transferred from one hospital to another, it is crucial for their medical information to be safely transferred as well.
## ❓ What it does ❓
Hermes is a secure, HIPAA-compliant app that allows hospital admins to transfer vital patient data to other domestic and/or international hospitals. The user inputs patient data, uploads patient files, and sends them securely to the receiving hospital.
## ⚙️ How we built it ⚙️
We built the web application with React, using JavaScript on the backend and HTML/CSS/JS on the frontend. We called the Auth0 API for authentication and the Botdoc API for encrypted file sending.
## 🚧 Challenges we ran into 🚧
Figuring out how to send encrypted files through Botdoc was challenging but also critical to our project.
## ✨Accomplishments that we're proud of ✨
We’re proud to have built a dashboard-like functionality within 24 hours.
## 👩🏻💻 What we learned 👩🏻💻
We learned that authentication on every page is critical for an app like this that would require uploaded patient information from hospital admins. Learning to use Botdoc was also fruitful when it comes to sending encrypted messages/files.
|
## Inspiration
Like every hackathon we’ve done in the past, we wanted to build a solution based on the pain points of actual, everyday people. So when we decided to pursue the Healthtech track, we called the nurses and healthcare professionals in our lives. To our surprise, they all seemed to have the same gripe – there was no centralized system for overviewing the procedures, files, and information about specific patients in a hospital or medical practice setting. Even a quick Google search showed there wasn’t any new technology really addressing this particular issue. So we created UniMed – united medical – to offer an innovative alternative to the outdated software that exists, or, for some practices, pen and paper.
While this isn’t necessarily the sexiest idea, it’s probably one of the most important issues to address for healthcare professionals. Looking over the challenge criteria, we couldn’t come up with a more fitting solution – what comes to mind immediately is the criterion about increasing practitioner efficiency. A true CMS – not client management software, but CARE management software – eliminates the need to pester patients with a barrage of questions they’ve answered a hundred times, and lets nurses and doctors leave observations and notes in a system where other care workers can view them going forward.
## What it does
From a technical, data-flow perspective, this is the gist of how UniMed works: Solace connects our React-based front end to our data store. While we would normally have built a SQL database, or gone the NoSQL route with MongoDB, due to time constraints we’re using JSON for simplicity’s sake. So while the JSON store acts much like a REST API would, we pull real-time data through Solace. Any time an event-based subscription fires – for example, a nurse updates a patient’s records to report that a post-op check-up went well and the current medication dosage should continue – the value (in this case a comment) is passed with the event, updating our React app by populating the comments section of that patient’s record with the new comment.
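The event-driven flow described above can be sketched in-process. This is a toy stand-in for Solace's pub/sub (all names are illustrative, and the real system delivers events over the network):

```python
class EventBus:
    """Minimal topic -> subscribers registry standing in for the broker."""
    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic: str, callback) -> None:
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic: str, payload: dict) -> None:
        # Deliver the payload to every subscriber of this topic.
        for callback in self._subscribers.get(topic, []):
            callback(payload)

# The UI subscribes to a patient's comment topic; when a nurse publishes
# an update, the comments list repopulates immediately.
comments = []
bus = EventBus()
bus.subscribe("patient/42/comments", lambda p: comments.append(p["comment"]))
bus.publish("patient/42/comments", {"comment": "Post-op check-up went well."})
```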
## How we built it
We all learned a lot at this hackathon – Jackson had some Python experience but learned some HTML5 to design the basic template of our log-in page. I had never used React before, but spent several hours watching youtube videos (the React workshop was also very helpful!) and Manny mentored me through some of the React app creation. Augustine is a marketing student but it turns out he has a really good eye for design, and he was super helpful in mockups and wireframes!
## What's next for UniMed
There are plenty of cool ideas we have for integrating new features - the ability to give patients a smartwatch that monitors their vital signs and pushes that bio-information to their patient "card" in real time would be super cool. It would be great to also integrate scheduling functionality so that practitioners can use our program as the ONLY program they need while they're at work - a complete hub for all of their information and duties!
|
## Inspiration 💡
**HERMES** was inspired by the urgent need to address multiple critical issues plaguing the healthcare industry, with a primary focus on improving the lives of individuals and communities. The following key factors served as driving forces:
1. **Inequity in Healthcare Access:** Across the world, disparities in healthcare access persist, with marginalized communities and underserved populations facing significant hurdles in receiving quality care. *HERMES* aims to bridge this gap by providing a universally accessible and interconnected healthcare system. By leveraging technology and digital records, it ensures that every individual, regardless of their socioeconomic background or geographic location, has access to essential health services.
2. **Fragmented Health Records:** Traditional health records are often fragmented and scattered across different healthcare providers, making it challenging for patients and doctors to access a comprehensive view of a patient's medical history. *HERMES* offers a unified platform where all medical information is consolidated as NFTs, ensuring that health records are complete, up-to-date, and easily accessible to authorized parties. This eliminates the inefficiencies and potential risks associated with incomplete records.
3. **Lack of Doctor-Patient Communication:** Effective communication between patients and healthcare providers is essential for improving health outcomes. The absence of convenient and real-time communication can lead to miscommunication, delayed diagnoses, and suboptimal care. *HERMES* provides a secure channel for seamless communication, enabling patients and doctors to connect, share information, and make informed decisions collaboratively.
4. **Data Security:** Data security and privacy in healthcare are paramount. Breaches in health data can have severe consequences for individuals, leading to identity theft and other potential risks. *HERMES* incorporates blockchain technology to establish a robust security framework, ensuring that personal health data remains confidential and immutable. This inspires trust in the system, making patients more comfortable sharing their information.
5. **Economic Barrier/Divide:** The cost of quality healthcare and medication is a significant economic barrier for many individuals. *HERMES* tackles this issue by fostering connections with pharmacies and specialists, facilitating online consultations, and enabling efficient management of health records. By lowering barriers to access and providing more cost-effective healthcare solutions, *HERMES* contributes to reducing economic disparities in healthcare.
## What it does 🏥
**HERMES** is an innovative digital healthcare platform designed to provide a wide range of services and address the complex issues in healthcare. It does the following:
* **Data Storage:** It stores various medical data types, including eye tests, X-rays, vaccines, blood tests, doctor's office notes, wellness checkup records, past appointments, prescription history, and health digital credentials as non-fungible tokens (NFTs) on the XRP Ledger. An off-chain Cockroach DB is used to efficiently manage and retrieve this data.
* **Data Security:** Data is secured on the blockchain with hash links to the Cockroach DB, ensuring fast access while prioritizing data safety.
* **General Medical Advice and Queries:** Patients can seek general medical advice and ask health-related questions. An AI system powered by OpenAI, with integration through Minds DB, assists in providing accurate and personalized responses.
* **AI Report Summarization and Recommendation:** The platform uses Vectara AI for summarizing medical reports and offering recommendations.
* **Image Disease Diagnosis:** It provides the capability to diagnose diseases from X-rays, scans, and other medical images, along with treatment recommendations.
* **Doctor-Patient Communication:** A video chat system and text communication are integrated, allowing for real-time interaction between doctors and patients. Hume AI is used for analyzing sentiment and tone in the communication.
* **Patient Services Recommendation:** The platform offers recommendations for hospitals, pharmacies, and treatment centers based on the patient's condition and needs.
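The hash-link scheme from the data storage and security bullets above can be sketched as follows. This is an illustrative stand-in (plain dicts replace the XRP Ledger and Cockroach DB; all names are assumptions): the full record lives off-chain, only its digest is anchored on-chain, so tampering with the off-chain copy is detectable:

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    # Canonical JSON so the same record always hashes identically.
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

off_chain_db = {}   # stand-in for the off-chain store (Cockroach DB)
on_chain = {}       # stand-in for digests anchored with the NFT

def store(record_id: str, record: dict) -> None:
    off_chain_db[record_id] = record
    on_chain[record_id] = record_digest(record)

def verify(record_id: str) -> bool:
    # Fast off-chain read, with on-chain digest guaranteeing integrity.
    return record_digest(off_chain_db[record_id]) == on_chain[record_id]
```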
## Tech: 🛠️
**HERMES** is built by combining several key technologies:
* XRP Ledger for NFT storage.
* Cockroach DB for off-chain database management.
* OpenAI and Minds DB for AI-powered medical advice and responses.
* Vectara AI for medical report summarization and recommendations.
* Integration of video chat and text communication systems for doctor-patient interaction.
## Challenges: 🌍
Building *HERMES* presented several challenges, including:
* Ensuring the security of medical data on the blockchain while maintaining fast accessibility.
* Developing and training AI models for accurate medical advice and diagnosis.
* Integrating various AI models and databases to work seamlessly together.
* Ensuring compliance with healthcare regulations and data privacy standards.
## Accomplishments: 🏥
We're proud of the following accomplishments:
* Successful integration of a wide range of cutting-edge technologies to create a comprehensive healthcare platform.
* Building a secure, efficient, and accessible system for storing and managing medical data.
* Developing AI systems for medical advice, report summarization, and sentiment analysis.
* Facilitating doctor-patient communication through video chat and text communication.
## Learnings: 📚
During the development of *HERMES*, we learned the following:
* The critical importance of data security in healthcare and how blockchain technology can enhance it.
* The potential of AI in transforming healthcare services and improving patient-doctor interactions.
* The challenges and complexities of integrating multiple technologies to create a unified healthcare platform.
## Next Steps: 🚀
The future of *HERMES* involves:
* Making the application integrated and connected with different features and services.
* Deployment of a private blockchain built on zero-knowledge proofs.
* Expanding the user base to include more patients, healthcare providers, and medical facilities.
* Continuous improvement of AI models to enhance accuracy and expand capabilities.
* Adhering to evolving healthcare regulations and data privacy standards.
* Enhancing the user experience with additional features and functionalities.
## Notes: 📝
* PowerPoint: <https://1drv.ms/p/s!ApHB9V1j-TOThIEu3RTnZ6RjM4BPQg?e=UMGVQQ>
* Demo(No Audio): <https://youtu.be/8BKdTKyvJEA>
* Pitch Video:<https://youtu.be/lZiH-TwsqcU>
* NFT-XRP is code to create NFTs using the XRP Ledger. Click Get Standby Account and wait until the Seed, Account and Amount fields are filled. Then input the link to the medical data as the NFT URI. Click Mint NFT, then Get NFT.
* Cockroach, MindsDB, and Vectara Connection: 
|
## Inspiration
At many public places, recycling is rarely a priority. Recyclables are disposed of incorrectly and thrown out like garbage. Even here at QHacks2017, we found lots of paper and cans in the [garbage](http://i.imgur.com/0CpEUtd.jpg).
## What it does
The Green Waste Bin is a waste bin that can sort the items it is given. The current version of the bin can categorize waste as garbage, plastics, or paper.
## How we built it
The physical parts of the waste bin are Lego, 2 stepper motors, a Raspberry Pi, and a webcam. The software of the Green Waste Bin was written entirely in Python. The web app was done in HTML and JavaScript.
## How it works
When garbage is placed on the bin, the webcam takes a picture of it. The picture is sent to Indico and labeled against a collection we trained. The Raspberry Pi then drives the stepper motors to drop the garbage in the right spot. All images taken are stored in AWS buckets and displayed on a web app, where they can be relabeled and the Indico collection retrained.
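The classify-and-route step above reduces to mapping a predicted label onto a motor position. A minimal sketch (the classifier is a stub standing in for the trained Indico collection; labels and angles are illustrative):

```python
# Map each label the collection can produce onto a stepper-motor
# position (in degrees) for the corresponding compartment.
BIN_POSITIONS = {"paper": 0, "plastics": 120, "garbage": 240}

def route_item(classify, image) -> int:
    """Classify the photo and return the motor position to drive to."""
    label = classify(image)
    # Anything the model doesn't recognise is treated as garbage.
    return BIN_POSITIONS.get(label, BIN_POSITIONS["garbage"])
```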
## Challenges we ran into
AWS was a new experience, and many mistakes were made. There were also some challenges adjusting the hardware into optimal positions.
## Accomplishments that we're proud of
* Implementing machine learning using the Indico API
* Getting AWS up and running
## What we learned
* Indico – we had never done machine learning before
* AWS
## What's next for Green Waste Bin
Bringing the project to a larger scale and handling more garbage at a time.
|
## Inspiration
Every year roughly 25% of recyclable material cannot be recycled due to contamination. We set out to reduce the amount of material needlessly sent to landfill by reducing how often people put the wrong things into recycling bins (e.g. no coffee cups).
## What it does
This project is a lid for a recycling bin that uses sensors, microcontrollers, servos, and ML/AI to determine if something should be recycled or not and physically does it.
To do this it follows the following process:
1. Waits for an object to be placed on the lid
2. Takes a picture of the object using the webcam
3. Normalizes the image with image processing
4. Sends the image to the TensorFlow model
5. The model predicts the material type and confidence ratings
6. If the material isn't recyclable, it sends a *YEET* signal; if it is, it sends a *drop* signal to the Arduino
7. The Arduino performs the motion sent to it (aka slaps it *Happy Gilmore* style or drops it)
8. The system resets and waits to run again
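Steps 5-7 of the process above can be sketched as a single decision function. This is an illustrative sketch, not the actual code; the label set and confidence threshold are assumptions:

```python
# Labels the model might emit that should end up in the recycling bin.
RECYCLABLE = {"paper", "cardboard", "glass", "metal", "plastic"}

def decide_signal(label: str, confidence: float,
                  threshold: float = 0.6) -> str:
    """Turn the model's (label, confidence) output into the signal
    sent to the Arduino."""
    if confidence >= threshold and label in RECYCLABLE:
        return "drop"   # recyclable: let it fall into the bin
    return "yeet"       # not recyclable (or model unsure): slap it away
```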
## How we built it
We used an Arduino Uno with an ultrasonic sensor to detect the proximity of an object. Once it meets the threshold, the Arduino notifies a Python script, which runs the pre-trained TensorFlow model to detect whether the object is recyclable or not. Once processing is complete, the script tells the Arduino whether to yeet or drop the object into the recycling bin.
## Challenges we ran into
A main challenge we ran into was integrating both the individual hardware and software components together, as it was difficult to send information from the Arduino to the Python scripts we wanted to run. Additionally, we debugged a lot in terms of the servo not working and many issues when working with the ML model.
## Accomplishments that we're proud of
We are proud of successfully integrating both software and hardware components together to create a whole project. Additionally, it was all of our first times experimenting with new technology such as TensorFlow/Machine Learning, and working with an Arduino.
## What we learned
* TensorFlow
* Arduino Development
* Jupyter
* Debugging
## What's next for Happy RecycleMore
Currently the model tries to predict everything in the picture which leads to inaccuracies since it detects things in the backgrounds like people's clothes which aren't recyclable causing it to yeet the object when it should drop it. To fix this we'd like to only use the object in the centre of the image in the prediction model or reorient the camera to not be able to see anything else.
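The proposed fix can be sketched as a simple center crop applied before prediction, so background objects like people's clothes are excluded from the frame. A pure-Python sketch over a row-major pixel grid (the crop fraction is an arbitrary assumption):

```python
def center_crop(image, fraction=0.5):
    """Keep only the central `fraction` of the frame in each dimension.
    `image` is a list of rows; rows are lists of pixel values."""
    h, w = len(image), len(image[0])
    ch, cw = max(1, int(h * fraction)), max(1, int(w * fraction))
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in image[top:top + ch]]
```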
|
## Inspiration
* Manual sorting of garbage is a difficult and expensive process, and the global volume of recyclable waste has increased significantly over the years.
* It is projected by the World Bank that in 2050, the world will generate 3.4 billion tons of waste per year which is a 70% increase compared to 2023.
* Our strong desire to reduce the negative impact of waste and maintain a sustainable environment in the next 100 years brought us together, brainstorming methods for automated garbage sorting to improve the recycling process.
* We shared knowledge about machine learning, deep learning, Google Cloud, and full-stack development skills and we decided to design a mobile app that can automatically quick-sort the garbage with the support of machine learning models and lead the user to the nearest recycling places on Google Maps.
## What it does
* Users can scan everyday objects with the scanner in QuickSort, which will immediately sort them into various categories with the support of the Machine Learning Model.
* The Maps feature will lead the users to the nearest garbage recycling places on Google Maps to improve the recycling process.
* With the support of Google Cloud, QuickSort will record the total amount of garbage recycled by the users in the past year/month/week and display the data on line charts and pie charts.
## How we built it
* Machine Learning: PyTorch, Deep Learning
* Hosting: Google Cloud, CockroachDB
* Frontend: React Native
* Backend: Node.js, Express.js, TypeScript, PostgreSQL
* UI Design: Figma
## Challenges we ran into
* Flask server setup to connect with MongoDB and Google Cloud SQL.
* Implementation of authentication with Google.
* Speed of classification using our models.
## Accomplishments that we're proud of
* Trained numerous models using PyTorch with over 20k images.
* Set up the Node.js server to connect with CockroachDB hosted on Google Cloud.
## What we learned
* Mobile development using React Native for device cross-compatibility.
* Learned the process of connecting to various database services (Google Cloud SQL, MongoDB, CockroachDB)
* The process of setting up a server in Flask and Express.js.
## What's next for QuickSort
* Live-feed detection for rapid detection speed to improve user experience.
* Global recycling management system to track total garbage recycled on a global scale for further research.
* Implementation of a bonus system based on the amount of garbage a user recycles to encourage them to reduce waste production everyday.
|
winning
|
## Off The Grid
Super awesome offline, peer-to-peer, real-time canvas collaboration iOS app
# Inspiration
Most people around the world will experience limited or no Internet access at times during their daily lives. We could be underground (on the subway), flying on an airplane, or simply be living in areas where Internet access is scarce and expensive. However, so much of our work and regular lives depend on being connected to the Internet. I believe that working with others should not be affected by Internet access, especially knowing that most of our smart devices are peer-to-peer Wi-Fi and Bluetooth capable. This inspired me to come up with Off The Grid, which allows up to 7 people to collaborate on a single canvas in real-time to share work and discuss ideas without needing to connect to the Internet. I believe that it would inspire more future innovations to help the vast offline population and make their lives better.
# Technology Used
Off The Grid is a Swift-based iOS application that uses Apple's Multi-Peer Connectivity Framework to allow nearby iOS devices to communicate securely with each other without requiring Internet access.
# Challenges
Integrating the Multi-Peer Connectivity Framework into our application was definitely challenging, along with managing the memory of the bitmaps and designing an easy-to-use and beautiful canvas.
# Team Members
Thanks to Sharon Lee, ShuangShuang Zhao and David Rusu for helping out with the project!
|
## Inspiration
We are a team of goofy engineers and we love making people laugh. As Western students (and a stray Waterloo engineer), we believe it's important to have a good time. We wanted to make this game to give people a reason to make funny faces more often.
## What it does
We use OpenCV to analyze webcam input and initiate signals using winks and blinks. These signals control a game that we coded using PyGame.
See it in action here: <https://youtu.be/3ye2gEP1TIc>
## How to get set up
##### Prerequisites
* Python 2.7
* A webcam
* OpenCV
1. [Clone this repository on Github](https://github.com/sarwhelan/hack-the-6ix)
2. Open command line
3. Navigate to working directory
4. Run `python maybe-a-game.py`
## How to play
**SHOW ME WHAT YOU GOT**
You are playing as Mr. Poopybutthole who is trying to tame some wild GMO pineapples. Dodge the island fruit and get the heck out of there!
##### Controls
* Wink left to move left
* Wink right to move right
* Blink to jump
**It's time to get SssSSsssSSSssshwinky!!!**
## How we built it
We used Haar cascades to detect faces and eyes. When a user's eyes disappear, we can detect a wink or blink and use this to control Mr. Poopybutthole's movements.
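The eye-disappearance trick above can be modeled as a small decision function. The actual detection would come from OpenCV's `cv2.CascadeClassifier`; this sketch only covers the mapping from "which eyes are visible" to a game signal, and the signal names are assumptions:

```python
# Illustrative wink/blink -> game-signal mapping. In the real game, the
# visibility booleans would be derived from Haar-cascade eye detections
# within the detected face region.

from typing import Optional

def signal_from_eyes(left_eye_visible: bool,
                     right_eye_visible: bool) -> Optional[str]:
    """Translate which eyes the cascade found into a game control."""
    if not left_eye_visible and not right_eye_visible:
        return "JUMP"        # both eyes gone -> blink
    if not left_eye_visible:
        return "MOVE_LEFT"   # left wink
    if not right_eye_visible:
        return "MOVE_RIGHT"  # right wink
    return None              # both eyes open -> no input
```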
## Challenges we ran into
* This was the first game any of us have ever built, and it was our first time using Pygame! Inevitably, we ran into some pretty hilarious mistakes, which you can see in the gallery.
* Merging the different pieces of code was by far the biggest challenge. Perhaps merging shorter segments more frequently could have alleviated this.
## Accomplishments that we're proud of
* We had a "pineapple breakthrough" where we realized how much more fun we could make our game by including this fun fruit.
## What we learned
* It takes a lot of thought, time and patience to make a game look half decent. We have a lot more respect for game developers now.
## What's next for ShwinkySwhink
We want to get better at recognizing movements. It would be cool to expand our game to be a stand-up dance game! We are also looking forward to making more hacky hackeronis to hack some smiles in the future.
|
## Background
Collaboration is the heart of humanity. From contributing to the rises and falls of great civilizations to helping five sleep deprived hackers communicate over 36 hours, it has become a required dependency in the [[email protected]](mailto:[email protected]):**hacker/life**.
Plowing through the weekend, we found ourselves shortchanged by the current available tools for collaborating within a cluster of devices. Every service requires:
1. *Authentication*: users must sign up and register to use a service.
2. *Contact*: people trying to share information must establish a prior point of contact (sharing e-mails, phone numbers, etc.).
3. *Unlinked Output*: shared content is not deep-linked with mobile or web applications.
This is where Grasshopper jumps in. Built as a **streamlined cross-platform contextual collaboration service**, it uses our on-prem installation of Magnet on Microsoft Azure and Google Cloud Messaging to integrate deep-linked commands and execute real-time applications between all mobile and web platforms located in a cluster near each other. It completely gets rid of the overhead of authenticating users and sharing contacts - all sharing is done locally through GPS-enabled devices.
## Use Cases
Grasshopper lets you collaborate locally between friends/colleagues. We account for sharing application information at the deepest contextual level, launching instances with accurately prepopulated information where necessary. As this data is compatible with all third-party applications, the use cases can shoot through the sky. Here are some applications that we accounted for to demonstrate the power of our platform:
1. Share a video within a team through their mobile's native YouTube app **in seconds**.
2. Instantly play said video on a bigger screen by hopping the video over to Chrome on your computer.
3. Share locations on Google maps between nearby mobile devices and computers with a single swipe.
4. Remotely hop important links while surfing your smartphone over to your computer's web browser.
5. Rick Roll your team.
## What's Next?
Sleep.
|
winning
|
## The Problem:
Inspired by our teammates' lack of knowledge in first aid administration, we address a gap in emergency medical response time for remote and underserviced locations. Currently, it takes 7 minutes for emergency medical services (EMS) to reach the scene, with the time in rural areas doubling to 14 minutes. For critical medical emergencies, such as cardiac arrest, every minute the **chances of survival decrease by 10%**, making immediate intervention necessary. This is where the most crucial element to save lives comes in: the bystander. In fact, the intervention of bystanders can triple the chances of survival in instances of cardiac arrest.
Here is where another issue arises. Only 6 out of 10 people feel comfortable even attempting to perform CPR for someone in cardiac arrest. This stems from **only 3.5%** of people in the United States being trained in first-aid procedures such as cardiac arrest, and an irrational fear the bystander will inflict further damage to the victim.
## The Solution:
***The report below follows the assumption that in the next 5-10 years VR technology will be readily adopted into everyday life, by everyone. Strides in hardware will make the technology as sleek as a pair of glasses, and as common as a smartphone.***
Our novel VR and AI-based pipeline equips bystanders with the knowledge and visual guidance to perform life-saving procedures with confidence and precision. At Treehacks, we focused on creating guidance for a situation where a bystander witnesses someone seizing, which eventually escalates into cardiac arrest. Using Good Samaritan, the device passively detects a medical emergency and guides the bystander on how to care for the victim until emergency medical services arrive on the scene in order to give the victim the best chances of survival.
We provide in-depth, real-time visuals that detail:
* Real-time data streamed from a Fitbit of the victim to monitor vitals (TerraAPI)
* Facilitate proper “log-roll technique” to move an injured person
* The correct safe orientation for someone having a seizure so they don’t choke on their saliva
* The proper supporting of the head and neck to prevent paralysis
* Instruction for proper CPR with visuals on the victim’s body for where to place hands, what pace to perform compressions at, and when to give mouth-to-mouth breaths
* A live progress bar displaying the time until emergency medical services arrive on the scene
This immersive experience ensures that, without immediate professional help, the affected individuals receive the best possible care, increasing their chances of survival and recovery.
## What we are proud of, and how we built it:
Wow, we created a novel dynamic application that has the potential to save millions of lives in the future.
We programmed computer vision-based models using MediaPipe and OpenCV to perform pose detection and joint detection. We then performed linear transformations in a 3D vector space to identify and anchor the points in the Apple Vision Pro’s virtual space using real-time video from our computer vision script built onto VisionOS.
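The "pin the z-dimension, keep x/y from CV" anchoring described above can be sketched as a small transform. The fixed anchor depth and the normalized-coordinate convention are assumptions for illustration, not the team's exact math:

```python
# Rough sketch of anchoring a 2D landmark from the computer-vision feed
# (e.g. a MediaPipe joint) into the headset's 3D virtual space. The z
# coordinate is pinned to a fixed anchor depth, matching the anchoring
# technique in the write-up; x/y are recentred and normalized.

from typing import Tuple

def anchor_landmark(u: float, v: float,
                    frame_w: int, frame_h: int,
                    anchor_depth: float = 1.0) -> Tuple[float, float, float]:
    """Map a pixel-space landmark (u, v) to a 3D point with pinned z."""
    x = (u / frame_w) - 0.5
    y = 0.5 - (v / frame_h)   # flip: image y grows downward
    return (x, y, anchor_depth)
```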
## Business Model:
$14.6 Billion Market Cap
-Work with public health departments to include the app in rural or underserved areas. The government could fund the deployment of the app as part of their mandate to improve public health infrastructure.
-Work with emergency services to integrate the app into their response protocols, providing first responders with additional information or assisting in situations where they can't reach the scene immediately.
-Apply for government grants aimed at technological innovations that improve public safety and health.
-Educational Programs: Integrate the app into educational programs, such as school safety initiatives or community health workshops, funded or supported by local or national government agencies.
## The team (4 people, 4 schools represented):
Ray- Stanford, specializes in AI and product structure
Shutaro- Columbia, specializes in immersive technologies
Shloak- UCLA, specializes in vision
Yash- Georgia Tech, specializes in medicine
## Next Steps:
This is one tangible application for Good Samaritan, though in the future we plan to have similar guiding procedures for:
-Anaphylactic shock
-Lacerations where bleeding must be managed
-Stroke
-AEDs
-Other emergency medical complications that benefit from the interference of a bystander
## Challenges we ran into:
Our expertise lay in Unity; however, Apple Vision Pro was only accessible with Unity Pro ($2,000), so we pivoted to and learned Swift. We ran into errors while translating CV’s 2D data into a 3D environment; we made use of anchoring techniques to pin the z-dimension while using the xy-dimension from CV.
## What we learned:
A ton! Applying CV’s 2D data into a 3D space, programming VR and AR on the Apple Vision Pro, and using Swift UI to develop VisionOS applications! Our pipeline is built ground up and novel—we figured it out along the way with little documentation to lean on!
|
# **MedKnight**
#### Professional medical care in seconds, when the seconds matter
## Inspiration
Natural disasters often put emergency medical responders (EMTs, paramedics, combat medics, etc.) in positions where they must assume responsibilities beyond the scope of their day-to-day job. Inspired by this reality, we created MedKnight, an AR solution designed to empower first responders. By leveraging cutting-edge computer vision and AR technology, MedKnight bridges the gap in medical expertise, providing first responders with life-saving guidance when every second counts.
## What it does
MedKnight helps first responders perform critical, time-sensitive medical procedures on the scene by offering personalized, step-by-step assistance. The system ensures that even "out-of-scope" operations can be executed with greater confidence. MedKnight also integrates safety protocols to warn users if they deviate from the correct procedure and includes a streamlined dashboard that streams the responder’s field of view (FOV) to offsite medical professionals for additional support and oversight.
## How we built it
We built MedKnight using a combination of AR and AI technologies to create a seamless, real-time assistant:
* **Meta Quest 3**: Provides live video feed from the first responder’s FOV using a Meta SDK within Unity for an integrated environment.
* **OpenAI (GPT models)**: Handles real-time response generation, offering dynamic, contextual assistance throughout procedures.
* **Dall-E**: Generates visual references and instructions to guide first responders through complex tasks.
* **Deepgram**: Enables speech-to-text and text-to-speech conversion, creating an emotional and human-like interaction with the user during critical moments.
* **Fetch.ai**: Manages our system with LLM-based agents, facilitating task automation and improving system performance through iterative feedback.
* **Flask (Python)**: Manages the backend, connecting all systems with a custom-built API.
* **SingleStore**: Powers our database for efficient and scalable data storage.
## SingleStore
We used SingleStore as our database solution for efficient storage and retrieval of critical information. It allowed us to store chat logs between the user and the assistant, as well as performance logs that analyzed the user’s actions and determined whether they were about to deviate from the medical procedure. This data was then used to render the medical dashboard, providing real-time insights, and for internal API logic to ensure smooth interactions within our system.
## Fetch.ai
Fetch.ai provided the framework that powered the agents driving our entire system design. With Fetch.ai, we developed an agent capable of dynamically responding to any situation the user presented. Their technology allowed us to easily integrate robust endpoints and REST APIs for seamless server interaction. One of the most valuable aspects of Fetch.ai was its ability to let us create and test performance-driven agents. We built two types of agents: one that automatically followed the entire procedure and another that responded based on manual input from the user. The flexibility of Fetch.ai’s framework enabled us to continuously refine and improve our agents with ease.
## Deepgram
Deepgram gave us powerful, easy-to-use functionality for both text-to-speech and speech-to-text conversion. Their API was extremely user-friendly, and we were even able to integrate the speech-to-text feature directly into our Unity application. It was a smooth and efficient experience, allowing us to incorporate new, cutting-edge speech technologies that enhanced user interaction and made the process more intuitive.
## Challenges we ran into
One major challenge was the limitation on accessing AR video streams from Meta devices due to privacy restrictions. To work around this, we used an external phone camera attached to the headset to capture the field of view. We also encountered microphone rendering issues, where data could be picked up in sandbox modes but not in the actual Virtual Development Environment, leading us to scale back our Meta integration. Additionally, managing REST API endpoints within Fetch.ai posed difficulties that we overcame through testing, and configuring SingleStore's firewall settings was tricky but eventually resolved. Despite these obstacles, we showcased our solutions as proof of concept.
## Accomplishments that we're proud of
We’re proud of integrating multiple technologies into a cohesive solution that can genuinely assist first responders in life-or-death situations. Our use of cutting-edge AR, AI, and speech technologies allows MedKnight to provide real-time support while maintaining accuracy and safety. Successfully creating a prototype despite the hardware and API challenges was a significant achievement for the team, and it was a grind till the last minute. We are also proud of developing an AR product, as our team had never worked with AR/VR before.
## What we learned
Throughout this project, we learned how to efficiently combine multiple AI and AR technologies into a single, scalable solution. We also gained valuable insights into handling privacy restrictions and hardware limitations. Additionally, we learned the importance of testing and refining agent-based systems using Fetch.ai to create robust and responsive automation. Our greatest takeaway, however, was learning to manage such a robust backend with many internal API calls.
## What's next for MedKnight
Our next step is to expand MedKnight’s VR environment to include detailed 3D renderings of procedures, allowing users to actively visualize each step. We also plan to extend MedKnight’s capabilities to cover more medical applications and eventually explore other domains, such as cooking or automotive repair, where real-time procedural guidance can be similarly impactful.
|
## Inspiration
Last year we did a project with our university looking to optimize the implementation of renewable energy sources for residential homes. Specifically, we determined the best designs for home turbines given different environments. In this project, we decided to take this idea of optimizing the implementation of home power further.
## What it does
A web application allows users to enter an address and determine if installing a backyard wind turbine or solar panel is more profitable/productive for their location.
## How we built it
Using an HTML front-end we send the user's address to a python flask back end where we use a combination of external APIs, web scraping, researched equations, and our own logic and math to predict how the selected piece of technology will perform.
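The kind of power-output math the back end relies on can be sketched with standard textbook forms. The equations below (swept-area wind power and daily solar energy) and their coefficients are common reference formulas, not necessarily the exact ones the team researched:

```python
# Hedged sketch of per-location power estimates. Efficiency values and
# air density are illustrative defaults, not the project's actual numbers.

import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level (assumed)

def wind_power_watts(rotor_diameter_m: float, wind_speed_ms: float,
                     efficiency: float = 0.35) -> float:
    """P = 0.5 * rho * A * v^3 * Cp, with Cp kept below the Betz limit."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * efficiency

def solar_energy_wh(panel_area_m2: float, irradiance_w_m2: float,
                    sun_hours: float, efficiency: float = 0.20) -> float:
    """Daily energy = area * irradiance * sun-hours * panel efficiency."""
    return panel_area_m2 * irradiance_w_m2 * sun_hours * efficiency
```

Comparing the two estimates for a given address is then a matter of feeding in the scraped wind-speed and sunlight-duration data.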
## Challenges we ran into
We were hoping to use Google's Earth Engine to gather climate data, but were never approved for the $25 credit, so we had to find alternatives. There aren't a lot of good options for gathering the necessary solar and wind data, so we had to use a combination of APIs and web scraping, which ended up being a bit more convoluted than we hoped. Also, integrating the back-end with the front-end was very difficult because we don't have much experience with full-stack development working end to end.
## Accomplishments that we're proud of
We spent a lot of time coming up with the idea for EcoEnergy, and we really think it has potential. Home renewable energy sources are quite an investment, so having a tool like this really highlights the benefits and should incentivize people to buy them. We also think it's a great way to popularize at-home wind turbine systems by directly comparing them to the output of a solar panel, because depending on the location they can be a better investment.
## What we learned
During this project we learned how to predict the power output of solar panels and wind turbines based on windspeed and sunlight duration. We learned how to combine a back-end built in python to a front-end built in HTML using flask. We learned even more random stuff about optimizing wind turbine placement so we could recommend different turbines depending on location.
## What's next for EcoEnergy
The next step for EcoEnergy would be to improve the integration between the front and back end. As well as find ways to gather more location based climate data which would allow EcoEnergy to predict power generation with greater accuracy.
|
partial
|
## Inspiration
The classroom experience has *drastically* changed over the years. Today, most students and professors prefer to conduct their course organization and lecture notes electronically. Although there are applications that enable a connected classroom, none of them are centered around measuring students' understanding during lectures.
The inspiration behind Enrich was driven by the need to create a user-friendly platform that expands the possibilities of electronic in-class course lectures: for both the students and the professors. We wanted to create a way for professors to better understand the student's viewpoint, recognize when their students need further help with a concept, and lead a lecture that would best provide value to students.
## What it does
Enrich is an interactive course organization platform. The essential idea of the app is that professor can create "classrooms" to which students can add themselves using a unique key provided by the professor. The professor has the ability to create multiple such classrooms for any class that he/she teaches. For each classroom, we provide a wide suite of services to enable a productive lecture.
An important feature in our app is a "learning ratio" statistic, which lets the professor know how well he/she is teaching the topics. As the teacher is going through the material, students can anonymously give real-time feedback on how they are responding to the lecture. The aggregation of this data is used to determine a color gradient from red (the lecture is going poorly) to green (the lecture is very clear and understandable). This allows the teacher to slow down upon recognizing that students are getting lost.
We also have a speech-to-text translation service that transcribes the lecture as it is going, providing students with the ability to read what the teacher is saying. This not only provides accessibility to those who can't hear, but also allows students to go back over what the teacher has said in the lecture.
Lastly, we have a messaging service that connects the students to Teaching Assistants during the lecture. This allows them to ask questions to clarify their understanding without disrupting the class.
## How we built it
Our platform consists of two sides: Learners and Educators. We used React.js as the front-end for both the Learner side and the Educator side of our application. The whole project revolves around an effectively organized Firebase Realtime Database, which stores the hierarchy of professor-class-student relationships. The React components interface with Firebase to update students as they enter and leave a classroom. We also used Pusher to develop the chat service in the classrooms.
For the speech-to-text detection, we used the Google Speech-to-Text API to detect speech from the Educator's computer, transcribe this, and update the Firebase RealTime database with the transcript. The web application then updates user-facing site with the transcript.
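The transcript flow above (transcribe, then write the result under the classroom's node) can be sketched as a payload builder. The node path and payload shape are assumptions; the real app uses the Google Speech-to-Text and Firebase SDKs for the transcription and the write itself:

```python
# Minimal sketch of shaping the transcript update pushed to the realtime
# database. Chunks arrive from the speech-to-text service; the web app
# would re-render whenever this node changes.

import time
from typing import List

def build_transcript_update(class_id: str, chunks: List[str]) -> dict:
    """Shape the payload written under the classroom's transcript node."""
    return {
        f"classrooms/{class_id}/transcript": {
            "text": " ".join(chunks),
            "updated_at": int(time.time()),
        }
    }
```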
## Challenges we ran into
Figuring out the best design for the Firebase database was challenging: the structure is quite intricate, and we wanted a seamless way to organize classes, students, their responses, and recordings. The speech-to-text transcription was also very challenging. We worked through various APIs for the service before finally settling on the Google Speech-to-Text API. Once we got the transcription service to work, it was hard to integrate it into the web application.
## Accomplishments that we're proud of
We were proud of getting the speech-to-text transcription service to work, as it took a while to connect to the API, get the transcription, and then transfer that over to our web application.
## What we learned
Despite using React for previous projects, we utilized new ways of state management through Redux that made things much simpler than before. We have also learned to integrate different services within our React application, such as the Chatbox in our application.
## What's next for Enrich - an education platform to increase collaboration
The great thing about Enrich is that it has a massive scope to expand! We had so many ideas to implement, but only such little time. We could have added a camera that tracks the expressions of students to analyze how they are reacting to lectures. This would have been a hands-off approach to getting feedback. We could also have added a progress bar for how far the lecture is going, a screen-sharing capability, and interactive whiteboard.
|
## Inspiration
The inspiration for this project stems from the well-established effectiveness of focusing on one task at a time, as opposed to multitasking. In today's era of online learning, students often find themselves navigating through various sources like lectures, articles, and notes, all while striving to absorb information effectively. Juggling these resources can lead to inefficiency, reduced retention, and increased distraction. To address this challenge, our platform consolidates these diverse learning materials into one accessible space.
## What it does
A seamless learning experience where you can upload and read PDFs while having instant access to a chatbot for quick clarifications, a built-in YouTube player for supplementary explanations, and a Google Search integration for in-depth research, all in one platform. But that's not all - with a click of a button, effortlessly create and sync notes to your Notion account for organized, accessible study materials. It's designed to be the ultimate tool for efficient, personalized learning.
## How we built it
Our project is a culmination of diverse programming languages and frameworks. We employed HTML, CSS, and JavaScript for the frontend, while leveraging Node.js for the backend. Python played a pivotal role in extracting data from PDFs. In addition, we integrated APIs from Google, YouTube, Notion, and ChatGPT, weaving together a dynamic and comprehensive learning platform
## Challenges we ran into
None of us were experienced with frontend frameworks. It took a lot of time to align various divs, and we also struggled with handling data (from fetching it to displaying it on the frontend). In addition, our fourth teammate couldn't be present, so we were left with the challenge of working as a three-person team.
## Accomplishments that we're proud of
We take immense pride in not only completing this project, but also in realizing the results we envisioned from the outset. Despite limited frontend experience, we've managed to create a user-friendly interface that integrates all features successfully.
## What we learned
We gained valuable experience in full-stack web app development, along with honing our skills in collaborative teamwork. We learned a lot about using APIs. A lot of prompt engineering was also required to get the desired output from the ChatGPT API.
## What's next for Study Flash
In the future, we envision expanding our platform by incorporating additional supplementary resources, with a laser focus on a specific subject matter.
|
## Inspiration
Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades.
## What it does
Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher’s view and the student’s view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher’s view contains a list of all the students’ canvases while the students can only view the teacher’s canvas in addition to their own.
An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling.
Students can follow along and when they want the teacher’s attention, click on the I’m Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups and those students can share a whiteboard together to collaborate.
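The visibility rule above (the teacher sees every canvas; a student sees the teacher's canvas and their own, plus a shared group board if assigned) amounts to a routing decision for each stroke event. This sketch models that rule only; the names and event shape are illustrative, and the real app delivers the events over Socket.IO:

```python
# Sketch of stroke routing for the whiteboard. group_of maps a student id
# to a group id for students sharing a group whiteboard (assumed schema).

from typing import Dict, List, Optional

def stroke_recipients(author: str, teacher: str, students: List[str],
                      group_of: Optional[Dict[str, int]] = None) -> List[str]:
    """Return who should receive a stroke drawn by `author`."""
    group_of = group_of or {}
    if author == teacher:
        return students + [teacher]      # teacher's board is visible to all
    recipients = {author, teacher}       # student's own board + teacher view
    author_group = group_of.get(author)
    if author_group is not None:
        for peer, grp in group_of.items():
            if grp == author_group:
                recipients.add(peer)     # shared group whiteboard
    return sorted(recipients)
```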
## How we built it
* **Backend:** We used Socket.IO to handle the real-time update of the whiteboard. We also have a Firebase database to store the user accounts and details.
* **Frontend:** We used React to create the application and Socket.IO to connect it to the backend.
* **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com.
## Challenges we ran into
Understanding and planning an architecture for the application was a challenge. We went back and forth about whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining their functionality was also an issue we faced.
## Accomplishments that we're proud of
We were successfully able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO, and we were able to use it successfully in our project.
## What we learned
This was the first time we used Socket.IO to handle realtime database connections. We also learned how to create mouse strokes on a canvas in React.
## What's next for Lecturely
This product can be useful even past digital schooling, as it can save schools money on supplies, so it could benefit from building out more features.
Currently, Lecturely doesn't support audio, but it is on our roadmap. Until then, classes would still need to run another piece of software alongside it to handle audio communication.
|
losing
|
# Inspiration
Meet one of our teammates, Lainey! Over the past three years, she has spent over 2,000 hours volunteering with youth who attend under-resourced schools in Washington state. During the sudden onset of the pandemic, the rapid school closures ended the state’s Free and Reduced Lunch program for thousands of children across the state, pushing the burden of purchasing healthy foods onto parents. It became apparent that many families she worked with heavily relied on government-provided benefits such as SNAP (Supplemental Nutrition Assistance Program) to purchase the bare necessities. Research shows that SNAP is associated with alleviating food insecurity. Receiving SNAP in early life can lead to improved outcomes in adulthood. Low-income families under SNAP are provided with an EBT (Electronic Benefit Transfer) card and are able to load a monthly balance and use it like a debit card to purchase food and other daily essentials.
However, the EBT system still has its limitations: to qualify to accept food stamps, stores must sell food in each of the staple food categories. Oftentimes, the only stores that possess the quantities of scale to achieve this are a small set of large chain grocery stores, which lack diverse healthy food options in favor of highly-processed goods. Not only does this hurt consumers with limited healthy options, it also prevents small, local producers from selling their ethically and sustainably sourced produce to those most in need. Studies have repeatedly shown a direct link between sustainable food production and food health quality.
The primary grocery sellers who have the means and scale to qualify to accept food stamps are large chain grocery stores, which often have varying qualities of produce (correlated with income in that area) that pale in comparison to output from smaller farms. Additionally, grocery stores often supplement their fresh food options with a large selection of cheaper, highly-processed items that are high in sodium, cholesterol, and sugar. On average, unhealthy foods are about $1.50 cheaper per day than healthy foods, making it both less expensive and less effort to choose those options. Studies have shown that lower income individuals “consume fewer fruits and vegetables, more sugar-sweetened beverages, and have lower overall diet quality”. This leads to deteriorated health, inadequate nutrition, and elevated risk for disease. In addition, grocery stores with healthier, higher quality products are often concentrated in wealthy areas and target a higher income group, making distance another barrier to entry when it comes to getting better quality foods.
Meanwhile, small, local farmers and stores are unable to accept food stamp payments. Along with being higher quality and supporting the community, buying local foods is also better for the environment. Local foods travel a shorter distance, and the structure of events like farmers markets takes away a customer’s dependency on harmful monocrop farming techniques. However, these benefits come with their own barriers as well. While farmers markets accept SNAP benefits, they (and similar events) aren’t as widespread: there are only 8,600 markets registered in the USDA directory, compared to the over 62,000 grocery stores that exist in the USA. And the higher quality foods have their own reputation of higher prices.
Locl works to alleviate these challenges, offering a platform that supports EBT card purchases to allow SNAP benefit users to purchase healthy food options from local markets.
# What does Locl do?
Locl works to bridge the gap between EBT cardholders and fresh homegrown produce. Namely, it offers a platform where multiple local producers can list their produce online for shoppers to purchase with their EBT card. It works like a virtual farmers market, combining the quality of small farms with the ease and reach of online shopping. It makes it easier for a consumer to buy better quality foods with their EBT card, while also allowing a greater range of farms and businesses to accept these benefits. This provides a convenient and accessible way for EBT cardholders to access healthy meals, while also promoting better eating habits and supporting local markets and farmers.
When designing our product, some of our top concerns were the technological barrier of entry for consumers and ensuring an ethical and sustainable approach to listing produce online. To use Locl, users are required to have an electronic device and internet connection, ultimately limiting access within our target audience. Beyond this, we recognized that certain produce items or markets could be displayed disproportionately in comparison to others, which could create imbalances and inequities between the stakeholders involved. We aim to address this issue by crafting a refined algorithm that balances how often a product appears in search results based on how many similar products are listed.
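That balancing algorithm is still future work; as an illustrative sketch only (the field names and the inverse-frequency weighting are our assumptions, not a shipped design), a listing's search exposure could be weighted inversely to how many similar listings exist:

```python
from collections import Counter

def balanced_ranking(listings):
    """Rank listings so over-represented categories don't crowd out
    smaller vendors' items.

    Each listing is a dict with "name" and "category" keys. A listing's
    weight is inversely proportional to how many listings share its
    category, so rarer products surface first. This is a hypothetical
    sketch of the balancing idea, not Locl's actual algorithm.
    """
    counts = Counter(item["category"] for item in listings)
    return sorted(listings,
                  key=lambda item: 1.0 / counts[item["category"]],
                  reverse=True)
```

A real version would also need to rotate exposure over time so equally-weighted listings each get a turn at the top.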
# Key Features
## EBT Support
Shoppers can convert their EBT balance into Locl credits. From there, they can spend their credits buying produce from our set of carefully curated suppliers. To prevent fraud, each vendor is carefully evaluated to ensure they sell ethically sourced produce. Thus, shoppers can only spend their Locl credits on produce, adhering to government regulation on SNAP benefits.
## Bank-less payment
Because low-income shoppers may not have access to a bank account, we've used Checkbook.io's virtual credit cards and direct deposit to facilitate payments between shoppers and vendors.
## Producer accessibility
By listing multiple vendors on one platform, Locl is able to circumvent the initial problems of scale. Rather than each vendor being its own store, we consolidate them all into one large store, thereby increasing accessibility for consumers to purchase products from smaller vendors.
## Recognizable marketplace
To improve ease of use, Locl's interface is carefully crafted to emulate other popular marketplace applications such as Facebook Marketplace and Craigslist. Because shoppers will already be accustomed to this kind of layout, the overall user experience is far smoother.
# How we built it
Locl revolves around a web app interface to allow shoppers and vendors to buy and sell produce.
## Flask
Locl centers on our Flask server. From there, we use the requests library and Flask's render_template() to populate our website through GET and POST requests.
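A minimal sketch of that request/render loop (the route name, product fields, and inline template are illustrative, not our actual ones; the real app renders template files rather than an inline string):

```python
from flask import Flask, render_template_string, request

app = Flask(__name__)

# In-memory stand-in for our Supabase-backed product table.
PRODUCTS = [{"name": "heirloom tomatoes", "price": "3.50"}]

# Inline template so the sketch is self-contained.
TEMPLATE = """<ul>
{% for p in products %}<li>{{ p.name }}: ${{ p.price }}</li>
{% endfor %}</ul>"""

@app.route("/market", methods=["GET", "POST"])
def market():
    # A POST adds a vendor listing; both methods re-render the page.
    if request.method == "POST":
        PRODUCTS.append({"name": request.form["name"],
                         "price": request.form["price"]})
    return render_template_string(TEMPLATE, products=PRODUCTS)
```

Flask's built-in test client (`app.test_client()`) makes this loop easy to exercise without running a server.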
## Supabase
We use Supabase and PostgreSQL to store our product, market, virtual credit card, and user information. Because Flask is a Python library, we use Supabase's community managed Python library to insert and update data.
## Checkbook.io
We use Checkbook.io's Payfac API to create transactions between shoppers and vendors. When people create an account on Locl, they are automatically added as a user in Checkbook with the `POST /v3/user` endpoint. Meanwhile, to onboard both local farmers and shoppers painlessly, we offer a bankless solution with Checkbook’s virtual credit card using the `POST /v3/account/vcc` endpoint.
First, shoppers deposit credits into their Locl account from the EBT card. The EBT funds are later redeemed with the state government by Locl. Whenever a user buys an item, we use the `POST /v3/check/digital` endpoint to create a transaction between them and the stores to pay for the goods. From there, vendors can also spend their funds as if it were a prepaid debit card. By using Checkbook’s API, we’re able to break down the financial barrier of having a bank account for low-income shoppers to buy fresh produce from local suppliers, when they otherwise wouldn’t have been able to.
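The endpoint paths above are the ones we used from Checkbook's API; the payload field names and base URL in this sketch, however, are our best-effort assumptions and should be checked against Checkbook's current reference before use:

```python
def digital_check_payload(recipient_email, vendor_name, amount):
    """Build the JSON body for a POST /v3/check/digital call.

    Field names here ("recipient", "name", "amount") are assumptions
    for illustration, not confirmed against Checkbook's live docs.
    """
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {
        "recipient": recipient_email,
        "name": vendor_name,
        "amount": round(amount, 2),
    }

# The actual call requires real credentials; shown for shape only:
# import requests
# requests.post("https://api.checkbook.io/v3/check/digital",
#               json=digital_check_payload("farm@example.com",
#                                          "Green Acres Farm", 12.50),
#               headers={"Authorization": "API_KEY:API_SECRET"})
```

Keeping the payload builder pure like this lets the transaction logic be unit-tested without touching the payment network.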
# Challenges we encountered
Because we were all new to these APIs, we were initially unclear about what actions they could support. For example, we wanted to use the You.com API to build our marketplace. However, it soon became apparent that we couldn't embed their API into our static HTML page as we'd assumed, so we pivoted to creating our own cards with Jinja.
# Looking forward
In the future, we hope to advance our API services to provide a wider breadth of services which would include more than just produce from local farmers markets. Given a longer timeframe, a few features we'd like to implement include:
* a search and filtering system to show shoppers their preferred goods.
* an automated redemption system with the state government for EBT.
* improved security and encryption for all API calls and database queries.
# Ethics
SNAP (Supplemental Nutrition Assistance Program), otherwise known as food stamps, is a government program that aids low-income families and individuals to purchase food. The inaccessibility of healthy foods is a pressing problem because there is a small number of grocery stores that accept food stamps, which are often limited to large, chain grocery stores that are not always accessible. Beyond this, these grocery stores often lack healthy food options in favor of highly-processed goods.
When doing further research into this issue, we were fortunate to have a team member who has knowledge about SNAP benefits through firsthand experience in classroom settings and at food banks. Through this, we learned about EBT (Electronic Benefit Transfer) cards, as well as their limitations. The only stores that can support EBT payments must offer a selection for each of the staple food categories, which prevents local markets and farmers from accepting food stamps as payment.
To tackle this issue of the limited accessibility of healthy foods for SNAP benefit users, we came up with Locl, an online platform that allows local markets and farmers to list fresh produce for EBT cardholders to purchase with food stamps. When creating Locl, we adhered to our goal of connecting food stamp users with healthy, ethically sourced foods in a sustainable manner. However, there are still many ethical challenges that must be explored further.
First, to use Locl, users would require a portable electronic device and an internet connection due to it being an online platform. The Pew Research center states that 29% of adults with incomes below $30,000/year do not have access to a smartphone and 44% do not have portable internet access. This would greatly lessen the range of individuals that we aim to serve.
Second, though Locl aims to serve SNAP beneficiaries, we also hope to aid local markets and farmers by increasing the number of potential customers. However, Locl runs the risk of displaying certain produce items or marketplaces disproportionately in comparison to others, which could create imbalances and inequities between all stakeholders involved. Furthermore, this display imbalance could limit user knowledge about certain marketplaces.
Third, Locl aims to increase ethical consumerism by connecting its users with sustainable markets and farmers. However, there arises the issue of selecting which markets and farmers to support on our platform. While considering baselines that marketplaces must meet to be displayed on Locl, we recognized that sustainability can be measured through a wide number of factors (labor, resources used, pollution levels), and began wondering whether to prioritize the sustainability of the items we market or the health of our users. One example of this is meat, a popular food product known for its health benefits but also for its high water consumption and greenhouse gas emissions. Narrowing these baselines down could greatly limit the display of certain products.
Fourth, Locl does not have an option for users to filter the results that are displayed to them. Many EBT cardholders say that they do not use their benefits to make online purchases due to the difficulty of finding items on online store pages that qualify for their benefits as well as their dietary needs. Thus, our lack of a filter option would cause certain users to have increased difficulty in finding food options for themselves.
Our next step for Locl is to address the ethical concerns above, as well as explore ways to make it more accessible and well-known. However, there are still many components to consider from a sociotechnical lens. Currently, only 4% of SNAP beneficiaries make online purchases with their EBT cards. This small percentage may stem from reasons that range from lack of internet access, to not being aware that online options are available. We hope that with Locl, food stamp users will have increased access to healthy food options and local markets and farmers will have an increased customer-base.
# References
<https://ajph.aphapublications.org/doi/full/10.2105/AJPH.2019.305325>
<https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-019-6546-2>
<https://www.ibisworld.com/industry-statistics/number-of-businesses/supermarkets-grocery-stores-united-states/>
<https://www.masslive.com/food/2022/01/these-are-the-top-10-unhealthiest-grocery-items-you-can-buy-in-the-united-states-according-to-moneywise.html>
<https://farmersmarketcoalition.org/education/qanda/>
<https://news.climate.columbia.edu/2019/08/09/farmers-market-week-2019/>
|
## Inspiration
At reFresh, we are a group of students looking to revolutionize the way we cook and use our ingredients so they don't go to waste. Today, America faces a problem of food waste, which contributes to the acceleration of global warming as more produce is needed to maintain the same levels of demand. In a startling report from the Atlantic, "the average value of discarded produce is nearly $1,600 annually" for an American family of four. In terms of Double-Doubles from In-n-Out, that comes to around 400 burgers. At reFresh, we believe this level of waste is unacceptable in modern society: imagine every family in America throwing away 400 perfectly fine burgers. We hope that our product can help reduce food waste and help the environment.
## What It Does
reFresh offers users the ability to input ingredients they have lying around and find recipes that use those ingredients, making sure nothing goes to waste! Then, from the ingredients left over after a recipe we suggested, more recipes utilizing those same ingredients are suggested, so you get the most usage possible. Users can build weekly meal plans from our recipes, and we also offer a way to search for specific recipes. Finally, we provide an easy way to view how much of each ingredient you need and what those ingredients cost.
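The leftover-driven suggestion step can be sketched in a few lines (the data shapes here, sets of ingredient names, are a simplification of what the Spoonacular API actually returns):

```python
def leftover_suggestions(pantry, chosen_recipe, all_recipes, top_n=3):
    """Suggest follow-up recipes that reuse what's left in the pantry.

    pantry and each recipe's "ingredients" value are sets of ingredient
    names. After "cooking" the chosen recipe, the remaining recipes are
    ranked by how many leftover ingredients they would use up.
    """
    leftovers = pantry - chosen_recipe["ingredients"]
    candidates = [r for r in all_recipes if r is not chosen_recipe]
    ranked = sorted(candidates,
                    key=lambda r: len(r["ingredients"] & leftovers),
                    reverse=True)
    return ranked[:top_n]
```

In the real app the ranking would also weight ingredient quantities and expiry, not just the count of overlapping items.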
## How We Built It
To make our idea come to life, we utilized the Flask framework to create our web application that allows users to use our application easily and smoothly. In addition, we utilized a Walmart Store API to retrieve various ingredient information such as prices, and a Spoonacular API to retrieve recipe information such as ingredients needed. All the data is then backed by SQLAlchemy to store ingredient, recipe, and meal data.
## Challenges We Ran Into
Throughout the process, we ran into various challenges that helped us grow as a team. In a broad sense, some of us struggled with learning a new framework in such a short period of time and using that framework to build something. We also had issues with communication and ensuring that the features we wanted implemented were made clear. There were times that we implemented things that could have been better done if we had better communication. In terms of technical challenges, it definitely proved to be a challenge to parse product information from Walmart, to use the SQLAlchemy database to store various product information, and to utilize Flask's framework to continuously update the database every time we added a new recipe.
However, these challenges definitely taught us a lot of things, ranging from a better understanding to programming languages, to learning how to work and communicate better in a team.
## Accomplishments That We're Proud Of
Together, we are definitely proud of what we have created. Highlights of this project include the implementation of a SQLAlchemy database, a pleasing and easy-to-read splash page complete with an infographic, and being able to make two different APIs feed off of each other to provide users with a new experience.
## What We Learned
This was all of our first hackathon, and needless to say, we learned a lot. As we tested our physical and mental limits, we familiarized ourselves with web development, became more comfortable with stitching together multiple platforms to create a product, and gained a better understanding of what it means to collaborate and communicate effectively in a team. Members of our team gained more knowledge in databases, UI/UX work, and popular frameworks like Bootstrap and Flask. We also definitely learned the value of concise communication.
## What's Next for reFresh
There are a number of features that we would like to implement going forward. Possible avenues of improvement would include:
* User accounts to allow ingredients and plans to be saved and shared
* Improvement in our search to fetch more mainstream and relevant recipes
* Simplification of ingredient selection page by combining ingredients and meals in one centralized page
|
## Inspiration
We saw people struggling to open the door to the hacking room.
## What it does
We used a chair to prop open the door.
## How I built it
We put a chair behind the door to keep it from closing.
## Challenges I ran into
Once, I tried to jump over the chair to enter the room and hit my head on the doorframe.
## Accomplishments that I'm proud of
Used by the 10-20+ hackers in our hacking room. Integration with existing room-entry systems was seamless.
## What I learned
The unexamined life is not worth living.
## What's next for Door Hack
We hope to open more doors for others in the future. Looking to pitch to potential investors soon.
|
partial
|
## Inspiration
A brief recap of the inspiration for Presentalk 1.0: We wanted to make it easier to navigate presentations. Handheld clickers are useful for going to the next and last slide, but they are unable to skip to specific slides in the presentation. Also, we wanted to make it easier to pull up additional information like maps, charts, and pictures during a presentation without breaking the visual continuity of the presentation. To do that, we added the ability to search for and pull up images using voice commands, without leaving the presentation.
Last year, we finished our prototype, but it was a very hacky and unclean implementation of Presentalk. After the positive feedback we heard after the event, despite our code's problems, we resolved to come back this year to make the product something we could actually host online and let everyone use.
## What it does
Presentalk solves this problem with voice commands that allow you to move forward and back, skip to specific slides and keywords, and go to specific images in your presentation using image recognition. Presentalk recognizes voice commands, including:
* Next Slide
+ Goes to the next slide
* Last Slide
+ Goes to the previous slide
* Go to Slide 3
+ Goes to the 3rd slide
* Go to the slide with the dog
+ Uses Google Cloud Vision to parse each slide's images, and will take you to the slide it thinks has a dog in it.
* Go to the slide titled APIs
+ Goes to the first slide with APIs in its title
* Search for "voice recognition"
+ Parses the text of each slide for a matching phrase and goes to that slide.
* Show me a picture of UC Berkeley
+ Uses Bing image search to find the first image result of UC Berkeley
* Zoom in on the Graph
+ Uses Google Cloud Vision to identify an object, and if it matches the query, zooms in on the object.
* Tell me the product of 857 and 458
+ Uses Wolfram Alpha's Short Answer API to answer computation and knowledge based questions
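Once an utterance is transcribed, routing it to the right action is ordinary pattern matching. A simplified sketch of such a command router (the patterns cover only a few of the commands above, and the action names are illustrative, not Presentalk's internals):

```python
import re

# Ordered (pattern, handler) pairs; the first match wins.
COMMANDS = [
    (re.compile(r"^next slide$"), lambda m: ("goto_relative", 1)),
    (re.compile(r"^last slide$"), lambda m: ("goto_relative", -1)),
    (re.compile(r"^go to slide (\d+)$"),
     lambda m: ("goto_absolute", int(m.group(1)))),
    (re.compile(r"^go to the slide titled (.+)$"),
     lambda m: ("goto_title", m.group(1))),
    (re.compile(r"^search for (.+)$"), lambda m: ("search_text", m.group(1))),
    (re.compile(r"^show me a picture of (.+)$"),
     lambda m: ("image_search", m.group(1))),
]

def parse_command(transcript):
    """Map a transcribed utterance to an (action, argument) pair."""
    text = transcript.strip().lower()
    for pattern, handler in COMMANDS:
        match = pattern.match(text)
        if match:
            return handler(match)
    return ("unknown", transcript)
```

The commands backed by Cloud Vision or Bing would dispatch from here into their respective API calls rather than returning immediately.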
Video: <https://vimeo.com/chanan/calhacks3>
## How we built it
* Built a backend in Python linked to our voice recognition, on top of which we built all of our other features.
## Challenges we ran into
* Accepting microphone input through Google Chrome (people can have different security settings)
* Refactoring the entire messy, undocumented codebase from last year
## Accomplishments that we're proud of
Getting Presentalk from weekend pet project to something that could actually scale with many users on a server in yet another weekend.
## What we learned
* Sometimes the best APIs are hidden right under your nose. (Web Speech API was released in 2013 and we didn't use it last year. It's awesome!)
* Re-factoring code you don't really remember is difficult.
## What's next for Presentalk
Release to the general public! (Hopefully)
|
## Inspiration
It's easy to zone off in online meetings/lectures, and it's difficult to rewind without losing focus at the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we can just quickly skim through a list of keywords to immediately see what happened?
## What it does
Rewind is an intelligent, collaborative and interactive web canvas with built in voice chat that maintains a list of live-updated keywords that summarize the voice chat history. You can see timestamps of the keywords and click on them to reveal the actual transcribed text.
## How we built it
Communications: WebRTC, WebSockets, HTTPS
We used WebRTC, a peer-to-peer protocol, to connect users through a voice channel, and we used WebSockets to update the web pages dynamically so users get instant feedback on others' actions. Additionally, a web server is used to maintain stateful information.
For summarization and live transcript generation, we used Google Cloud APIs, including natural language processing as well as voice recognition.
Audio transcription and summary: Google Cloud Speech (live transcription) and natural language APIs (for summarization)
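As a runnable stand-in for that pipeline (the real app uses Google Cloud's Natural Language API for summarization, which this does not reproduce), a simple frequency-based extractor sketches how a live keyword list can be maintained from transcript chunks:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real one would be much larger.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is",
             "it", "we", "you", "that", "this", "for", "on", "about"}

def live_keywords(transcript_chunks, top_n=5):
    """Frequency-based stand-in for Rewind's keyword summarization.

    Counts non-stopword terms across the transcribed chunks and
    returns the most common ones, most frequent first.
    """
    words = []
    for chunk in transcript_chunks:
        words.extend(re.findall(r"[a-z']+", chunk.lower()))
    counts = Counter(w for w in words
                     if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]
```

Each keyword would then be stored alongside the timestamp of the chunk it came from, so clicking it can reveal the underlying transcript.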
## Challenges we ran into
There were many challenges that we ran into when bringing this project to reality. On the backend, one of the hardest problems was getting WebRTC to work on both the backend and the frontend; we spent more than 18 hours getting it to a working prototype. The frontend was also full of challenges: the design and implementation of the canvas involved much trial and error, and the history rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team, and we learned a lot from this experience.
## Accomplishments that we're proud of
Despite all the challenges we ran into, we were able to have a working product with many different features. Although the final product is by no means perfect, we had fun working on it utilizing every bit of intelligence we had. We were proud to have learned many new tools and get through all the bugs!
## What we learned
For the backend, the main thing we learned was how to use WebRTC, which includes client negotiations and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the websockets. As for the frontend, we learned to use various javascript elements to help develop interactive client webapp. We also learned event delegation in javascript to help with an essential component of the history page of the frontend.
## What's next for Rewind
We imagined a mini dashboard that also shows other live-updated information, such as the sentiment, summary of the entire meeting, as well as the ability to examine information on a particular user.
|
## Inspiration
Ever found yourself struggling to keep up during a lecture, caught between listening to the professor while scrambling to scribble down notes? It’s all too common to miss key points while juggling the demands of note-taking – that’s why we made a tool designed to do the hard work for you!
## What it does
With a simple click, you can start recording your lecture, and NoteHacks will generate clear, summarized notes in real time. The summary conciseness parameter can be fine-tuned depending on how you want your notes written, and the app takes note of when you look distracted so you can catch the details you would have missed. These notes are stored for future review, where you can directly ask AI about the content without having to provide background details.
## How we built it
* Backend + database using Convex
* Frontend using Next.js
* Image, speech, and text models by Groq
## Challenges we ran into
* Chunking audio to stream and process it in real-time
* Summarizing a good portion of the text, without it being weirdly chopped off and losing context
* Merge conflicts T-T
* Windows can't open 2 cameras simultaneously
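The audio-chunking problem from the list above comes down to re-slicing an irregular byte stream into fixed-size pieces before they reach the speech model. A sketch of that step (the 16 kHz mono, 16-bit sizes are illustrative assumptions, not NoteHacks' actual settings):

```python
def chunk_audio(stream, chunk_seconds=5, sample_rate=16000,
                bytes_per_sample=2):
    """Yield fixed-size chunks from an iterable of raw PCM byte blocks.

    Incoming blocks can be any size; this buffers them and emits
    uniform chunk_seconds-long chunks, flushing any remainder at the
    end of the stream.
    """
    chunk_size = chunk_seconds * sample_rate * bytes_per_sample
    buffer = b""
    for block in stream:
        buffer += block
        while len(buffer) >= chunk_size:
            yield buffer[:chunk_size]
            buffer = buffer[chunk_size:]
    if buffer:  # flush the partial chunk left at end of stream
        yield buffer
```

Trickier in practice is avoiding cuts mid-word; overlapping adjacent chunks slightly is one common mitigation.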
## Accomplishments that we're proud of
* Real-time speech processing that displays on the UI
* Gesture recognition
## What we learned
* Real-time streaming audio and video
* Convex & Groq APIs
* Image recognition
## What's next for NoteHacks
* Support capturing images and adding them to the notes
* Allow for text editing within the app (text formatting, adding/removing text, highlighting)
|
winning
|
## Inspiration
After witnessing the power of collectible games and card systems, our team was determined to prove that this enjoyable and unique game mechanism wasn't just some niche and could be applied to a social activity game that anyone could enjoy or use to better understand one another (taking a note from Cards Against Humanity's book).
## What it does
Words With Strangers pairs users up with a friend or stranger and gives each user a queue of words that they must make their opponent say without saying this word themselves. The first person to finish their queue wins the game. Players can then purchase collectible new words to build their deck and trade or give words to other friends or users they have given their code to.
## How we built it
Words With Strangers was built on Node.js with core HTML and CSS styling as well as usage of some bootstrap framework functionalities. It is deployed on Heroku and also makes use of TODAQ's TaaS service API to maintain the integrity of transactions as well as the unique rareness and collectibility of words and assets.
## Challenges we ran into
The main area of difficulty was incorporating TODAQ TaaS into our application, since it was a new service that none of us had any experience with. It isn't a blockchain, and none of us had ever even touched in-application purchases before. Furthermore, creating a user-friendly UI that delivered all our target functionality was another large challenge that we tackled.
## Accomplishments that we're proud of
Our UI not only has all our desired features, but it also is user-friendly and stylish (comparable with Cards Against Humanity and other genre items), and we were able to add multiple word packages that users can buy and trade/transfer.
## What we learned
Through this project, we learned a great deal about how purchase transactions work in applications. More importantly, we gained an appreciation for what TODAQ does and came to grasp what it truly means to have a digital asset online that is utterly unique and one of a kind: transferable without infinite duplication.
## What's next for Words With Strangers
We would like to enhance the UI for WwS to look even more user friendly and be stylish enough for a successful deployment online and in app stores. We want to continue to program packages for it using TODAQ and use dynamic programming principles moving forward to simplify our process.
|
## Inspiration
Reflecting on 2020, we were challenged with a lot of new experiences, such as online school. Hearing a lot of stories from our friends, as well as our own experiences, doing everything from home can be very distracting. Looking at a computer screen for such a long period of time can be difficult for many as well, and ultimately it's hard to maintain a consistent level of motivation. We wanted to create an application that helped to increase productivity through incentives.
## What it does
Our project is a functional to-do list application that also serves as a 5v5 multiplayer game. Players create a todo list of their own, and each completed task grants "todo points" that they can allocate towards their attributes (physical attack, physical defense, special attack, special defense, speed). However, tasks that are not completed serve as a punishment by reducing todo points.
Once everyone is ready, the team of 5 will be matched up against another team of 5 with a preview of everyone's stats. Clicking "Start Game" will run the stats through our algorithm that will determine a winner based on whichever team does more damage as a whole. While the game is extremely simple, it is effective in that players aren't distracted by the game itself because they would only need to spend a few minutes on the application. Furthermore, a team-based situation also provides incentive as you don't want to be the "slacker".
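The match resolution described above can be sketched as a pure function over the two teams' stats. The exact damage formula here (attack minus the matching defense, floored at zero, with speed as a small percentage bonus) is an illustrative guess, not the balanced formula the game ships with:

```python
def match_winner(team_a, team_b):
    """Decide a match: the team dealing more net damage wins.

    Each player is a dict with the five attributes: phys_atk,
    phys_def, spec_atk, spec_def, speed.
    """
    def net_damage(attackers, defenders):
        atk_p = sum(p["phys_atk"] for p in attackers)
        atk_s = sum(p["spec_atk"] for p in attackers)
        spd = sum(p["speed"] for p in attackers)
        def_p = sum(p["phys_def"] for p in defenders)
        def_s = sum(p["spec_def"] for p in defenders)
        base = max(atk_p - def_p, 0) + max(atk_s - def_s, 0)
        return base * (1 + spd / 100)

    a = net_damage(team_a, team_b)
    b = net_damage(team_b, team_a)
    if a == b:
        return "draw"
    return "team_a" if a > b else "team_b"
```

Balancing then becomes a question of tuning the coefficients so no single attribute dominates, which is exactly the difficulty noted below under what we learned.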
## How we built it
We used the Django framework, as it is our second time using it and we wanted to gain some additional practice. Therefore, the languages we used were Python for the backend, HTML and CSS for the frontend, as well as some SCSS.
## Challenges we ran into
As we all worked on different parts of the app, it was a challenge linking everything together. We also wanted to add many things to the game, such as additional in-game rewards, but unfortunately didn't have enough time to implement those.
## Accomplishments that we're proud of
As it is only our second hackathon, we're proud that we could create something fully functioning that connects many different parts together. We spent a good amount of time on the UI as well, so we're pretty proud of that. Finally, creating a game is something that was all outside of our comfort zone, so while our game is extremely simple, we're glad to see that it works.
## What we learned
We learned that game design is hard. It's hard to create an algorithm that is truly balanced (there's probably a way to figure out in our game which stat is by far the best to invest in), and we had doubts about how our application would do if we actually released it, if people would be inclined to play it or not.
## What's next for Battle To-Do
Firstly, we would look to create the registration functionality, so that player data can be generated. After that, we would look at improving the overall styling of the application. Finally, we would revisit game design - looking at how to improve the algorithm to make it more balanced, adding in-game rewards for more incentive for players to play, and looking at ways to add complexity. For example, we would look at implementing a feature where tasks that are not completed within a certain time frame leads to a reduction of todo points.
|
## Inspiration
Our inspiration is from the challenges individuals face with motivation and accountability, especially when living alone or lacking a supportive community. We wanted to build an environment where task completion is not just a solitary endeavor, but a shared experience. By integrating social elements - the sharing of photos and the visibility of daily routines - mundane activities transform into engaging and collective milestones. Our platform is not just about ticking boxes; it's about connecting lives and building a community that cheers on each other's successes.
Our app offers a casual yet impactful way to stay productive. We aim to cultivate a fun, vibrant community where each user not only finds the motivation to complete their tasks but also discovers the joy in everyday achievements.
## What it does
Our web application is a community platform that uniquely blends a to-do list with a habit tracker. Users can add their daily tasks and habits, and upon completion, upload a photo as proof. It’s a space where each completed task is more than just a check on a list, as users can choose to share their progress with their friends.
## How we built it
The front-end was developed with React, providing a dynamic and responsive user interface. For the back-end, we used Node.js and Express for efficient server management. The database was handled with MongoDB for its flexibility with large amounts of data and user interactions.
## Challenges we ran into
One of the major challenges was implementing the technologies used in the application as none of the team members had prior experience building a web application. Additionally, creating an engaging user interface that encourages daily interaction was a task that required multiple design iterations.
## Accomplishments that we're proud of
We are especially proud of creating an application that incorporates user authentication. This feature will ensure a secure and personalized experience for each user.
## What we learned
Throughout the development process, we learned a lot about user experience design, particularly in the context of community-based platforms. We also gained a lot of experience working with various development technologies and collaborating in a shared repository.
## What's next for TodoTogether
We plan to enhance the application by integrating it with other productivity tools. Although the web application allows for greater ease-of-use, ideally, we would want to convert it to a mobile application for accessibility.
|
partial
|
## Inspiration
This project was a response to Hurricane Harvey in Houston last year, the wildfires in California, and the monsoon flooding in India this past year. 911 call centers are extremely inefficient at providing actual aid because tracking cell phones is unreliable. We also inform people of the risk factors in certain areas so that they can make more knowledgeable decisions about travel, their futures, and preventative measures.
## What it does
Supermaritan provides a platform for people who are in danger and affected by disasters to send out "distress signals" specifying how severe their damage is and the specific type of issue they have. We store their location in a database and present it live on a map built with the react-native-maps API. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need. Dispatchers will thus be able to quickly and efficiently aid victims. More importantly, the live map feature allows local users to see live incidents on their map and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they will have the option to resolve their issue and store it in our database to aid our analytics.
Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate what type of incident (whether it be floods, earthquakes, fire, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previous resolved incidents of all types, we can generate a risk factor that provides a way to gauge how safe the range a user is in based off the most dangerous range within our database.
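The analytics described above can be sketched in a few lines of Python (field names and the scoring scheme are illustrative assumptions, not the app's actual schema):

```python
from collections import Counter

# Hypothetical resolved-incident records for one lat/lng range.
incidents = [
    {"type": "flood", "severity": 3},
    {"type": "flood", "severity": 5},
    {"type": "fire",  "severity": 2},
]

def most_common_type(incidents):
    """The incident type seen most often within the coordinate range."""
    return Counter(i["type"] for i in incidents).most_common(1)[0][0]

def risk_factor(incidents, worst_range_score):
    """Severity-weighted average, scaled against the most dangerous
    range in the database so the result lands in [0, 1]."""
    avg = sum(i["severity"] for i in incidents) / len(incidents)
    return avg / worst_range_score
```

With the toy data above, `most_common_type` reports floods, and the risk factor compares this range's average severity to the worst range on record.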
## How we built it
We used react-native, MongoDB, Javascript, NodeJS, and the Google Cloud Platform, and various open source libraries to help build our hack.
## Challenges we ran into
Ejecting our react-native app from Expo took a very long time and prevented the member working on the client side from making progress. This left us with a lot more work to divide amongst ourselves once it finally ejected.
Getting acquainted with react-native in general was difficult. It was fairly new to all of us and some of the libraries we used did not have documentation, which required us to learn from their source code.
## Accomplishments that we're proud of
Implementing the Heat Map analytics feature was something we are happy we were able to do because it is a nice way of presenting the information regarding disaster incidents and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Generating successful scripts to test our app and debug any issues was also something we were proud of and that helped us get past many challenges.
## What we learned
We learned that while some frameworks have their advantages (for example, React can create projects at a fast pace using built-in components), many times, they have glaring drawbacks and limitations which may make another, more 'complicated' framework, a better choice in the long run.
## What's next for Supermaritan
In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for certain areas. Showing disaster trends over time and displaying risk factors for each individual incident type is something we are definitely going to do in the future.
|
## Inspiration
Nowadays, we have been using **all** sorts of development tools for web development, from the simplest of HTML, to all sorts of high-level libraries, such as Bootstrap and React. However, what if we turned back time, and relived the *nostalgic*, good old times of programming in the 60s? A world where the programming language BASIC was prevalent. A world where coding on paper and on **office memo pads** were so popular. It is time, for you all to re-experience the programming of the **past**.
## What it does
It's a programming language compiler and runtime for the BASIC programming language. It allows users to write interactive programs for the web with the simple syntax and features of the BASIC language. Users can read our sample BASIC code to understand what's happening, and write their own programs to deploy on the web. We're transforming code from paper to the internet.
## How we built it
The major part of the code is written in TypeScript, which includes the parser, compiler, and runtime, designed by us from scratch. After we parse and resolve the code, we generate an intermediate representation. This abstract syntax tree is parsed by the runtime library, which generates HTML code.
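The tokenize → parse → AST flow can be illustrated with a toy recursive-descent parser (the real compiler is written in TypeScript; this Python sketch only handles `+` chains over integer literals):

```python
import re

# Tokenizer: integers or single-character operators, whitespace skipped.
TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    return [num or op for num, op in TOKEN.findall(src)]

def parse_expr(tokens):
    """Recursive-descent parse of '+' chains into a left-nested AST."""
    node = int(tokens.pop(0))
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        node = ("add", node, int(tokens.pop(0)))
    return node

# parse_expr(tokenize("1 + 2 + 3")) -> ("add", ("add", 1, 2), 3)
```

A runtime library would then walk such a tree and emit HTML, as described above.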
Using GitHub actions and GitHub Pages, we are able to implement a CI/CD pipeline to deploy the webpage, which is **entirely** written in BASIC! We also have GitHub Dependabot scanning for npm vulnerabilities.
We use Webpack to bundle code into one HTML file for easy deployment.
## Challenges we ran into
Creating a compiler from scratch within the 36-hour time frame was no easy feat, as most of us did not have prior experience in compiler concepts or building a compiler. Constructing and deciding on the syntactical features was quite confusing since BASIC was such a foreign language to all of us. Parsing the string took us the longest time due to the tedious procedure in processing strings and tokens, as well as understanding recursive descent parsing. Last but **definitely not least**, building the runtime library and constructing code samples caused us issues as minor errors can be difficult to detect.
## Accomplishments that we're proud of
We are very proud to have successfully "summoned" the **nostalgic** old times of programming and deployed all the syntactical features that we desired to create interactive features using just the BASIC language. We are delighted to come up with this innovative idea to fit with the theme **nostalgia**, and to retell the tales of programming.
## What we learned
We learned the basics of making a compiler and what is actually happening underneath the hood while compiling our code, through the *painstaking* process of writing compiler code and manually writing code samples as if we were the compiler.
## What's next for BASIC Web
This project can be integrated with many modern features that are popular today. One future direction is to merge this project with generative AI, where we can feed the AI models some of the syntactical features of the BASIC language and they will output code translated from modern programming languages. Moreover, this could be a revamp of Bootstrap and React in creating interactive and eye-catching web pages.
|
## Inspiration
One of the most important roles in our current society is the one taken by the varying first responders who ensure the safety of the public through many different means. Innovation which could help these first responders would always be favourable for society as it would follow from this innovation that more lives are saved through a more efficient approach by first responders to save lives.
## What it does
The Watchdog app is a map that lets registered users share locations of events relevant to first responders. These events could have taken place at any time, since their usefulness varies across the different services. For example, if many people report fires, firefighters looking at the map can locate building complexes that might be prone to fire for some particular reason, regardless of when those fires took place, and can pay more attention to locations that are statistically more likely to have fires. The app does not only help firefighters with these statistics, but also the police and paramedics. With reports of petty crimes such as theft, police can find neighbourhoods where incidents statistically accumulate and focus resources there to improve efficiency. Likewise, paramedics would be more prepared for locations with varying types of accidents, such as those from dangerous jobs like construction. Major cities also have many delays due to accidents or other hindrances to travel, which are usually unavoidable and a nuisance to city travelers, so the app could help typical citizens as well.
## How we built it
The app was built using MongoDB, Express, and Node on the backend to manage the uploading of all reports added to the MongoDB database. React was used on the front end along with Google Cloud to generate the map using Google Maps API which the user can interact with, by adding their reports and viewing other reports.
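A minimal sketch of how report documents might be stored and filtered to the visible map bounds (field names are assumptions; the real app keeps these in MongoDB and renders them with the Google Maps API):

```python
# Hypothetical report documents with coordinates and status.
reports = [
    {"type": "fire",  "lat": 43.66, "lng": -79.39, "resolved": False},
    {"type": "theft", "lat": 43.70, "lng": -79.42, "resolved": True},
]

def reports_in_view(reports, lat_min, lat_max, lng_min, lng_max):
    """Reports whose coordinates fall inside the visible map bounds."""
    return [r for r in reports
            if lat_min <= r["lat"] <= lat_max
            and lng_min <= r["lng"] <= lng_max]
```

The front end would call something like this whenever the user pans the map, then drop a marker per returned report.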
## Challenges we ran into
Our challenges mostly involved working with Google Maps API as doing so in React was new for all of us. Issues arose when trying to make the map interactable and using the map's features to add locations to the database, as we never worked with the map like this before. However, these challenges were overcome by learning the Google Maps documentation as well as we could, and ensuring that the features we wanted were added even if they were still simple.
## Accomplishments that we're proud of
We're mostly proud of coming up with an idea that we believe could have a strong impact in the world when it comes to helping lives and being efficient with the limited time that first responders have. Technically, accomplishments with being able to make the map interactive despite limited experience with Google Maps API was something that we're proud of as well.
## What we learned
We learned how to work with React and Google Maps API together, along with how to move data from interactive maps like that to an online database in MongoDB.
## What's next for Watchdog
Watchdog can add features when it comes to creating reports, such as pictures of the incident or whether first responders were successful in preventing it. The app is already published online and usable today; a main goal is to build a mobile version so that even more people can use it.
|
winning
|
## Inspiration
As university computer engineering students, we consistently face significant pressure. Our project serves as a constructive outlet during leisure time: when life becomes challenging, users can alleviate stress by smashing on a wooden board.
## What it does
Our hardware hackathon project, part of the "Make U of T" initiative, is centered around the theme of uselessness. The project comprises a model that permits users to partake in a smashing activity, resulting in the acquisition of a symbolic "bgrad" (breakgrade). In addition to this feature, we have seamlessly integrated a leaderboard accessible on a website, introducing a competitive element to the overall experience. Moreover, we enhance user interaction by incorporating an audio detector that dynamically adjusts the spinning speed based on the user's volume. This utilization of sound technology contributes to a more controlled and satisfying smashing experience.
## How we built it
Our project seamlessly integrates Arduino Uno and Due microcontrollers, a stepper motor and driver, 3D printing, buttons, HTML, CSS, and JavaScript to deliver a captivating user experience. The project features a dynamic spinning mechanism controlled by the stepper motor, with components modelled in SolidWorks and 3D-printed for precision and customization. User interaction is enhanced through tactile buttons, and an LCD display facilitates input of names and real-time score tracking. Leveraging web technologies such as HTML, CSS, and JavaScript, we created an intuitive leaderboard accessible on a website. Real-time communication between the hardware and web components is achieved through Socket.io, powered by a Node.js backend, ensuring a synchronized and immersive experience for participants engaging in the smashing activity. This multifaceted approach combines precision engineering with cutting-edge software development to deliver a unique and engaging project.
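A rough sketch of the audio-controlled slowing idea: louder input stretches the delay between step pulses, slowing the spin. The units and thresholds here are made-up assumptions; the real control loop runs on the Arduino.

```python
def step_delay_us(volume, slow_us=4000, fast_us=800):
    """Map a normalized volume in [0, 1] to a delay between step
    pulses in microseconds: louder -> longer delay -> slower spin."""
    v = max(0.0, min(1.0, volume))
    return int(fast_us + v * (slow_us - fast_us))
```

Clamping the input keeps a noisy microphone reading from driving the motor outside its safe pulse range.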
## Challenges we ran into
One challenge during development was achieving faster spin speeds for the stepper motor, which was crucial for implementing our desired audio-controlled slowing mechanism. We addressed this by integrating battery holders to enhance the driving power, enabling a quicker pulse and ultimately facilitating dynamic control of the spinning speed. This solution not only resolved our technical challenge but also contributed to an optimized and responsive user experience, aligning with our project goals.
## Accomplishments that we're proud of
Our team takes pride in successfully overcoming challenges and achieving a seamless integration of hardware and software components for the Hulk Smash project. Addressing the initial hurdle of controlling the step motor's speed, we implemented an innovative solution by incorporating battery holders to enhance driving power. This accomplishment not only ensures optimal performance but also aligns with our commitment to delivering a dynamic and user-friendly experience. The incorporation of diverse technologies, from 3D printing to real-time communication through Socket.io, showcases our team's versatility and ability to create a well-rounded project.
## What we learned
Throughout the development process, our team gained invaluable insights into the intricacies of hardware integration and software collaboration. Overcoming the challenge of achieving the desired spinning speed for the step motor taught us the importance of creative problem-solving and adaptability in a project. Additionally, the integration of battery holders for improved pulse control deepened our understanding of hardware optimization. We also refined our skills in web development, utilizing HTML, CSS, and JavaScript to create an intuitive leaderboard accessible to users. Overall, the project has been a rich learning experience, enhancing our technical proficiency and collaborative skills.
## What's next for Hulk Smash
Looking ahead, the Hulk Smash project holds exciting possibilities for further refinement and expansion. We aim to explore additional features and enhancements, such as incorporating advanced sensors for a more responsive smashing experience. Furthermore, we plan to optimize the project for scalability, potentially exploring opportunities for integration with virtual or augmented reality platforms. The feedback garnered from user interactions with the leaderboard will inform future updates, ensuring continuous improvement and user satisfaction. As we move forward, our team is enthusiastic about evolving the Hulk Smash project into an even more engaging and innovative experience.
|
## Inspiration
We are college students who occasionally (often) miss our lectures and end up watching the recorded versions of the lectures. We wanted to devise a better way to use recorded videos to study, so we created Waffle!
## What it does
Waffle turns a video into a custom chatbot -- Users can ask questions about the content, access a summary of main ideas, request additional information, and find supporting resources online to further their exploration.
## How we built it
The backend is written in Python. First, we use Whisper-JAX to convert the audio stream into a transcript string, which we then feed to an OpenAI LLM using LangChain for document summarization and interactive generative question-answering. Additionally, we use the Metaphor API to access and provide additional resources online that are relevant to the primary content of the video. Finally, we use FastAPI to create a RESTful API service that the front-end can interact with.
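A stdlib-only sketch of one step in that pipeline: splitting the transcript into overlapping chunks before summarization and question answering (chunk sizes here are arbitrary assumptions; the real service hands the chunks to LangChain and an OpenAI LLM):

```python
def chunk_transcript(text, size=200, overlap=50):
    """Overlapping character windows so context isn't cut mid-thought
    when chunks are fed to the summarizer / QA model."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Each chunk can then be embedded or summarized independently, with the overlap preserving sentences that straddle a boundary.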
For the frontend, we use React and Chakra UI. We stuck with a theme of waffles, and the color scheme derived from that.
## Challenges we ran into
Initially, we used the default Whisper API from OpenAI, but we ran into trouble due to poorly scaling runtime; fortunately, we found Whisper-JAX, which combines Whisper with Google's JAX machine learning TPU-based framework. That enabled us to have a 70x speedup on transcription prediction.
On the non-technical side, one of our members fell sick and couldn't make it, and another two of us had their flight canceled and subsequently did not arrive on campus until 2AM on Saturday. Despite this, we're grateful to still have been able to hack together!
## Accomplishments that we're proud of
We're happy to have been able to finish both our initial vision of Waffle as a basic chatbot, as well as include additional functionality for summarizing the video and retrieving relevant information from the internet using Metaphor.
## What we learned
We learned a lot more about LLM integration using LangChain -- specifically, we learned about how LangChain composes LLM calls and how we can use it to increase the velocity of AI-based development. We also gained experience using FastAPI and Render, since we had previously never used these for backend development.
## What's next for waffle
We currently have the generalized functional data pipeline set up, but it accepts videos in the form of Youtube links -- later, we will expand functionality to allow users to upload any type of video file for parsing. We also intend to incorporate a database to allow users to store their chat history for certain videos that they interact with, for later review.
|
## Inspiration
Sustainability is one of the core pillars of modern progress. We wanted to address this challenge by thinking about how we could allow for substantial improvement in sustainability by optimizing an existing system. That's why we landed on LLMs: their **meteoric rise in popularity** has changed the way millions of people search for and learn information. That being said, LLMs are **extremely inefficient** when it comes to the compute required for inference. With hundreds of millions of people relying on them for day-to-day searches, it is evident that we have reached a scale where **sustainability needs to be carefully considered**. We asked ourselves, how can we make LLMs more sustainable? Can we quantify that cost so users can understand how many resources they use/save? The key to the idea is the fact that we wanted to propose a way to **dramatically improve sustainability with almost zero-effort required** from the user's side. These are the principles that make our proposal both practical and most impactful.
## What it does
In essence, we leverage **vector embeddings to make LLMs more sustainable**. Every day, on ChatGPT alone, over 10 million queries are made. Even over a small period of time, query overlap is inevitable. Currently, LLMs run inference on every single query. This is unnecessary, especially when it comes to objective queries that are similar to one another. Instead of relying on inference by default, we **rely on vector-based similarity search** first. This takes approximately **1/15 of the compute** that a normal ChatGPT query would take. Now, what makes LLMs desirable is their customization of responses. We didn’t want to lose this vital component by solely relying on embedded vector search. Thus, we give the user an option if they would like more information, and this defaults to a traditional LLM query. Thus, our approach allows for sustainability that is orders of magnitude higher than before, **without compromising what people like most about LLMs**.
## How we built it
For embeddings and our vector database, we used Pinecone. Our app is created with NextJS (ReactJS, TailwindCSS, NodeJS). We utilize the OpenAI API for traditional query requests. For our similarity search, we use cosine similarity, and when a query crosses our significance threshold, we return the top 0–3 matching queries for any given user input.
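A minimal sketch of the cosine-similarity lookup with a significance threshold (toy 3-d vectors stand in for real embeddings; the production search runs in Pinecone):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def top_matches(query_vec, cache, threshold=0.9, k=3):
    """Return up to k cached answers above the significance threshold;
    an empty result means we fall back to a traditional LLM call."""
    scored = sorted(((cosine(query_vec, v), ans) for v, ans in cache),
                    reverse=True)
    return [ans for score, ans in scored[:k] if score >= threshold]
```

When a fresh LLM answer is generated, its embedding and text would be appended to the cache so future similar queries can reuse it.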
## Challenges we ran into
This was our first time working with embeddings and a vector database. Thus, we had some issues with setup and adding a new embedding to the overall vector space. We wanted the space to be dynamic so that answers generated for users can be shared by all users if someone were to ask a similar query in the future. Other than that, integrating all the required APIs was a challenge as some functions were async while others weren’t which caused state-update issues. Luckily, with some debugging, we were able to sort it out.
## Accomplishments that we're proud of
Our final version is a **fully-functional prototype** of our idea. We are also astonished by the real statistics behind the resources our system can potentially save. Additionally, we took UI extremely seriously because we wanted a system that was **intuitive and appealing** for users to use. We also wanted a clear way for them to see the benefit of using our platform. We believe we have accomplished this in a simple, yet capable UI experience.
## What we learned
We learned about how to use vector embeddings for similarity search. We also learned how to tweak the confidence threshold such that the relevant responses actually match the queries we are looking for. Above all else, we learned just how many resources are used in day-to-day usage of ChatGPT. When starting this project, we had a prediction about LLM resource consumption, but we completely underestimated just how large it would be. These learnings made us realize that **our project can have even more impact** than we had anticipated.
## What's next for SustainLLM
We want to take the same processes and **apply them to other modalities** like audio and image generation. These modalities require significantly more compute than text generation, and if we could save even a small percentage of that compute, it could lead to drastic results. We are aware that creativity is a pivotal part of audio and image generation, and so we would use embeddings for lower-level things such as different pixel patterns or phonetics. That way, each generation can still be unique while consuming fewer resources.
**Let’s save the environment, one LLM query at a time :)**
|
losing
|
## Commander
Commander, the command launcher for the web.
Install it for Firefox [here](https://addons.mozilla.org/en-US/firefox/addon/commander/) and press F2 or Ctrl+E to open it!
## Motivation
Having to switch between the keyboard and mouse is painful and wastes time. Commander is an extension to control your browser with just your keyboard and/or your voice. Enhance your productivity.
## What it does
Commander is a browser extension for Chrome and Firefox.
Features:
* view and add bookmarks
* open and close tabs
* search and open links
* VOICE CONTROL
* .... and more
|
## Inspiration
There are many productivity apps that each serve a single function, like blocking unwanted websites or tracking tasks. We wanted to create an all-in-one extension that you could use to manage all of your productivity needs.
## What it does
Our extension has a Pomodoro timer, a task manager, site blocking, and general notes, though site blocking and general notes are not finished yet.
## How we built it
We created a chrome extension by essentially creating a mini-website and using a manifest file to get browsers to open it as a local extension instead
## Challenges we ran into
One of the challenges that we ran into was turning the design into a working website. Since we were all new to using HTML and CSS, trying to make the website look as nice as the initial design was a huge challenge.
## Accomplishments that we're proud of
We were able to get the final extension to look good while being functional, even though it doesn't look like the initial designs.
|
## Inspiration
After hearing a representative from **Private Internet Access** describe why internet security is so important, we wanted to find a way to simply make commonly used messaging platforms more secure for sharing sensitive and private information.
## What it does
**Mummify** provides in-browser text encryption and decryption by simply highlighting text and clicking the Chrome Extension icon. It uses asymmetric encryption with both a private key and a public key: anyone is able to encrypt using your public key, but only you are able to decrypt it.
## How we built it
Mummify is a Chrome Extension built using Javascript (jQuery), HTML, and CSS.
We did a lot of research about cryptography, deciding that we would use asymmetric encryption with a private key and public key to ensure complete privacy and security for the user. We then started to dive into building a Chrome extension, using JavaScript, jQuery, and HTML to map out the logic behind our encryption and decryption extension. Lastly, we polished our extension with a simple and user-friendly UI design and launched the Mummify website!
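The public/private-key idea can be illustrated with textbook RSA on tiny primes (a toy sketch only; the actual extension is written in JavaScript, and real keys are vastly larger):

```python
# Textbook RSA with the classic tiny-prime example. Never use key
# sizes like this in practice.
p, q = 61, 53
n = p * q                          # modulus, part of the public key
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def encrypt(m, e=e, n=n):
    """Anyone holding the public key (e, n) can encrypt."""
    return pow(m, e, n)

def decrypt(c, d=d, n=n):
    """Only the private-key holder can decrypt."""
    return pow(c, d, n)
```

Encrypting a message byte and decrypting it returns the original, while the ciphertext alone reveals nothing without `d`.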
We used Microsoft Azure technologies to host and maintain our webpage which was built using Bootstrap (HTML+CSS), and used Domain.com to get our domain name.
## Challenges we ran into
* What is the punniest domain name (in the whole world) that we can come up with?
* How do we make a Chrome Extension?
* Developing secure encryption algorithms.
* How to create shareable keys without defeating the purpose of encryption.
* How to directly replace the highlighted text within an entry field.
* Bridging the extension and the web page.
* Having our extension work on different chat message platforms. (Messenger, HangOuts, Slack...)
## Accomplishments that we're proud of
* Managing to overcome all our challenges!
* Learning javascript in less than 24 hours.
* Coming together to work as the Best Team at nwHacks off of a random Facebook post!
* Creating a fully-usable application in less than 24 hours.
* Developing a secure encryption algorithm on the fly.
* Learning how to harness the powers of Microsoft Azure.
## What we learned
JavaScript is as frustrating as people make it out to be.
Facebook, G-mail, Hotmail, and many other sites all use very diverse build methods which makes it hard for an Extension to work the same on all.
## What's next for Mummify
We hope to deploy Mummify to the Chrome Web Store and continue working as a team to develop and maintain our extension, as well as advocating for privacy on the internet!
|
losing
|
## Inspiration
For my project I wanted to see if I could use state of the art machine learning and crowd sourcing methods to make something not only innovative and exciting, but also fun and whimsical.
Although I'm not artistically talented, I like doodling, whether it's when I'm supposed to be paying attention in a class, or for games like "Draw Something". I got the idea for this project after learning about Google Quick Draw project, where they crowdsourced millions of doodles (including some of mine!) to train a neural network, and then released the drawings for the public to use. I really liked the cartoony vibe of these drawings, and thought it would be fun to use them to make your own doodles.
## What it does
The overarching theme of the app is to create drawings using crowd-sourced doodles, and there are two main ways it does this.
First, you can add to the image using voice commands. The picture is split into an (invisible) 8x6 grid, and you can specify where you want something placed with regular Cartesian coordinates. For example, you can say "Tree at 4 dot 3", which would draw someone's doodle of a tree on square (4, 3).
If you're planning on placing a lot of the same item at once, you could instead say a command like "Frying pan across 7" which would fill row 7 with frying pans, or "Monkey down 0", which would fill the first column with monkeys. If you don’t like a doodle, you can click refresh to try a different user submitted doodle, and if you want to simply restart the whole drawing, you can say “Erase” to get a blank canvas (full of possibilities!).
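A hedged sketch of how such voice-command transcripts could be parsed into draw instructions (the app's actual keyword matching may differ; only the "at" and "across" forms are shown):

```python
import re

def parse_command(text):
    """Turn a transcript like 'tree at 4 dot 3' into a draw
    instruction on the 8x6 grid."""
    text = text.lower()
    m = re.match(r"(\w+(?: \w+)*) at (\d+) dot (\d+)", text)
    if m:
        return {"doodle": m.group(1), "col": int(m.group(2)),
                "row": int(m.group(3))}
    m = re.match(r"(\w+(?: \w+)*) across (\d+)", text)
    if m:
        return {"doodle": m.group(1), "fill_row": int(m.group(2))}
    return None  # unrecognized command
```

The "down" (column-fill) and "erase" commands would be additional branches of the same shape.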
The second way to create a drawing is you can paste in a link to an actual photograph, and the program will doodle-fy it! It will attempt to detect prominent objects in the photo and their location, convert those objects into doodles, and place them in the blank canvas so that they're in the same position relative to each other as in the original.
## How I built it
This program was built using Python, the GUI using PyQt, and image creation through Pillow.
I use Google QuickDraw's open dataset to access some of the millions of doodles submitted by people around the world, each corresponding to some category like "car", "tree", "foot", "sword", etc, and these are the building blocks for Doodle My World's images.
For the voice commands, I used Microsoft Azure's speech services to convert the commands into text. The program then searches the speech transcript for certain keywords, such as the object to be doodled, its coordinates, whether or not to fill a row, etc. The name of the object is fed to Google QuickDraw, and if it's matched with a doodle, it will add it to the canvas at the coordinates specified (the box coordinates are converted to pixel coordinates).
For the photo conversion, I used Azure's computer vision to analyze photos for objects and their locations. It detects prominent objects in the photo and returns their pixel coordinates, so I put in the list of objects into QuickDraw, and if a doodle was available, I placed it into the image file at the same relative coordinates as the original.
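Mapping a detected object's bounding box to the same relative spot on the canvas can be sketched like this (field names are assumptions for illustration, not Azure's exact response schema):

```python
def relative_position(box, photo_w, photo_h, canvas_w, canvas_h):
    """Scale the box centre from photo pixels to canvas pixels so the
    doodle lands in the same relative position as the original object."""
    cx = (box["x"] + box["w"] / 2) / photo_w
    cy = (box["y"] + box["h"] / 2) / photo_h
    return int(cx * canvas_w), int(cy * canvas_h)
```

The doodle image is then pasted centred on the returned coordinates.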
## Challenges I ran into
One of the main challenges I ran into was getting the APIs I was using to work together well. Google's QuickDraw based its data off of things that were easy to draw, so it would only identify specific, simple inputs, like "car", and wouldn't recognize Azure's more specific and complex outputs, like "stationwagon". One way I got around this, was if QuickDraw didn't recognize an object, I'd then feed in what Azure calls its "parent object" (so for a stationwagon, its parent would be car), which were often more general, and simpler.
Another issue was that QuickDraw simply didn't recognize many common inputs, a specific example of which was "Person". In this case my workaround was that whenever Azure would detect a Person, I would feed it in as a "smiley face" to QuickDraw and then draw a shirt underneath to simulate a person, which honestly worked out pretty well.
## Accomplishments that I'm proud of
This was my first time attempting a solo hackathon project, so I'm really happy that I managed to finish and reach most of my initials goals. This was also my first time creating a front end GUI and doing image manipulation with Python, so I learned a lot too!
## What's next for Doodle My World
There are a lot of voice commands that I wanted to add, but were more like stretch goals. One idea was to be able to specify more specific ranges rather than just single rows or columns, such as "Star from rows 3 to 5" or "Mountain in half of column 2". I also wanted to add a "Frame" command, so you could create a frame out of some object, like stars for instance.
Because of the disconnect between the specifics of Azure compared to the generality of QuickDraw, I'd originally planned on adding a thesaurus feature, to get related words. So for instance if QuickDraw doesn't recognize "azalea" for example, I could run it through a thesaurus app, and then feed synonyms to my program so that it could eventually recognize it as a "flower".
|
## Inspiration
The system was Mostafa's idea. In a world where transparency has become more of a public concern in recent years, it was important to provide a medium to hold charities accountable for the money they receive in good faith, and to encourage people with the means to donate.
## What it does
Project Glass is a system that uses blockchain technology to track donations given to charitable organizations that have opted in. Each donation is given a unique "tracking key", like the kind you get to track the delivery status of parcels. Donors can then look up their donation on the Project Glass website to see exactly where each dollar ended up.
It also provides suggestions for where it is best for the organization to spend money. This is driven by a machine learning algorithm that detects events in data collected on topics relevant to the NGOs in the network. The ML algorithm detects relevant events, which are then dispatched using PubSub+ to the Project Glass partner organizations. The organizations can then see a live feed of relevant data that they can use to better leverage their short-term investments.
## How we built it
We use blockchain and a proprietary currency to keep track of every dollar spent. Each invested dollar is turned into a unit of currency and tied to a transaction id. The transactions of every dollar are then logged into the blockchain from the time it is deposited until the time it is sent to an external entity (such as another NGO, or when it is used for an expense). A person with a tracking id can use it to look up the final destination of every dollar they have donated, which adds transparency. Auditors can also use this information to verify the claims of NGO expenditure by matching bank transactions to what was claimed in the system, making their job easier.
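The core tracking idea can be sketched as a toy hash-chained ledger, where each entry commits to the previous one so the history of a tracked dollar cannot be silently rewritten. This is an illustrative sketch only; the class and field names are invented and this is not Project Glass's actual implementation:

```python
import hashlib
import json

class Ledger:
    """Toy append-only chain: each entry stores the hash of the previous
    entry, so tampering with history invalidates all later hashes."""
    def __init__(self):
        self.blocks = []

    def record(self, tracking_id, source, destination, amount):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"tracking_id": tracking_id, "from": source,
                "to": destination, "amount": amount, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append(body)

    def trace(self, tracking_id):
        """What a donor sees when they look up their tracking key."""
        return [(b["from"], b["to"], b["amount"])
                for b in self.blocks if b["tracking_id"] == tracking_id]
```

For example, recording `("PG-001", "donor", "NGO", 50)` and then `("PG-001", "NGO", "food bank", 50)` lets `trace("PG-001")` return both hops of that donation.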
We use data gathering, AI, and PubSub+ to generate and publish events. We run a time-series machine learning algorithm over a data stream to detect events. The events are then sent over a PubSub+ topic, received by the Project Glass service, and used to drive suggestions for where it is best for an organization to send money.
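As a rough illustration of that pipeline, here is a simple z-score spike detector standing in for the ML model, plus a minimal topic abstraction standing in for PubSub+ (both are sketches with invented names, not the real components):

```python
from statistics import mean, stdev

def detect_events(series, threshold=2.0):
    """Flag indices whose values deviate strongly from the series mean."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series)
            if sigma and abs(x - mu) / sigma > threshold]

class Topic:
    """Minimal publish/subscribe topic: subscribers are plain callables."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def publish(self, event):
        for fn in self.subscribers:
            fn(event)
```

In this sketch, detected event indices would be wrapped into messages and `publish`ed to the topic that partner organizations subscribe to.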
## Challenges we ran into
The main challenge was adoption: how do we make sure this system can easily be adopted, given the use of a new currency?
The solution is to use the new currency strictly for tracking investments dollar-for-dollar. This currency cannot be used or exchanged in any other context, as it is only meant to augment the existing financial system with traceability. In Project Glass, we limited the use of this currency to compiling transactions to the ledger and mapping individual investments to every contribution they eventually make.
|
## Inspiration:
The single biggest problem and bottleneck in training large AI models today is compute. Just within the past week, Sam Altman tried raising an $8 trillion fund to build a large network of computers for training larger multimodal models.
|
## Inspiration
We were inspired by [Radish](https://radish.coop), one of this year's sponsors for McHacks10, a food delivery service whose mission is to bring forth a more equitable relationship between restaurateurs, delivery drivers and consumers. We wanted to create a place where people can learn and understand others' opinions regarding various food delivery services, hopefully inspiring them to move away from large-scale delivery platforms and towards food delivery collectives that put members first. We think that's pretty rad.
## What it does
RadCompare collects tweets regarding existing food delivery services, filters out retweets and spam and runs natural language processing (NLP) on the data to gather user and employee sentiment regarding food delivery services.
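The filtering step might look roughly like this (the field names and heuristics are illustrative; in the real pipeline the tweets come from Tweepy and the sentiment analysis runs on Co:here):

```python
# Hypothetical sketch of the retweet/spam/duplicate filtering step.
def filter_tweets(tweets):
    seen = set()
    kept = []
    for t in tweets:
        text = t["text"].strip()
        if text.lower().startswith("rt @"):    # drop retweets
            continue
        if "http" in text and len(text) < 30:  # crude spam heuristic
            continue
        if text in seen:                        # drop exact duplicates
            continue
        seen.add(text)
        kept.append(t)
    return kept
```

Only the tweets that survive this pass would be sent on to the NLP sentiment step.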
## How we built it
For the backend functionality, we used Python, NumPy, pandas, Tweepy and Co:here.
For the frontend design, we used HTML/CSS and JavaScript and used Figma to design the site's logos.
## Main Challenges
* Getting permission to use the Tweepy API
* Configuring and understanding Co:here for NLP
* Our .DS\_Store being chaotic
## Accomplishments we're proud of
* Using technology to change the way people think about large-scale food delivery services
* Learning how to use Co:here for the first time
* Adapting to various challenges by using different frameworks
* The overall website design and our *rad* logo :)
## What we learned
* The power of natural language processing (and how cool it is)!
* More about Radish's mission and values and how they're changing the game when it comes to food delivery services
## What's next for RadCompare
* Full integration of the frontend and backend
* Expanding our website
|
## Inspiration
Our team wanted to address the growing concern that young adults and college students are becoming increasingly dependent on food-delivery services like DoorDash and UberEats. This creates unhealthy eating habits, which can scale into an unhealthy lifestyle in the future.
For convenience's sake, we wanted to build a custom, user-centric platform simple enough that even the least culinary-inclined could embrace the process of preparing a meal they find delicious.
## What it does
MealPrep gathers dishes from the keywords of your choice, steps you through the process of cooking with a voice-dictated AI chatbot, and bookmarks your favorite foods for next time.
## How we built it
Built with React.js on the frontend, supported by Vite and Tailwind CSS, packaged with Node.js, and deployed to Vercel. On the backend, we're using JavaScript with Firebase user authentication and Firestore to store user bookmarks. Our user experience is powered by Groq's lightning-fast LLM responses, as well as Cartesia AI's voice dictation.
## Challenges we ran into
* Attempting to deploy on industry-standard technologies like AWS / Docker
* Trying to integrate Linter in a CI/CD pipeline using GitHub Actions
* Adding/removing data using Firestore
## Accomplishments that we're proud of
Creating a unified user-interface that reflects our values and simplistic, modern product vision.
Adding dynamic animations that create a smooth navigational experience.
Having integrated many technologies that all overlap with each other, we consistently researched the docs to see what works best for our code. In the end, our technologies fit together like pieces of a puzzle.
## What we learned
APIs are difficult to implement and wrangle, and sometimes diving too deep into one solution isn't the most optimal workflow. Always remember to think of different solutions and take yourself outside the box sometimes.
Having a cycle of communication, delegating tasks, and single-focus times worked well for our team's productive workflow. It felt like mini-sprints which organized and allowed us to adapt to our own roles on the team.
Taking a step back from coding to brainstorm, draw out our ideas, and have fun with the process is so important to the team's cohesive vision. It allows us to remain on the same page.
## What's next for MealPrep
Features like ingredient overlap detection would focus on giving the users a way to see which of their favorite recipes can be built using the same ingredients.
Engagement is an important factor for any platform. To add engagement, we want to focus on shifting our platform to social media. Features like a daily popular meal, or community postings where people can share their experiences and advice on meal-prepping would be a fantastic addition. On top of our default guides, we want there to be more interactivity with guides created by users and brands.
Speaking of brands, having companies promote their meal-prep-friendly products on our page would be a great monetization strategy.
|
Team channel: #43
Team discord users: Sarim Zia #0673, Elly #2476, (ASK), rusticolus #4817
Names: Vamiq, Elly, Sarim, Shahbaaz
## Inspiration
When brainstorming an idea, we concentrated on problems that affected a large population and that mattered to us. Topics such as homelessness, food waste and a clean environment came up while in discussion. FULLER was able to incorporate all our ideas and ended up being a multifaceted solution that was able to help support the community.
## What it does
FULLER connects charities and shelters to local restaurants with uneaten food and unused groceries. As food prices begin to increase along with homelessness and unemployment, we decided to create FULLER. Our website serves as a communication platform between both parties. Restaurants input a scheduled pick-up time, and charities can easily access a listing of restaurants with food or groceries available for contactless pick-up later in the week.
## How we built it
We built our website with the MERN stack (MongoDB, Express.js, React.js, Node.js), along with HTML, CSS, JavaScript, and bcrypt for password hashing. MongoDB served as our backend database.
## Challenges we ran into
A challenge we ran into was communicating how the code was organized. This led to setbacks, as we had to fix up the code, which sometimes required rewriting lines.
## Accomplishments that we're proud of
We are proud that we were able to finish the website. Half our team had no prior experience with HTML, CSS or React, despite this, we were able to create a fair outline of our website. We are also proud that we were able to come up with a viable solution to help out our community that is potentially implementable.
## What we learned
We learned that when collaborating on a project it is important to communicate, more specifically on how the code is organized. As previously mentioned we had trouble editing and running the code which caused major setbacks.
In addition to this, two team members were able to learn HTML, CSS, and JS over the weekend.
## What's next for us
We would want to create more pages in the website to have it fully functional as well as clean up the Front-end of our project. Moreover, we would also like to look into how to implement the project to help out those in need in our community.
|
## Inspiration
With an increasing demand for therapists and mental health services and a limited supply of these resources, mental health has become a growing issue plaguing Canada, especially our youth. An estimated 1.6 million children and teenagers suffer from mental health issues, yet only 20% will receive adequate care to address this. The goal of “Deer, Journal” is to create an approachable, child-friendly resource for children to develop their self-reflective ability, their goal setting habits, and mental-health coping strategies so they are comfortable with vulnerability and becoming the best version of themselves.
## What it does
“Deer, Journal” provides an approachable, child-friendly way for children to get into the habit of journaling, practicing self-reflection, and becoming vulnerable. Through interacting with a deer stuffie, children can recount their day and express their daily feelings through guided, AI-powered prompts. The prompts are child-friendly and adapted depending on the child’s daily conversation history. Additionally, the user can tell the AI their goals for the week and the AI will create a unique, weekly vision board catered towards that child.
## How we built it
“Deer, Journal” is primarily built on the OpenAI API, creating “gpt-3.5-turbo” chatbots that are prompt-engineered to be child-friendly in their responses and the questions they ask. The vision board image generation uses the DALL-E 3 image generation model from the OpenAI API. The entire app is built on React Native, Node.js, and Express.js.
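A hedged sketch of the prompt-engineering setup: a system message encodes the child-friendly persona, and the day's conversation history is appended so the next guided question adapts to it. The wording and helper name here are invented for illustration, not the app's actual prompts:

```python
# Hypothetical sketch of building the chat messages sent to the model.
def build_messages(history):
    system = (
        "You are a gentle deer companion helping a child journal. "
        "Ask one short, kind, age-appropriate question at a time."
    )
    messages = [{"role": "system", "content": system}]
    for entry in history:  # today's journal turns so far
        messages.append({"role": "user", "content": entry})
        # in the real app, the model's replies would be interleaved here
    return messages
```

The resulting list is what a Chat Completions-style API expects as its `messages` parameter.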
## Challenges we ran into
The biggest problems we ran into were backend errors. Most of us do not have much experience with the backend, so we spent a lot of time trying to understand, integrate, and connect everything. Furthermore, the OpenAI API documentation was often wrong, so it took a lot of guessing to figure out the correct syntax. Plus, we were too ambitious with the number of features we wanted to add to “Deer, Journal”, which made us focus on features that were much less important to the overall product. Also, a last-minute change forced us to convert all our OpenAI API programs from Python to JavaScript.
## Accomplishments that we're proud of
We are proud of the fact that we got a functioning app and got the key features down. Additionally, the UI turned out much better than expected and all OpenAI API integration was highly successful.
## What we learned
We learned that next time most of us need to better familiarize ourselves with the backend, because its importance was certainly underestimated originally. Plus, we need to avoid feature creep: trying to add too many features at once without tackling the primary use and functionality of the product.
## What's next for Deer, Journal
What’s next for “Deer, Journal” is optimizing its primary AI-prompted journaling features and eventually adding the features we originally wanted. This includes computer vision to better recognize facial features so that mood analysis can be performed, and to help guide users through grounding games.
|
## Inspiration
We looked at some of the sponsored prizes for ideas as to what our hackathon project could be about, and we noticed that TELUS would be sponsoring a prize for a mental health related project. We thought that this would be an excellent idea, especially given the current situation. Because of the global pandemic, people have been more socially isolated than before. Many students in particular have been struggling with online schooling, which further takes a toll on their mental health. In general, mental health is a bigger issue than ever before, which is why we thought creating a project about it would be appropriate. With all of that in mind, we decided that our overall goal would be to create some sort of application to promote positivity and hopefully bring some inspiration and order to everyone's lives during these uncertain times.
## What it does
Ponder is an app which allows the user to write journal entries about whatever they're grateful for. This prompted self reflection is meant to have the user try to stay optimistic and see the bright side of whatever situation they may be in. It also provides inspiring quotes and stories for the user to read, which can hopefully motivate the user in some way to self-improve. Lastly, there is also a to-do list function, which is meant to help keep the user on task and focused, and allow them to finish whatever work they may need to do before they fall behind and create more stress for themselves.
## How we built it
We built the frontend using React Native, using Expo CLI. Most of the project was frontend related, so most of it was made using React Native. We had a bit of backend in the project as well, which was done using SQLite.
## Challenges we ran into
One major challenge we ran into was the fact that most of us had no experience using React Native, and most of our project ended up using React Native. Because of this, we had to learn on the fly and figure things out, which slowed down the development process. Another challenge was time zone differences, which caused some time conflicts and inconveniences.
## Accomplishments that we're proud of
Overall, we are proud that we were able to develop Ponder to the extent that we did, especially considering most of us had no experience using React Native. We are proud of our app's front end design, and the project idea is something that we're proud of as well, since we were able to think of features that we hadn't ever seen in any other app which we could try and implement.
## What we learned
Many of us learned how to use React Native this weekend, and for most of us, it was our first hackathon. We learned what it's like being under time pressure and trying to collaborate and work towards a goal, and we learned about the challenges but also how rewarding such an experience could be.
## What's next for Ponder
Next for Ponder, with more time, we could add even more features, such as things for tracking mood. We could also improve some functionality of some of the existing features, given a bit more time.
|
## Inspiration
As students, we understand the stress that builds up in our lives. Furthermore, we know how important it is to reflect on the day and plan how to improve for tomorrow. It might be daunting to look for help from others, but wouldn't an adorable dino be the perfect creature to tell about your day? Cuteness has been scientifically proven to increase happiness, and our cute dino will always be there to cheer you up! We want students to have a way to talk about their day, get cheered up by a cute little dino buddy, and get suggestions on what to focus on tomorrow to increase their productivity! DinoMind is your mental health dino companion for improving your mental wellbeing!
## What it does
DinoMind uses the power of ML models, LLMs, and of course, cute dinos (courtesy of DeltaHacks of course <3)!! Begin your evening by opening DinoMind and clicking the **Record** button, and tell your little dino friend all about your day! A speech-to-text model will transcribe your words, and save the journal entry in the "History" tab. We then use an LLM to summarize your day for you in easy to digest bullet points, allowing you to reflect on what you accomplished. The LLM then creates action items for tomorrow, allowing you to plan ahead and have some sweet AI-aided productivity! Finally, your dino friend gives you an encouraging message if they notice you're feeling a bit down thanks to our sentiment analysis model!
## How we built it
Cloudflare was our go-to for AI/ML models. These model types used were:
1. Text generation
2. Speech-to-text
3. Text classification (in our case, it was effectively used for sentiment analysis)
We used their AI Rest API, and luckily the free plan allowed for lots of requests!
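A sketch of how such a REST call can be assembled; the URL shape follows Cloudflare's documented `accounts/{account_id}/ai/run/{model}` pattern, but the account id, token, and model name below are placeholders, and no request is actually sent here:

```python
import json
import urllib.request

def build_ai_request(account_id, token, model, payload):
    """Assemble (but do not send) a Workers AI REST request."""
    url = (f"https://api.cloudflare.com/client/v4/accounts/"
           f"{account_id}/ai/run/{model}")
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

In the app, the same pattern was used with the speech-to-text, text-generation, and text-classification models, only the model path and payload changing.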
Expo was the framework we used for our front-end, since we wanted some extra oomph to our react native application.
## Challenges we ran into
A small challenge was that we really really wanted to use the Deltahacks dino mascots for this year in our application (they're just so cute!!). But there wasn't anything with each one individually online, so we realized we could take photos of the shirts and extra images of the dinos from that!!
As for the biggest challenges, that was integrating our Cloudflare requests with the front-end. We had our Cloudflare models working fully with test cases too! But once we used the recording capabilities of react native and tried sending that to our speech-to-text model, everything broke. We spent far too long adding `console.log` statements everywhere, checking the types of the variables, the data inside, hoping somewhere we'd see what the difference was in the input from our test cases and the recorded data. That was easily our biggest bottleneck, because once we moved past it, we had the string data from what the user said and were able to send it to all of our Cloudflare models.
## Accomplishments that we're proud of
We are extremely proud of our brainstorming process, as this was easily one of the most enjoyable parts of the hackathon. We were able to bring our ideas from 10+ to 3, and then developed these 3 ideas until we decided that the mental health focused journaling app seemed the most impactful, especially when mental health is so important.
We are also proud of our ability to integrate multiple AI/ML models into our application, giving each one a unique and impactful purpose that leads to the betterment of the user's productivity and mental wellbeing. Furthermore, the majority of the team had never used AI/ML models in an application before, so seeing their capabilities and integrating them into a final product was extremely exciting!
Finally, our perseverance and dedication to the project carried us through all the hard times, debugging, and sleepless night (singular, because luckily for our sleep deprived bodies, this wasn't a 2 night hackathon). We are so proud to present the fruits of our labour and dedication to improving the mental health of students just like us.
## What we learned
We learned that even though every experience we've had shows us how hard integrating the back-end with the front-end can be, nothing ever makes it easier. However, your attitude towards the challenge can make dealing with it infinitely easier, and enables you to create the best product possible.
We also learned a lot about integrating different frameworks and the conflicts that can arise. For example, did you know that using Expo (and by extension, React Native) makes it impossible to use certain packages?? We wanted to use the `fs` package for our file system access, but it was impossible! Instead, we needed to use `FileSystem` from `expo-file-system` :sob:
Finally, we learned about Cloudflare and Expo since we'd never used those technologies before!
## What's next for DinoMind
One of the biggest user-friendly additions to any LLM response is streaming, and DinoMind is no different. Even ChatGPT isn't always that fast at responding, but it looks a lot faster when you see each word as it's produced! Integrating streaming into our responses would make it a more seamless experience for users as they are able to immediately see a response and read along as it is generated.
DinoMind also needs a lot of work in finding mental health resources from professionals in the field that we didn't have access to during the hackathon weekend. With mental wellbeing at the forefront of our design, we need to ensure we have professional advice to deliver the best product possible!
|
## Inspiration
With the recent Supreme Court decision rejecting the North Carolina state legislature's redistricting, we wanted to inform and engage our fellow citizens with data visualizations and a redistricting simulation.
## What it does
Through mesh simplification with Bresenham's line algorithm, the app can compute and unionize complex shapefile MultiPolygons into a fishnetted vector space. This vertex simplicity creates an empowering experience for the user, in which the user can explore both how the system works, as well as how open it is to abuse. Gerry allows users and players to experience the realities of one of the most important (yet least understood) aspects of our political system.
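Bresenham's algorithm itself is standard; a compact integer version of the line rasterization at the heart of the fishnetting might look like this (a textbook implementation, not the project's exact code):

```python
def bresenham(x0, y0, x1, y1):
    """Return the grid cells approximating the line from (x0,y0) to (x1,y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # accumulated error term, updated as we step
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points
```

Running this over each polygon edge yields the fishnet cells that the district boundary passes through.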
## How we built it
* Used Python to fishnet shapefile MultiPolygons
* Used past election data of U.S. representatives from the University of Michigan's Institute for Social Research to determine political alignments with district (including non-legislative) precision
* Used React and Redux to build the client-facing application
## Challenges we ran into and Accomplishments that we're proud of
* The mesh simplification algorithm was by far our greatest hurdle. It required the use of Bresenham's line algorithm, an algorithm that determines the points of a raster in N-space that should be selected to form a close approximation to a straight line between two points.
+ Due to the sheer size of shapefiles and the amount of MultiPolygons that needed processing, the algorithm required several hours of optimizing and mesh-precision tweaking
* Collecting data on representative elections took several hours of research
* Much of the React front-end had to be refactored to account for the innumerable fishnetted cells that visualize a state or the country.
* The precision setting on the mesh would sometimes create "holes" in whole columns or rows.
* MERGING! We only had Polygon shapes to the Congressional districts, and merging these districts into states, and then America, took a lot of whiteboarding and algorithm design.
## What we learned
* Sometimes committing broken code is for the best. Our team member, Patrick Chan, learned this the hard way when he accidentally overwrote three hours of our mesh simplification algorithm with an earlier backup.
## What's next for Gerry
|
## *OneVote can change the world.*
## What it does
While our generation is more informed and engaged in politics than ever, we millennials and Gen Z often fail to actually show up to the polls. A recent poll found only 28% of young adults aged 18–29 were certain they'd vote in the midterms, versus 74% of seniors.
Our future is at stake, and it’s in our hands. Issues like climate change and gun control are vital to our very future, and social issues like abortion rights and immigration are a pressure point to many.
Meanwhile, many historians are calling the midterm elections this fall some of the most critical this country has had in decades. Every vote counts.
## How we built it
We used Next.js, a server-rendering framework around React, Facebook’s UI component rendering library, for our frontend.
The backend uses Express and the database Postgres, combined with several custom web scrapers for assembling other data.
## Challenges we ran into
We had to figure out how best to integrate multiple information sources, including Google’s Civic Information API, into our project.
Working in an asynchronous programming model posed a challenge, in that we had to handle loading lots of data at once.
## Accomplishments that we're proud of
It was important to us to not to make OneVote party-specific, so the project is entirely politically neutral.
Writing the API was one team member’s first project in Node.js! We all were learning new frameworks as we built this.
## What we learned
We learned quite a bit about the struggles of building under a time crunch. Being flexible with development is important, especially in today's fast-paced world.
## What's next for OneVote
We plan to launch OneVote as a full webapp within the next few weeks and hopefully will receive some press coverage. We want to get this in the hands of young voters as soon as possible.
|
## Inspiration
We were inspired to create this site to raise the importance of being a global citizen in our modern world. Our biggest goal was to create a site that would achieve just that.
## What it does
Our site serves as a beacon of knowledge for those looking for an educational tool or more importantly, those looking to become global citizens. Our site allows users to search for countries or US states and see them on a map visually along with learning important basic, demographic, and cultural information about the country or US state.
## How we built it
For the backend, we used Flask as a base. This allowed us to handle POST and GET requests that query the Wolfram|Alpha Short Answers API for specific information on countries around the world, along with US states. The API logic was written in Python and JavaScript. On top of this, Flask allowed us to host the site. To build out the frontend, we used a combination of HTML, CSS, and JavaScript.
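The piping idea reduces to building the Short Answers query URL server-side and returning the API's plain-text answer to the frontend. A stdlib-only sketch (the app id is a placeholder, and the Flask route is shown only as a comment so nothing here performs a network call):

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # used only in the commented route below

API = "https://api.wolframalpha.com/v1/result"

def short_answer_url(query, app_id):
    """Build the Wolfram|Alpha Short Answers query URL."""
    return API + "?" + urlencode({"appid": app_id, "i": query})

# Inside the Flask app, the piping looks roughly like:
#   @app.route("/ask")
#   def ask():
#       return urlopen(short_answer_url(request.args["q"], APP_ID)).read()
```

The Short Answers endpoint returns a single plain-text line, which is why it can be forwarded to the frontend directly.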
## Challenges we ran into
Our biggest challenge was figuring out how to use Flask to actually send POST and GET request for the Wolfram|Alpha Short Answers API. We eventually figured out how to pipe the output of the Short Answers API to the frontend using a combination of our knowledge in Python and JavaScript.
## Accomplishments that we're proud of
We are so proud that we managed to build a full-stack website in just a day! We achieved our mission of serving as a beacon of knowledge for users anywhere.
## What we learned
The most important thing we learned was how to use Flask for websites, and more specifically, in the context of calling APIs and piping the results to the frontend. On top of this, we sharpened our skills in JavaScript, HTML, and CSS while designing the frontend pages for aesthetic results!
## What's next for MAPIFY
Our next goal is to add a sharing function on the site to allow users to share the information they receive with others. On top of this, we plan to add a way for users to ask specific questions about a given US state or country.
|
## Inspiration
My colleagues and parents inspired me to build this project.
### Introduction
A restaurant is a kind of business that serves people ready-made food. Many restaurants have a lot of difficulty managing the business, such as customer ordering and reservation of tables. With manual customer ordering it is difficult for the waiter to keep customer information correct, and the information may get lost. The current (manual) system is no longer effective and efficient to use; it cannot save, manage, and monitor the restaurant methodically enough. We need a better restaurant management system in place. This system is developed to automate the day-to-day activity of a restaurant and to provide service facilities to restaurants as well as to customers.
### Application
The main point of developing this system is to help restaurant administrators manage the restaurant business and to help customers order and reserve tables digitally. This restaurant management system can be used by employees in a restaurant to handle clients and their orders, and can help them easily find free tables or place orders. A digital restaurant management system helps the restaurant administrator manage the restaurant more efficiently and provides various services to customers that make their experience more delightful.
### Services Provided
This restaurant management system provides various services:
* **Reservation Service**: Customers can reserve the table they want, as well as access their already reserved table. The system also informs the customer whether the table they have selected is available for reservation.
* **Display of Menu**: The system displays a systematic menu of the items available at the restaurant along with the price of each item. The menu is organised into categories such as salads, desserts, soups, beverages, etc.
* **Dine In**: If a customer does not have an already reserved table, the system allows them to select an available table to dine in at.
* **Ordering**: The system allows the client to place orders easily. The client can keep ordering and reordering items from the menu as many times as they want, and stop when they are done and don't want to place any further orders.
* **Bill Generation**: The system calculates and generates a systematic bill according to the price and quantity of the items ordered. It also allows customers to use coupons, if they have any, to get a discount on the amount they have to pay.
* **Accepts Information from the User**: The system accepts the customer's important information, like name, contact number, address, and mode of payment, and prints this information on the bill.
* **About the Restaurant**: The system allows the user to access information about the restaurant, like the name of the owner, the manager, the year of establishment, etc.
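The bill-generation rule (price times quantity per item, minus an optional coupon discount) can be sketched as follows; it is shown in Python for brevity even though the project itself is written in C, and the function name and example order are illustrative:

```python
def generate_bill(order, coupon_percent=0):
    """order: list of (name, unit_price, quantity) tuples."""
    subtotal = sum(price * qty for _, price, qty in order)
    discount = subtotal * coupon_percent / 100
    return subtotal - discount

# e.g. generate_bill([("soup", 40, 2), ("salad", 60, 1)], coupon_percent=10)
```

In the C project, the equivalent logic lives in `BillGenerator()`, which also prints the customer's details on the bill.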
### Project Abstract
Although graphics have not been used in this project, user-defined functions and structures have been used effectively. The major user-defined functions in this C project are:
* `void choose()`
* `void seats()`
* `void showMenu()`
* `void Display()`
* `void BillGenerator()`
The Customer Billing System application is simple to use. To use the application, click the exe file, and you will have four options:
1. Dine-in
2. Reservation of seat
3. Already reserved seat
4. About us
As per your need, enter 1, 2, 3, or 4 and follow the instructions provided by the application itself.
Lines of code: 995
## How we built it
I built it in the C programming language, using Code::Blocks as the platform for writing and running my code.
## Challenges we ran into
I faced various challenges; sometimes bugs appeared in the program, and 1-2 hours were spent resolving them.
## Accomplishments that we're proud of
This is my first project in the C programming language, which was the best experience for me to begin with.
## What we learned
I learned that programming is an interesting part of a programmer's life; it connects a programmer to real-world problems that I can see in our surroundings.
## What's next for Restaurant bill Management System
Next, I am making a Python virtual assistant (AI) that can recognize my commands and act on them.
|
## Inspiration
Therapy should be as acceptable as any other doctor's appointment. Physical and mental health go hand in hand at maintaining an overall well-being of a person. Moreover, the [rising levels](https://www.healthline.com/health-news/millennial-depression-on-the-rise) of depression and other mental disorders in millennials portray an ever-increasing need for accessible methods of therapeutic treatment.
## What it does
WellNote is a unique, holistic solution that enables any individual to practice mental well-being and monitor their behaviour over a period of time. It is not meant to replace a therapist, but rather to normalize the practice of self-reflection and provide an alternate outlet that can process your responses and give you meaningful feedback, such as helpful life tips, previously unknown sources of stress, and progress tracking.
## Accomplishments that we're proud of
It was an awesome learning experience to play around with a variety of language and tools, and we had a lot of fun!
|
## Inspiration
We wanted to explore more of what GCP has to offer in a practical sense, while trying to save money as poor students.
## What it does
The app tracks your location and, using the Google Maps API, computes a geofence that notifies you of the restaurants you are near and lets you load the coupons that are valid for them.
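A minimal sketch of the geofence check, assuming a plain haversine distance test against a fixed radius (the app itself relies on the Google Maps API; the coordinates and radius below are invented for illustration):

```python
# Haversine-based "am I inside the geofence?" check. Coordinates and the
# 500 m radius are illustrative assumptions, not values from the app.
import math

def within_geofence(user, restaurant, radius_m=500):
    """user/restaurant: (lat, lon) in degrees; True if inside the radius."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*user, *restaurant))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    dist = 2 * 6371000 * math.asin(math.sqrt(a))  # great-circle distance in metres
    return dist <= radius_m

inside = within_geofence((43.6532, -79.3832), (43.6540, -79.3840))
```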
## How we built it
React-native, Google Maps for pulling the location, python for the webscraper (*<https://www.retailmenot.ca/>*), Node.js for the backend, MongoDB to store authentication, location and coupons
## Challenges we ran into
React-Native was fairly new, linking a python script to a Node backend, connecting Node.js to react-native
## What we learned
Exposure to new APIs, and experience linking tools together.
## What's next for Scrappy.io
Improvements to the web scraper, potentially expanding beyond restaurants.
|
partial
|
## Inspiration
Giphy prize, rarjpeg steganography.
## What it does
You write a message and select a Giphy GIF to hide the message in. The app produces an identical GIF with the message hidden inside it.
## How I built it
It's a client-side single-page web app. It uses a peculiarity of the GIF format to embed the text.
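One classic peculiarity of this kind, and our guess at the sort of trick involved, is that GIF decoders stop reading at the trailer byte `0x3B`, so bytes appended after it travel with the file but are never rendered. The exact encoding Gifcryption uses isn't documented here; this sketch also assumes the message contains no `;` byte:

```python
# Guessed sketch of trailer-append steganography: a valid GIF ends with the
# trailer byte 0x3B, and decoders ignore anything after it.

def hide_message(gif_bytes: bytes, message: str) -> bytes:
    # Sanity-check that this looks like a complete GIF before appending.
    assert gif_bytes[:3] == b"GIF" and gif_bytes[-1:] == b"\x3b"
    return gif_bytes + message.encode("utf-8")

def extract_message(stego: bytes) -> str:
    # Everything after the last trailer byte is treated as the payload.
    # Caveat: a ';' (0x3B) inside the message would break this simple scheme.
    return stego[stego.rindex(b"\x3b") + 1:].decode("utf-8")
```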
## Challenges I ran into
Client-side GIF manipulation.
## Accomplishments that I'm proud of
Building an MVP in a limited time.
## What I learned
TypeScript, GIF format specification.
## What's next for Gifcryption
1. Add another GIF that will serve as an encryption key.
2. Encode the message in the least significant bits of frame pixels.
|
## Inspiration
> As a group of mostly women, we all noticed that, due to our passive and introverted personalities, we struggle with confronting others about the money they owe us. A friend shared an article with us noting that women tend to agree to more in the workplace and in mundane activities, and tend to struggle with saying no. We noticed this issue in our financial lives as well, where we let our friends' debts "slide" out of fear of being seen as a nuisance.
## What it does
> It is an Android app that tracks transactions inputted by users who are lending their money.
> It does the following:
>
> - Create groups that allow you to split transactions.
> - Add contacts and search for users that are already using the app.
> - Add transactions and uncheck-mark users.
> - Allocate fractions of the amount owing per individual.
> - Show net costs for each individual based on a breakdown.
> - Check-mark off users that have already paid.
> - Send weekly notifications of money owing to those who are still check-marked.
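The "net costs based on a breakdown" step above can be sketched as a simple balance computation: each transaction has a payer, a total, and per-person shares, and summing shares owed minus amounts fronted gives each person's net position. The names and amounts below are illustrative:

```python
# Sketch of net-balance computation for split transactions.
# Positive balance = that person owes money; negative = they are owed.
from collections import defaultdict

def net_balances(transactions):
    """transactions: list of (payer, total, {person: share})."""
    balance = defaultdict(float)
    for payer, total, shares in transactions:
        balance[payer] -= total            # the payer fronted the money
        for person, share in shares.items():
            balance[person] += share       # each person owes their share
    return dict(balance)

txns = [("alice", 30.0, {"alice": 10.0, "bob": 10.0, "carol": 10.0})]
balances = net_balances(txns)
```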
## How we built it
> Cash Collect was built in Android Studio using Java and XML.
> The app connects to Firebase and utilizes its authentication tools and real-time database.
## Challenges we ran into
> Not all of the members had used Android Studio, so there was a slight learning curve. The debugging process was time-consuming and limited the number of features that we wanted to implement.
## Accomplishments that we're proud of
> We're all really happy about the product we built, as for most of us (3 out of 4) it was our first hackathon! Completing the hackathon and experiencing what it is like to create, implement, and present a project all in a short 36-hour period is something we're all super proud of. Learning how to implement databases and use Android Studio is a huge accomplishment for all of us.
## What we learned
> We developed skills with Android Studio, and for some of us Java and XML were also new languages. We learned how to work in a team and divide our work based on skills. We also learned how to manage time efficiently, and the importance of project planning and design flow.
## What's next for Cash Collect
> Cash Collect is still young, but we would love to see the following implemented for our app:
>
> - A contact database to support real-time transaction tracking.
> - Secure data scraping from banks to automatically list transactions within a specified time frame.
> - Expansion to business applications, such as charging interest on overdue invoices.
|
## Inspiration
We both came to the hackathon without a team and couldn't generate ideas about our project in the Hackathon. So we went to 9gag to look into memes and it suddenly struck us to make an Alexa skill that generates memes. The idea sounded cool and somewhat within our skill-sets so we went forward with it. We procrastinated till the second day and eventually started coding in the noon.
## What it does
The name says it all: you basically scream at Alexa to make memes (well, not quite; speaking works too). We have a working website, [www.memescream.net](http://www.memescream.net), which can generate memes using Alexa or a keyboard and mouse. We also added features to download memes and share them to Facebook and Twitter.
## How we built it
We divided the responsibilities so that one of us handled the entire front-end, while the other got his hands dirty with the backend. We used HTML, CSS, and jQuery to make the web framework for the app, and used Alexa, Node.js, PHP, Amazon Web Services (AWS), and FFMPEG to create the backend for the skill. We started coding by the noon of the second day and continued uninterrupted until the project was concluded.
## Challenges we ran into
We ran into challenges with understanding the GIPHY API, Parsing text into GIFs and transferring data from Alexa to web-app. (Well waking up next day was also a pretty daunting challenge all things considered)
## Accomplishments that we're proud of
We're proud our persistence to actually finish the project on time and fulfill all requirements that were formulated in the planning phase. We also learned about GIPHY API and learned a lot from the workshops we attended (Well we're also proud that we could wake up a bit early the following day)
## What we learned
Since we ran into issues when we began connecting the web app with the skill, we gained a lot of insight into using PHP, jQuery, Node.js, FFMPEG and GIPHY API.
## What's next for MemeScream
We're eager to publish the more robust public website and the Alexa skill-set to Amazon.
|
losing
|
## Inspiration
* Web3 is in its static phase. Contracts are not dynamic: you call a contract, it performs an action, and it stops. This is similar to the read-only era of Web2. Web2 became dynamic with the advent of APIs and webhooks, when apps could call each other, share events, and trigger actions. This is missing in web3.
* Because this is missing, the potential use cases of smart contracts are restricted. If you want to perform an on-chain action based on a trigger/event, it is extremely hard and expensive right now.
* This is a larger infrastructure problem. Efforts have been made to solve it in the past, but the solutions fall short in the breadth they cover and are constrained to the particular applications they were built for.
* Therefore, we decided to build Relic, which uses hybrid combinations of on-chain/off-chain technologies to solve this problem.
## What it does
* Our core product is a trigger-based automation framework. dApps/developers or individual users can come to Relic, define a trigger (smart contract method invocations, emitting of events, etc.), define an off-chain (custom API calls & webhooks) or on-chain (contract calls, funds transfer, etc.) action and link them together.
* For instance, you can trigger liquidation of a particular asset on Uniswap if it drops below a certain price. Another example would be transferring your deployed investments from Compound to Aave according to their APYs, or on the passing of a governance proposal in their DAOs. Personally, we have used Relic to track transfers of ownership of our favourite ENS domains, to snipe them whenever they become available :)
## How we built it
* Relic listens to every block deployed on the chain (Ethereum mainnet, Polygon mainnet, Sonr, Avalanche) and uses our custom-built query language to parse them.
* We match the transactions against the parameters mentioned in the "triggers" set up by users and perform the required off-chain or on-chain actions.
* For on-chain actions, we provide each user with a semi-custodial wallet. Each of the contract invocations is signed by that wallet, and the user just has to fund it to cover the gas fees.
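The trigger-matching loop can be sketched roughly as below. The field names and the exact-match rule are simplifying assumptions; Relic's actual query language and chain listeners are more involved:

```python
# Simplified sketch: each transaction in a new block is compared against
# user-defined triggers; matches would fire the linked on/off-chain action.

def match_trigger(tx, trigger):
    """A trigger matches when every field it constrains equals the tx's value."""
    return all(tx.get(field) == value for field, value in trigger["filters"].items())

def process_block(block, triggers):
    fired = []
    for tx in block["transactions"]:
        for trig in triggers:
            if match_trigger(tx, trig):
                fired.append((trig["name"], tx["hash"]))  # action would run here
    return fired

# Illustrative trigger: watch for transfers touching a (made-up) ENS registry.
triggers = [{"name": "ens-transfer",
             "filters": {"to": "0xENSRegistry", "method": "transferFrom"}}]
block = {"transactions": [
    {"hash": "0xabc", "to": "0xENSRegistry", "method": "transferFrom"},
    {"hash": "0xdef", "to": "0xOther", "method": "swap"},
]}
events = process_block(block, triggers)
```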
## Challenges we ran into
* Making sure we don't miss on any blocks being mined and transactions executed - we implemented fail/safe across our architecture.
* Implementing a safe semi-custodial wallet for high stakes automations was a challenge, we had to deep dive into encryption models.
## Accomplishments that we're proud of
Relic is intrinsically cross-chain by nature. Your triggers can be on any chain, independent of where you want to perform the action.
## What we learned
We learnt more about blockchain nodes and about the nature of blocks and headers. The nuances, and thus benefits, of each chain we worked with were fascinating. While designing our architecture, we implemented robust mechanisms to run tasks and jobs perpetually.
## What's next for Relic
We envision Relic solving major difficulties across web3 applications and use cases in the near future. We plan on transforming into a fully featured framework that works on any chain and is highly composable.
|
# Ether on a Stick
Max Fang and Philip Hayes
## What it does
Ether on a Stick is a platform that allows participants to contribute economic incentives for the completion of arbitrary tasks. More specifically, it allows participants to pool their money into a smart contract that will pay out to a specified target if and only if a threshold percentage of contributors to the pool, weighted by contribution amount, votes that the specified target has indeed carried out an action by a specified due date.
Example: A company pollutes a river, negatively affecting everyone nearby. Residents would like the river to be cleaned up, and are willing to pay for it, but only if the river is cleaned up. Solution: Residents use Ether on a Stick to pool their funds together that will pay out to the company if and only if a specified proportion of contributors to the pool vote that the company has indeed cleaned up the river.
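The payout rule above amounts to a contribution-weighted threshold vote. Here is a Python rendering of the arithmetic only (the real contract is written in Serpent, and the addresses, amounts, and threshold below are placeholders):

```python
# Sketch of the payout decision: pool contributions, weight each vote by
# contribution amount, pay out only if the weighted "yes" share meets the
# threshold. All values are illustrative.

def payout_approved(contributions, votes, threshold=0.5):
    """contributions: {addr: amount}; votes: {addr: True/False};
    True if yes-weight / total contributed weight >= threshold."""
    total = sum(contributions.values())
    yes = sum(amt for addr, amt in contributions.items() if votes.get(addr))
    return total > 0 and yes / total >= threshold

approved = payout_approved({"a": 60, "b": 40}, {"a": True, "b": False}, threshold=0.5)
```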
## How we built it
Ether on a Stick implements, in code, a game-theoretic mechanism called a Dominant Assurance Contract, which coordinates the voluntary creation of public goods in the face of the free-rider problem. It is a decentralized app (or "dapp") built on the Ethereum network, implementing a "smart contract" in Serpent, Ethereum's Python-like contract language. Its decentralized and trustless nature enables the creation of agreements without a third-party escrow who could be influenced or corrupted into paying out to the wrong party.
## Challenges
The first 20 hours of the hackathon were mostly spent setting up and learning how to use the Ethereum client and interact with the network. A significant portion was also spent planning the exact specifications of the contract and deciding which mechanisms would make the network most resistant to attack. Despite the lack of any kind of API reference, writing the contract itself was easier; deploying it to the Ethereum testnet was another challenge, as large swaths of the underlying technology haven't been built yet.
## What's next for Ether on a Stick
We'd like to take a step much closer to a game-theoretically sound system (don't quote us, we haven't written a paper on it) by implementing a sort of token-based reputation system, similar to that of Augur. In this system, a small portion of pooled funds is set aside to be rewarded to reputation-token-bearing oracles that correctly vote on the outcomes of events. "Correctly voting" means voting with the majority of the other randomly selected oracles for a given event. We would also have to restrict events to only those which are easily and publicly verifiable; however, by decoupling voting from contribution, this bypasses a Sybil attack wherein malicious actors (or the contract-specified target of the funds) could use a large amount of financial capital to sway the vote in their favor.
|
## Inspiration
We were inspired by how there were many instances of fraud with regards to how donations were handled. It is heartbreaking to see how mismanagement can lead to victims of disasters not receiving the help that they desperately need.
## What it does
TrustTrace introduces unparalleled levels of transparency and accountability to charities and fundraisers. Donors will now feel more comfortable donating to causes such as helping earthquake victims since they will now know how their money will be spent, and where every dollar raised is going to.
## How we built it
We created a smart contract that allowed organisations to specify how much money they want to raise and how they want to segment their spending into specific categories. This is then enforced by the smart contract, which puts donors at ease as they know that their donations will not be misused, and will go to those who are truly in need.
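The category-enforcement rule described above can be sketched as follows, in Python rather than a contract language purely to illustrate the logic: a spend is rejected whenever it would push a category past the ceiling the organisation committed to. The category names and figures are invented:

```python
# Sketch of per-category budget enforcement, mirroring what the smart
# contract would enforce on-chain. Names and amounts are illustrative.

class CampaignLedger:
    def __init__(self, budgets):
        self.budgets = dict(budgets)             # {category: committed ceiling}
        self.spent = {c: 0.0 for c in budgets}   # running spend per category

    def spend(self, category, amount):
        if category not in self.budgets:
            raise ValueError("category was not declared up front")
        if self.spent[category] + amount > self.budgets[category]:
            raise ValueError("would exceed the committed budget")
        self.spent[category] += amount

ledger = CampaignLedger({"food": 1000.0, "shelter": 2000.0})
ledger.spend("food", 400.0)
```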
## Challenges we ran into
The integration of smart contracts and the web development frameworks were more difficult than we expected, and we overcame them with lots of trial and error.
## Accomplishments that we're proud of
We are proud of being able to create this in under 36 hours for a noble cause. We hope that this creation can eventually evolve into a product that can raise interest, trust, and participation in donating to humanitarian causes across the world.
## What we learned
We learnt a lot about how smart contract works, how they are deployed, and how they can be used to enforce trust.
## What's next for TrustTrace
Expanding towards serving a wider range of fundraisers such as GoFundMes, Kickstarters, etc.
|
partial
|
We were inspired by teenagers and adults who have so much free time but don't do anything good with it!
Our application asks users when they are free and then offers activities tailored to that individual.
We used android native technologies and web services to get the data.
What was supposed to be a Google Calendar application turned into this app because of some problems we ran into with the developer console; fortunately, we were able to spin it off as another app despite the technical impossibilities we faced.
We are very proud of actually submitting a working app at this hackathon, of having worked in a team with people who had never met each other before, and mostly of being participants in the greatest hackathon in the world!
How much a project can deviate from the original idea, and what it takes to keep the application viable despite difficulties. There are always going to be blocks in the road; the experience and wisdom to overcome them is what really matters.
BookMeUp will be updated to be fully, 100% automatic, with no user input necessary. We feel this is the worst part of using a personal assistant, which is what we want BookMeUp to evolve into.
|
## Inspiration
Picture this, I was all ready to go to Yale and just hack away. I wanted to hack without any worry. I wanted to come home after hacking and feel that I had accomplished a functional app that I could see people using. I was not sure if I wanted to team up with anyone, but I signed up to be placed on a UMD team. Unfortunately none of the team members wanted to group up, so I developed this application alone.
I volunteered at Technica last week and saw the chaos that is team formation, and I saw the team formation at YHack. I think there is a lot of room for team formation to be more fleshed out, so that is what I set out to fix. I wanted to build an app that could make team building at hackathons efficient.
## What it does
Easily set up "Event rooms" for hackathons, allowing users to join the room, specify their interests, and message other participants that are LFT (looking for team). Once they have formed a team, they can easily get out of the chatroom simply by holding their name down and POOF, no longer LFT!
## How I built it
I built a Firebase server that stores a database of events. Events hold information obviously regarding the event as well as a collection of all members that are LFT. After getting acquainted with Firebase, I just took the application piece by piece. First order of business was adding an event, and then displaying all events, and then getting an individual event, adding a user, deleting a user, viewing all users and then beautifying with lots of animations and proper Material design.
## Challenges I ran into
Android Animation still seems to be much more complex than it needs to be, so that was a challenge and a half. I want my applications to flow, but the meticulousness of Android and its fragmentation problems can cause a brain ache.
## Accomplishments that I'm proud of
In general, I am very proud with the state of the app. I think it serves a very nifty core purpose that can be utilized in the future. Additionally, I am proud of the application's simplicity. I think I set out on a really solid and feasible goal for this weekend that I was able to accomplish.
I really enjoyed the fact that I was able to think about my project, plan ahead, implement piece by piece, go back and rewrite, etc until I had a fully functional app. This project helped me realize that I am a strong developer.
## What I learned
Do not be afraid to erase code that I already wrote. If it's broken and I have lots of spaghetti code, it's much better to try and take a step back, rethink the problem, and then attempt to fix the issues and move forward.
## What's next for Squad Up
I hope to continue to update the project as well as provide more functionalities. I'm currently trying to get a published .apk on my Namecheap domain (squadupwith.me), but I'm not sure how long this DNS propagation will take. Also, I currently have the .apk to only work for Android 7.1.1, so I will have to go back and add backwards compatibility for the android users who are not on the glorious nexus experience.
|
## Inspiration
During our brainstorming phase, we cycled through a lot of useful ideas that later turned out to be actual products on the market or completed projects. After four separate instances of this and hours of scouring the web, we finally found our true calling at QHacks: building a solution that determines whether an idea has already been done before.
## What It Does
Our application, called Hack2, is an intelligent search engine that uses Machine Learning to compare the user’s ideas to products that currently exist. It takes in an idea name and description, aggregates data from multiple sources, and displays a list of products with a percent similarity to the idea the user had. For ultimate ease of use, our application has both Android and web versions.
## How We Built It
We started off by creating a list of websites where we could find ideas that people have done. We came up with four sites: Product Hunt, Devpost, GitHub, and Google Play Store. We then worked on developing the Android app side of our solution, starting with mock-ups of our UI using Adobe XD. We then replicated the mock-ups in Android Studio using Kotlin and XML.
Next was the Machine Learning part of our solution. Although there exist many machine learning algorithms that can compute phrase similarity, devising an algorithm to compute document-level similarity proved much more elusive. We ended up combining Microsoft’s Text Analytics API with an algorithm known as Sentence2Vec in order to handle multiple sentences with reasonable accuracy. The weights used by the Sentence2Vec algorithm were learned by repurposing Google's word2vec ANN and applying it to a corpus containing technical terminology (see Challenges section). The final trained model was integrated into a Flask server and uploaded onto an Azure VM instance to serve as a REST endpoint for the rest of our API.
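A toy version of the underlying similarity computation: represent each document as the average of its word vectors and compare documents with cosine similarity. The real pipeline uses trained word2vec weights and Sentence2Vec; the tiny 3-dimensional vectors below are made up purely to show the arithmetic:

```python
# Toy document similarity: mean of word vectors + cosine similarity.
# The 3-d word vectors are invented stand-ins for trained word2vec output.
import math

def doc_vector(words, word_vecs):
    vecs = [word_vecs[w] for w in words if w in word_vecs]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

word_vecs = {"chat": [1, 0, 0], "app": [0.9, 0.1, 0], "game": [0, 1, 0]}
sim = cosine(doc_vector(["chat", "app"], word_vecs),
             doc_vector(["app"], word_vecs))
```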
We then set out to build the web scraping functionality of our API, which would query the aforementioned sites, pull relevant information, and pass that information to the pre-trained model. Having already set up a service on Microsoft Azure, we decided to “stick with the flow” and build this architecture using Azure’s serverless compute functions.
After finishing the Android app and backend development, we decided to add a web app to make the service more accessible, made using React.
## Challenges We Ran Into
From a data perspective, one challenge was obtaining an accurate vector representation of words appearing in quasi-technical documents such as Github READMEs and Devpost abstracts. Since these terms do not appear often in everyday usage, we saw a degraded performance when initially experimenting with pretrained models. As a result, we ended up training our own word vectors on a custom corpus consisting of “hacker-friendly” vocabulary from technology sources. This word2vec matrix proved much more performant than pretrained models.
We also ran into quite a few issues getting our backend up and running, as it was our first using Microsoft Azure. Specifically, Azure functions do not currently support Python fully, meaning that we did not have the developer tools we expected to be able to leverage and could not run the web scraping scripts we had written. We also had issues with efficiency, as the Python libraries we worked with did not easily support asynchronous action. We ended up resolving this issue by refactoring our cloud compute functions with multithreaded capabilities.
## What We Learned
We learned a lot about Microsoft Azure’s Cloud Service, mobile development and web app development. We also learned a lot about brainstorming, and how a viable and creative solution could be right under our nose the entire time.
On the machine learning side, we learned about the difficulty of document similarity analysis, especially when context is important (an area of our application that could use work).
## What’s Next for Hack2
The next step would be to explore more advanced methods of measuring document similarity, especially methods that can “understand” semantic relationships between different terms in a document. Such a tool might allow for more accurate, context-sensitive searches (e.g. learning the meaning of “uber for…”). One particular area we wish to explore are LSTM Siamese Neural Networks, which “remember” previous classifications moving forward.
|
losing
|
# Hungr.ai Hungr.ai Hippos
Our HackWestern IV project which makes hungry hungry hippo even cooler.
## Milestones


## Inspiration
Metaphysically speaking, we didn't find the idea, it found us.
## What it does
A multiplayer game which allows player to play against AI(s) (one for each hippo) while controlling the physical hippos in the real world.
## How we built it
Once we knew we wanted to do something fun with Hungry Hungry Hippos, we acquired the game through Kijiji #CanadianFeels. We deconstructed the game to understand its mechanics and made a rough plan to use servos controlled through a Raspberry Pi 3 to move the hippos. We decided to keep most of our processing on a laptop so as not to burden the Pi, which served as an endpoint streaming video over **web sockets**. **Multithreading** (in Python) allowed us to control each servo/hippo individually from the laptop, with the Pi always listening for commands. The **Flask** framework helped us tie the React.js frontend to the Python servers and backends.
## Challenges we ran into
Oh dear god, where do we begin?
* The servos we expected to be delivered to us by Amazon were delayed and due to the weekend we never got them :( Fortunately, we brought almost enough backup servos
* Multithreading in python!!
* Our newly bought PiCamera was busted :( Fortunately, we found someone to lend us theirs.
* CNN !!
* Working with the Pi without a screen (it doesn't let you ssh into it over a public wifi; we had to use an ethernet cable to find a workaround)
## Accomplishments that we're proud of
Again, oh dear god, where do we begin?
* The hardware platform (with complementing software backend) looks so elegant and pretty
* The front-end tied the whole thing together really well (elegantly simplifying the complexity under the hood)
* The feed from the Picamera to the laptop was so good, much better than we expected.
## What we learned
And again, oh dear god, where do we begin?
* Working with Flask (great if you wanna work with python and js)
* Multithreading in Python
* Working with websockets (and, in general, data transmission between Pi and Laptop over ethernet/network)
## What's next for Hungr.ai
* A Dance Dance Revolution starring our hippo stars (a.k.a Veggie Potamus, Hungry Hippo, Bottomless Potamus and Sweetie Potamus)
* Train our AI/ML models even more/ Try new models
## How do we feel?
In a nutshell, we had a very beneficial spectrum of skills. We believe that the project couldn't have been completed if any of the members weren't present. The learning curve was challenging, but with time and focus, we were able to learn the required skills to carry this project.
|
## Inspiration
This game was inspired by the classical game of connect four, in which one inserts disks into a vertical board to try to get four in a row. As big fans of the game, our team sought to improve it by adding new features.
## What it does
The game is played like a regular game of connect four, except each player may choose to use their turn to rotate the board left or right and let gravity force the pieces to fall downwards. This seemingly innocent change to connect four adds many new layers of strategy and fun to what was already a strategic and fun game.
We developed two products: an iOS app, and a web app, to run the game. In addition, both the iOS and web apps feature the abilities of:
1) Play local "pass and play" multiplayer
2) Play against multiple different AIs we crafted, each of differing skill levels
3) Play live online games against random opponents, including those on different devices!
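The rotate-and-fall mechanic can be sketched as two grid operations, a 90-degree rotation followed by letting pieces settle under gravity. This Python version is a simplification of what the Swift and JavaScript apps do; 0 marks an empty cell:

```python
# Sketch of the rotation mechanic: rotate the board 90° clockwise, then let
# non-empty cells fall to the bottom of each column.

def rotate_right(board):
    """Rotate the grid 90 degrees clockwise."""
    return [list(row) for row in zip(*board[::-1])]

def apply_gravity(board):
    """Within each column, let non-empty cells fall to the bottom."""
    rows, cols = len(board), len(board[0])
    out = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        stack = [board[r][c] for r in range(rows) if board[r][c] != 0]
        for i, piece in enumerate(stack):
            out[rows - len(stack) + i][c] = piece
    return out

board = [[0, 1],
         [1, 0]]
after = apply_gravity(rotate_right(board))
```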
## How we built it
The iOS app was built in Swift and the web app was written with Javascript's canvas. The bulk of the backend, which is crucial for both our online multiplayer and our AIs, came from Firebase's services.
## Challenges we ran into
None of us are particularly artistic, so getting a visually pleasant UI wasn't exactly easy...
## Accomplishments that we're proud of
We are most proud of our ability to successfully run an online cross-platform multiplayer, which we could not have possibly done without the help of Firebase and its servers and APIs. We are also proud of the AIs we developed, which so far tend to beat us almost every time.
## What we learned
Most of us had very little experience working with backend servers, so Firebase provided us with a lovely introduction to allowing our applications to flourish.
## What's next for Gravity Four
Let's get Gravity Four onto even more types of devices and into the app store!
|
## Inspiration
The most important part in any quality conversation is knowledge. Knowledge is what ignites conversation and drive - knowledge is the spark that gets people on their feet to take the first step to change. While we live in a time where we are spoiled by the abundance of accessible information, trying to keep up and consume information from a multitude of sources can give you information indigestion: it can be confusing to extract the most relevant points of a new story.
## What it does
Macaron is a service that allows you to keep track of all the relevant events happening in the world without combing through a long news feed. When a major event happens, news outlets write articles. Macaron aggregates articles from multiple sources, uses NLP to condense the information, classifies the summary into a topic, extracts some keywords, and then presents it all to the user in a digestible, bite-sized info page.
## How we built it
Macaron also goes through various social media platforms (twitter at the moment) to perform sentiment analysis to see what the public opinion is on the issue: displayed by the sentiment bar on every event card! We used a lot of Google Cloud Platform to help publish our app.
## What we learned
Macaron also finds the most relevant charities for an event (if applicable) and makes donating to it a super simple process. We think that by adding an easy call-to-action button on an article informing you about an event itself, we'll lower the barrier to everyday charity for the busy modern person.
Our front end was built on NextJS, with a neumorphism inspired design incorporating usable and contemporary UI/UX design.
We used the Tweepy library to scrape twitter for tweets relating to an event, then used NLTK's vader to perform sentiment analysis on each tweet to build a ratio of positive to negative tweets surrounding an event.
We also used MonkeyLearn's API to summarize text, extract keywords and classify the aggregated articles into a topic (Health, Society, Sports etc..) The scripts were all written in python.
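The sentiment bar reduces to a positive-to-negative ratio over per-tweet compound scores (NLTK's VADER reports a compound score in [-1, 1]; the ±0.05 cutoffs are a common convention, and the scores below are hard-coded stand-ins for real VADER output):

```python
# Sketch of the sentiment bar: classify each tweet's compound score as
# positive or negative, then report the positive share. Scores are
# illustrative stand-ins for VADER output.

def sentiment_ratio(compound_scores, pos_cutoff=0.05, neg_cutoff=-0.05):
    """Fraction of classified tweets that are positive; neutral ones ignored."""
    pos = sum(1 for s in compound_scores if s >= pos_cutoff)
    neg = sum(1 for s in compound_scores if s <= neg_cutoff)
    return pos / (pos + neg) if pos + neg else None

ratio = sentiment_ratio([0.8, 0.3, -0.6, 0.0])
```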
The process was super challenging as the scope of our project was way bigger than we anticipated! Between getting rate limited by twitter and the script not running fast enough, we did hit a lot of roadbumps and had to make quick decisions to cut out the elements of the project we didn't or couldn't implement in time.
Overall, however, the experience was really rewarding and we had a lot of fun moving fast and breaking stuff in our 24 hours!
|
partial
|
## Inspiration
\*\*\* SEE GITHUB LINK FOR BOTH DEMO VIDEOS (Devpost limits to only one video) \*\*\*
Taking care of plants is surprisingly hard. Let's be honest here - almost everybody we know has lost a plant or two (or many) due to poor maintenance. Appliances and tools within our homes are adopting the newest technologies and becoming increasingly automated - what if we apply these advancements to plant care? How far can we take it?
Our group is also interested in the artistic side of the project. Nature and technology are often at odds with one another; we are constantly reminded of the toll that industrialization has taken on our environment. But is it possible to create something where technology and nature coexist, in a symbiotic relationship?
## What it does
The iPot Lite is, at its core, an autonomous flower pot. The iPot Lite has two main functions: First, it is able to track and follow the location of brighter light sources (such as sun pouring in through a window) in order to give its plant maximum exposure. Secondly, when watering is required, the plant can seek out a fountain location using its camera system and drive over. We envision a system similar to a hand sanitizer dispenser, except filled with water.
## How we built it
The flower pot sits on a mobile base driven by two DC motors along with a caster wheel, which allow it to traverse a wide variety of surfaces. The majority of the prototype is built from wood. The processing is handled by an Arduino Uno (driving, edge detection, light detection) and a Raspberry Pi 4 (image recognition). Photoresistors mounted near the top of the pot allow for tracking light sources. Ultrasonic sensors (downwards and forward-facing) can track walls and the edges of surfaces (such as the side of a coffee table). A camera mounted on the front uses OpenCV to extract, isolate and recognize specific colors, which can be used to mark fountain locations.
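The light-tracking control logic mentioned above (and the "estimated direction of light vectors" described later) can be sketched as combining each photoresistor's brightness reading with its known mounting angle into one estimated heading. The angles and readings below are invented; the real logic runs on the Arduino:

```python
# Sketch of light-direction estimation: sum the sensors' unit direction
# vectors weighted by brightness, then take the heading of the result.
# Mounting angles and readings are illustrative.
import math

def light_direction(readings):
    """readings: {mount_angle_degrees: brightness}. Returns the heading
    (degrees, 0-360) of the brightness-weighted direction vector."""
    x = sum(b * math.cos(math.radians(a)) for a, b in readings.items())
    y = sum(b * math.sin(math.radians(a)) for a, b in readings.items())
    return math.degrees(math.atan2(y, x)) % 360

# Brightest reading on the 90° sensor, so the estimate points near 90°.
heading = light_direction({0: 200, 90: 800, 180: 210, 270: 190})
```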
## Challenges we ran into
We experienced major issues with camera implementation. The first camera we tried could not be powered by the Pi. The second camera was impractical to use, as it did not have a common interface. The third camera was found to be too slow (in terms of frame rate). The fourth camera was broken. We finally found a fully functional camera on our fifth attempt, after several hours of troubleshooting. Space limitations were also another concern - the footprint of the robot was quite small, and we realized that we didn't have time/resources to properly solder our circuits onto breakout boards. As such, we had to keep a breadboard which took up valuable space. Powering different components was also difficult because of a lack of cables and/or batteries. Finally, fabricating the physical robot was difficult since shop time was limited and our project contains a lot of hardware.
## Accomplishments that we're proud of
The two main functions of the iPot Lite as described are generally functional. We were quite pleased with how the computer vision system turned out, especially given the performance of a Raspberry Pi. We worked extremely hard to create control logic for tracking light sources (using estimated direction of light vectors), and the final implementation was very cool to see. The custom chassis and hardware were all built and integrated within the makeathon time - no earlier design work was performed at all. The final idea was only created on the morning of the event, so we are quite happy to have built a full robot in 24 hours.
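As a rough illustration of the light-tracking control logic (the real version ran in C on the Arduino), the direction estimate can be sketched as a brightness-weighted sum of each photoresistor's facing vector. The sensor angles and readings below are illustrative stand-ins, not our actual calibration:

```python
import math

# Each photoresistor faces a known angle around the pot; weight each facing
# unit vector by its brightness reading and sum to get a net light direction.
def light_direction(readings, angles_deg):
    """Return the estimated bearing (degrees) of the brightest light source."""
    x = sum(r * math.cos(math.radians(a)) for r, a in zip(readings, angles_deg))
    y = sum(r * math.sin(math.radians(a)) for r, a in zip(readings, angles_deg))
    return math.degrees(math.atan2(y, x)) % 360

# Four photoresistors at 0/90/180/270 degrees; the 90-degree sensor reads brightest,
# so the estimated bearing lands near 90 degrees.
bearing = light_direction([200, 800, 180, 210], [0, 90, 180, 270])
```

The robot then turns toward the estimated bearing and drives until the readings even out.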
## What we learned
Hardware selection is critical towards success. In our case, our choice to use a Raspberry Pi 4 as well as our specific set of chosen sensors greatly influenced the difficulties we faced. Since we chose to use a Raspberry Pi 4 with micro-HDMI output, we were not able to interface with the board without a headless SSH interface. The Raspberry Pi Camera worked remarkably well and without much difficulty after failing to integrate with all of IP webcams, USB Webcams, and the M5 Camera. We initially wanted to use the Huawei Atlas, but found that its power requirements were too high to even consider. We learned that every hardware choice greatly influenced the difficulties faced towards project completion, as well as the choice of other hardware components. As a result, integration and communication is incredibly important to discover incompatibilities where they exist in the initial designs, and the most successful strategy is to integrate early and iterate across all features simultaneously.
## What's next for iPot Lite
We really wanted to add wireless charging or solar panels for total self-sustainability, but unfortunately due to the limited parts selection we were unable to do so. We also wanted to create additional tools for things like data logging, reviewing and settings configuration. A mobile companion app would have been nice, but we didn't have time to implement it (could be used for tracking plant health). We toyed with additional ideas such as a security camera mode using motion detection, or neural-net based plant identification. Unfortunately due to time constraints we weren't able to fully explore these software features, but we were glad that we finished most of the hardware.
|
## Inspiration
Inspired by [SIU Carbondale's Green Roof](https://greenroof.siu.edu/siu-green-roof/), we wanted to create an automated garden watering system that would help address issues ranging from food deserts to lack of agricultural space to storm water runoff.
## What it does
This hardware solution takes in moisture data from soil and determines whether or not the plant needs to be watered. If the soil's moisture is too low, the valve will dispense water and the web server will display that water has been dispensed.
## How we built it
First, we tested the sensor and determined the boundaries between dry, damp, and wet based on the sensor's output values. Then, we converted those boundaries into percentage soil moisture. Specifically, the sensor measures the conductivity of the material around it, so water, being the most conductive, had the highest values, and air, being the least conductive, had the lowest. Soil falls in the middle, and the moisture ranges were defined by the pure-air and pure-water boundaries.
From there, we finalized the hardware setup: the sensor is connected to an Arduino UNO microcontroller, which is connected to a Raspberry Pi 4 controlling a solenoid valve that releases water when the soil moisture reading is below 40%.
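The calibration and watering decision described above can be sketched as follows. The `AIR_RAW` and `WATER_RAW` constants stand in for the raw readings we measured in pure air and pure water; the real boundary values came from our own sensor tests:

```python
AIR_RAW = 300      # assumed raw reading in dry air (least conductive)
WATER_RAW = 700    # assumed raw reading in water (most conductive)

def moisture_percent(raw):
    """Map a raw conductivity reading onto 0-100% soil moisture."""
    pct = (raw - AIR_RAW) / (WATER_RAW - AIR_RAW) * 100
    return max(0.0, min(100.0, pct))

def should_water(raw, threshold=40.0):
    """Open the solenoid valve when soil moisture falls below the threshold."""
    return moisture_percent(raw) < threshold
```

On the device, `should_water` is polled on a loop and drives the solenoid valve through the Raspberry Pi's GPIO pins.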
## Challenges we ran into
At first, we aimed too high. We wanted to incorporate weather data into our water dispensing system, but the information flow and JSON parsing were not cooperating with the Arduino IDE. We consulted with a mentor, Andre Abtahi, who helped us get a better perspective of our project scope. It was difficult to focus on what it meant to truly create a minimum viable product when we had so many ideas.
## Accomplishments that we're proud of
Even though our team is spread across the country (California, Washington, and Illinois), we were still able to create a functioning hardware hack. In addition, as beginners we are very excited about this hackathon's outcome.
## What we learned
We learned about wrangling APIs, how to work in a virtual hackathon, and project management. Upon reflection, creating a balance between feasibility, ability, and optimism is important for guiding the focus and motivations of a team. Being mindful about energy levels is especially important for long sprints like hackathons.
## What's next for Water Smarter
Lots of things! What's next for Water Smarter is weather-controlled water dispensing. Given humidity, precipitation, and other weather data, our water system will dispense more or less water. This adaptive water feature will save water and let nature pick up the slack. We would use the OpenWeatherMap API to gather the forecasted volume of rain, predict the potential soil moisture, and have the watering system dispense an adjusted amount of water to maintain correct soil moisture content.
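The adjustment itself could look something like the sketch below. The baseline volume and the rain-to-water conversion factor are assumptions for illustration, not measured values; in practice the forecast rain volume would come from the OpenWeatherMap forecast response:

```python
# Reduce the dispensed volume by the water nature is expected to supply.
# ml_per_mm is a hypothetical conversion from forecast rain depth (mm) to
# millilitres of water the pot's soil would receive.
def adjusted_water_ml(baseline_ml, forecast_rain_mm, ml_per_mm=20.0):
    return max(0.0, baseline_ml - forecast_rain_mm * ml_per_mm)
```

With heavy rain forecast, the dispensed volume drops to zero and the system lets the weather do the watering.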
In a future iteration of Water Smarter, we want to stretch the use of live geographic data even further by suggesting appropriate produce for which growing zone in the US, which will personalize the water conservation process. Not all plants are appropriate for all locations so we would want to provide the user with options for optimal planting. We can use software like ArcScene to look at sunlight exposure according to regional 3D topographical images and suggest planting locations and times.
We want our product to be user friendly, so we want to improve our aesthetics and show more information about soil moisture beyond just notifying water dispensing.
|
## Inspiration
It’s insane how the majority of the U.S. still looks like **endless freeways and suburban sprawl.** The majority of Americans can’t get a cup of coffee or go to the grocery store without a car.
What would America look like with cleaner air, walkable cities, green spaces, and effective routing from place to place that builds on infrastructure for active transport and micro mobility? This is the question we answer, and show, to urban planners, at an extremely granular street level.
Compared to most of Europe and Asia (where there are public transportation options, human-scale streets, and dense neighborhoods) the United States is light-years away ... but urban planners don’t have the tools right now to easily assess a specific area’s walkability.
**Here's why this is an urgent problem:** Current tools for urban planners don't provide *location-specific information*—they only provide arbitrary, high-level data overviews over a 50-mile or so radius about population density. Even though consumers see new tools, like Google Maps' area busyness bars, these are only bits and pieces of all the data an urban planner needs, like data on bike paths, bus stops, and the relationships between traffic and pedestrians.
As a result, there’s very few actionable improvements that urban planners can make. Moreover, because cities are physical spaces, planners cannot easily visualize what an improvement (e.g. adding a bike/bus lane, parks, open spaces) would look like in the existing context of that specific road. Many urban planners don’t have the resources or capability to fully immerse themselves in a new city, live like a resident, and understand the problems that residents face on a daily basis that prevent them from accessing public transport, active commutes, or even safe outdoor spaces.
There’s also been a significant rise in micro-mobility—usage of e-bikes (e.g. CityBike rental services) and scooters (especially on college campuses) are growing. Research studies have shown that access to public transport, safe walking areas, and micro mobility all contribute to greater access of opportunity and successful mixed income neighborhoods, which can raise entire generations out of poverty. To continue this movement and translate this into economic mobility, we have to ensure urban developers are **making space for car-alternatives** in their city planning. This means bike lanes, bus stops, plaza’s, well-lit sidewalks, and green space in the city.
These reasons are why our team created CityGO—a tool that helps urban planners understand their region’s walkability scores down to the **granular street intersection level** and **instantly visualize what a street would look like if it was actually walkable** using Open AI's CLIP and DALL-E image generation tools (e.g. “What would the street in front of Painted Ladies look like if there were 2 bike lanes installed?)”
We are extremely intentional about the unseen effects of walkability on social structures, the environment, and public health, and we are ecstatic to see the results:

1. Car-alternatives provide economic mobility as they give Americans alternatives to purchasing and maintaining cars that are cumbersome, unreliable, and extremely costly to maintain in dense urban areas. Having lower upfront costs also enables handicapped people, people that can't drive, and extremely young/old people to have the same access to opportunity and continue living high quality lives. This disproportionately benefits people in poverty, as children with access to public transport or farther walking routes also gain access to better education and food sources, and can meet friends/share the resources of other neighborhoods, which can have the **huge** impact of pulling communities out of poverty.
2. Placing bicycle lanes and barriers that protect cyclists from side traffic will encourage people to utilize micro mobility and active transport options. This is not possible if urban planners don't know where existing transport is or even recognize the outsized impact of increased bike lanes.
3. Finally, it's no surprise that transportation as a sector alone accounts for 27% of carbon emissions (US EPA) and is a massive safety issue that all citizens face every day. Our country's dependence on cars has been leading to deeper issues that affect basic safety, climate change, and economic mobility. The faster we take steps to mitigate this dependence, the more sustainable our lifestyles and Earth can be.
## What it does
TLDR:
1) Map that pulls together data on car traffic and congestion, pedestrian foot traffic, and bike parking opportunities. Heat maps that represent the density of foot traffic and location-specific interactive markers.
2) Google Map Street View API enables urban planners to see and move through live imagery of their site.
3) OpenAI CLIP and DALL-E are used to incorporate an uploaded image (taken from StreetView) and descriptor text embeddings to accurately provide a **hyper location-specific augmented image**.
The exact street venues that are unwalkable in a city are extremely difficult to pinpoint. There’s an insane amount of data that you have to consolidate to get a cohesive image of a city’s walkability state at every point—from car traffic congestion, pedestrian foot traffic, bike parking, and more.
Because cohesive data collection is extremely important to produce a well-nuanced understanding of a place’s walkability, our team incorporated a mix of geoJSON data formats and vector tiles (specific to MapBox API). There was a significant amount of unexpected “data wrangling” that came from this project since multiple formats from various sources had to be integrated with existing mapping software—however, it was a great exposure to real issues data analysts and urban planners have when trying to work with data.
There are three primary layers to our mapping software: traffic congestion, pedestrian traffic, and bicycle parking.
In order to get the exact traffic congestion per street, avenue, and boulevard in San Francisco, we utilized a data layer in MapBox API. We specified all possible locations within SF and made requests for geoJSON data that is represented through each Marker. Green stands for low congestion, yellow stands for average congestion, and red stands for high congestion. This data layer was classified as a vector tile in MapBox API.
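The marker colouring described above is a simple bucketing of the congestion value. The 0-1 congestion scale and the cut-offs below are illustrative; the real values came from the Mapbox traffic vector tiles:

```python
# Bucket a normalized congestion value into the three marker colours used on
# the map. Thresholds are assumptions for illustration.
def congestion_color(level):
    if level < 0.33:
        return "green"   # low congestion
    if level < 0.66:
        return "yellow"  # average congestion
    return "red"         # high congestion
```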
Consolidating pedestrian foot traffic data was an interesting task to handle since this data is heavily locked in enterprise software tools. There are existing open source data sets that are posted by regional governments, but none of them are specific enough that they can produce 20+ or so heat maps of high foot traffic areas in a 15 mile radius. Thus, we utilized Best Time API to index for a diverse range of locations (e.g. restaurants, bars, activities, tourist spots, etc.) so our heat maps would not be biased towards a certain style of venue to capture information relevant to all audiences. We then cross-validated that data with Walk Score (the most trusted site for gaining walkability scores on specific addresses). We then ranked these areas and rendered heat maps on MapBox to showcase density.
San Francisco’s government open sources extremely useful data on all of the locations for bike parking installed in the past few years. We ensured that the data has been well maintained and preserved its quality over the past few years so we don’t over/underrepresent certain areas more than others. This was enforced by recent updates in the past 2 months that deemed the data accurate, so we added the geographic data as a new layer on our app. Each bike parking spot installed by the SF government is represented by a little bike icon on the map!
**The most valuable feature** is the user can navigate to any location and prompt CityGo to produce a hyper realistic augmented image resembling that location with added infrastructure improvements to make the area more walkable. Seeing the StreetView of that location, which you can move around and see real time information, and being able to envision the end product is the final bridge to an urban developer’s planning process, ensuring that walkability is within our near future.
## How we built it
We utilized the React framework to organize our project’s state variables, components, and state transfer. We also used it to build custom components, like the one that conditionally renders a live panoramic street view of the given location or render information retrieved from various data entry points.
To create the map on the left, our team used MapBox’s API to style the map and integrate the heat map visualizations with existing data sources. In order to create the markers that corresponded to specific geometric coordinates, we utilized Mapbox GL JS (their specific Javascript library) and third-party React libraries.
To create the Google Maps Panoramic Street View, we integrated our backend geometric coordinates with Google Maps' API so there could be an individual rendering of each location. We supplemented this with third-party React libraries for better error handling, feasibility, and visual appeal. The panoramic street view was extremely important for us to include because urban planners need context on spatial configurations to develop designs that integrate well into the existing communities.
We created a custom function and used the required HTTP route (in PHP) to grab data from the Walk Score API with our JSON server so it could provide specific Walkability Scores for every marker in our map.
Text Generation from OpenAI’s text completion API was used to produce location-specific suggestions on walkability. Whatever marker a user clicked, the address was plugged in as a variable to a prompt that lists out 5 suggestions that are specific to that place within a 500-feet radius. This process opened us to the difficulties and rewarding aspects of prompt engineering, enabling us to get more actionable and location-specific than the generic alternative.
Additionally, we give the user the option to generate a potential view of the area with optimal walkability conditions using a variety of OpenAI models. We have created our own API using the Flask API development framework for Google Street View analysis and optimal scene generation.
**Here’s how we were able to get true image generation to work:** When the user prompts the website for a more walkable version of the current location, we grab an image of the Google Street View and implement our own architecture using OpenAI contrastive language-image pre-training (CLIP) image and text encoders to encode both the image and a variety of potential descriptions describing the traffic, use of public transport, and pedestrian walkways present within the image. The outputted embeddings for both the image and the bodies of text were then compared with each other using scaled cosine similarity to output similarity scores. We then tag the image with the necessary descriptors, like classifiers,—this is our way of making the system understand the semantic meaning behind the image and prompt potential changes based on very specific street views (e.g. the Painted Ladies in San Francisco might have a high walkability score via the Walk Score API, but could potentially need larger sidewalks to further improve transport and the ability to travel in that region of SF). This is significantly more accurate than simply using DALL-E’s set image generation parameters with an unspecific prompt based purely on the walkability score because we are incorporating both the uploaded image for context and descriptor text embeddings to accurately provide a hyper location-specific augmented image.
A descriptive prompt is constructed from this semantic image analysis and fed into DALLE, a diffusion based image generation model conditioned on textual descriptors. The resulting images are higher quality, as they preserve structural integrity to resemble the real world, and effectively implement the necessary changes to make specific locations optimal for travel.
We used Tailwind CSS to style our components.
## Challenges we ran into
There were existing data bottlenecks, especially with getting accurate, granular, pedestrian foot traffic data.
The main challenge we ran into was integrating the necessary Open AI models + API routes. Creating a fast, seamless pipeline that provided the user with as much mobility and autonomy as possible required that we make use of not just the Walk Score API, but also map and geographical information from maps + google street view.
Processing both image and textual information pushed us to explore using the CLIP pre-trained text and image encoders to create semantically rich embeddings which can be used to relate ideas and objects present within the image to textual descriptions.
## Accomplishments that we're proud of
We could have done just normal image generation but we were able to detect car, people, and public transit concentration existing in an image, assign that to a numerical score, and then match that with a hyper-specific prompt that was generating an image based off of that information. This enabled us to make our own metrics for a given scene; we wonder how this model can be used in the real world to speed up or completely automate the data collection pipeline for local governments.
## What we learned and what's next for CityGO
Utilizing multiple data formats and sources to cohesively show up in the map + provide accurate suggestions for walkability improvement was important to us because data is the backbone for this idea. Properly processing the right pieces of data at the right step in the system process and presenting the proper results to the user was of utmost importance. We definitely learned a lot about keeping data lightweight, easily transferring between third-party softwares, and finding relationships between different types of data to synthesize a proper output.
We also learned quite a bit by implementing Open AI’s CLIP image and text encoders for semantic tagging of images with specific textual descriptions describing car, public transit, and people/people crosswalk concentrations. It was important for us to plan out a system architecture that effectively utilized advanced technologies for a seamless end-to-end pipeline. We learned about how information abstraction (i.e. converting between images and text and finding relationships between them via embeddings) can play to our advantage and utilizing different artificially intelligent models for intermediate processing.
In the future, we plan on integrating a better visualization tool to produce more realistic renders and introduce an inpainting feature so that users have the freedom to select a specific view on street view and be given recommendations + implement very specific changes incrementally. We hope that this will allow urban planners to more effectively implement design changes to urban spaces by receiving an immediate visual + seeing how a specific change seamlessly integrates with the rest of the environment.
Additionally we hope to do a neural radiance field (NERF) integration with the produced “optimal” scenes to give the user the freedom to navigate through the environment within the NERF to visualize the change (e.g. adding a bike lane or expanding a sidewalk or shifting the build site for a building). A potential virtual reality platform would provide an immersive experience for urban planners to effectively receive AI-powered layout recommendations and instantly visualize them.
Our ultimate goal is to integrate an asset library and use NERF-based 3D asset generation to allow planners to generate realistic and interactive 3D renders of locations with AI-assisted changes to improve walkability. One end-to-end pipeline for visualizing an area, identifying potential changes, visualizing said changes using image generation + 3D scene editing/construction, and quickly iterating through different design cycles to create an optimal solution for a specific locations’ walkability as efficiently as possible!
|
partial
|
## Inspiration
We are students from Vanderbilt University, Nashville, TN.
**Nashville, aka Music City,** is also one of the most dangerous cities in the US, ranking #15 last year with a violent crime rate of 1,101 per 100,000 people. 10 days ago, we went to a hookah bar very close to Vanderbilt. We did not stay there for very long, but two security guards got shot at that bar the very same night. When we looked it up, we found that there had been previous instances of similar incidents in the same area, and we had no idea about it.
This freaked us out. How could we enjoy the city safely? How could we avoid the unsafe areas?
The fact is, a lot of cities in the US are not safe to walk in. How do we solve this problem?
As data science students, we wanted to find a data-driven solution that can help people know when they are walking through an area with violent crime.
This year, our theme is connecting the dots. When we think about **connecting the dots**, daily navigation comes to mind. Is it possible to use crime event data to build a navigation application and help people know if they are about to walk through a particularly unsafe area?
## What it does
This application will show you areas with violent crimes and help you navigate around them. It can also help you know if you are in a particularly crime-heavy area.
A lot of useful information such as Vanderbilt University Police Department contact info and nearest hospital to your location are provided as well.
## How we built it
1. We created data pipelines using python to extract Nashville open crime incident data, clean the data and add the features.
2. We use google maps api to help us with the navigation part and we push it to the flask.
3. We use Flutter to develop the mobile application.
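Step 1 of the pipeline can be sketched roughly as below: filter the raw incident records to violent offenses and round coordinates into grid cells for the map overlay. The field names and offense labels are assumptions for illustration, not the exact columns in the Nashville open data schema:

```python
# Hypothetical subset of offense labels we would treat as violent.
VIOLENT = {"ASSAULT", "ROBBERY", "HOMICIDE", "WEAPON"}

def clean_incidents(records):
    """Keep violent incidents and bucket them into ~100 m grid cells."""
    cells = {}
    for rec in records:
        if rec.get("offense") in VIOLENT and "lat" in rec and "lon" in rec:
            # Rounding to 3 decimal places groups nearby incidents together.
            cell = (round(rec["lat"], 3), round(rec["lon"], 3))
            cells[cell] = cells.get(cell, 0) + 1
    return cells
```

The resulting cell counts are what the app uses to mark crime-heavy areas along a route.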
## Challenges we ran into
1. No experience in phone app development.
2. No experience with using google maps api.
## Accomplishments that we're proud of
Finishing the application to a decent extent.
## What we learned
Phone app development and building pipelines between the different components of a software system.
## What's next for Walkbuddy
1. Route optimization calculation
2. Live location
3. Location Sharing
4. Safety scores of different paths
### References:
[Two Security Guards Got Shot At Nashville Hookah Bar](https://www.wsmv.com/news/two-security-guards-shot-at-nashville-hookah-bar-overnight/article_0150204a-edcf-11e9-bffe-5fa2003f5895.html)
[World's Most Dangerous Cities](https://www.worldatlas.com/articles/most-dangerous-cities-in-the-united-states.html)
|
## Inspiration
The recent surge in criminal activity on university grounds inspired our team to develop a solution to tackle student concerns. We aimed to empower students with a tool that would allow them to quickly request for assistance from peers in case of an emergency, fostering a safer atmosphere on campus.
## What it does
Our app, SafeSpace, equips users with an emergency button, which when pressed, sends out an SOS to other nearby users to request immediate assistance. In these life-or-death emergency situations, a quick response time is crucial in preventing escalation of the situation. Rather than having to wait for the arrival of 911 services or campus police, students in danger can get assistance promptly from nearby peers. The alert pins the location of the vulnerable student on a map, allowing other students to easily locate and respond to the threat. In addition, a future revision will allow students to see heat maps of criminal occurrences on campus, helping them avoid dangerous areas.
## How we built it
We employed Google's Flutter development kit to create the application. Flutter provides developers with a variety of libraries, including the Firebase Messaging API, which we used to send push notifications. When the user launches the app for the first time, we make a POST request to the back-end server and store the device's token. If that user ever uses the button, we identify their location and we make another POST request to the server, providing data that can be used to locate that user. We then make an API call to the firebase messaging API and send a push notification to all nearby devices. These notifications will contain the location, which we then use to pin the location on a map (provided by the Google Maps API). The back-end was developed using Python and a Flask server to handle the POST requests and store the data.
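The "notify nearby devices" step described above boils down to: given the requester's location and the stored `(token, lat, lon)` records from the registration POSTs, pick the tokens within some radius and push to them via Firebase. The 1 km radius is an assumption for illustration; the sketch below shows only the distance filter, not the Firebase call itself:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def nearby_tokens(devices, lat, lon, radius_km=1.0):
    """Return the device tokens registered within radius_km of the SOS location."""
    return [t for t, dlat, dlon in devices
            if haversine_km(lat, lon, dlat, dlon) <= radius_km]
```

The returned token list is what the backend would hand to the Firebase messaging API, along with the pinned location.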
## Challenges we faced
One of our greatest hurdles was the integration of the Firebase Cloud Messaging API with both our front and back ends. Firstly, sending push notifications to devices through the Firebase API was initially very sluggish, taking up to minutes to send. This rendered the app virtually useless, as users would not be able to get help quickly enough. Thus, we explored alternative means of sending HTTP requests to Firebase from Flutter and ultimately found a way to send alerts instantaneously. Next, ensuring that Flask & Python were well integrated with Firebase was another big hurdle. Despite using the default python requests library correctly and receiving successful codes, Firebase would simply refuse to send push notifications to the Flutter app. After meticulous debugging and exploring, we finally discovered and implemented the firebase\_admin library which solved the issue completely.
## Accomplishments that we're proud of
We are proud of our implementation of the Google Maps API to provide users with a pinned live location of the student in danger. Requesting students will now be provided with aid quickly and intuitively. This was a difficult and tedious task to undertake. Nevertheless, as one of our team values, we prioritized user safety and usability over all else. All in all, we take the greatest amount of pride in having developed a functional and impactful app that directly addresses the safety concerns of students. By fostering a peer-to-peer support system to fight crimes, we promote security and a sense of community on campus.
## What's next for SafeSpace
Firstly, we plan to add a heat map to the app, which highlights the locations of past criminal occurrences. This will deter users from entering a dangerous area of the campus and give valuable insight to campus police, who can further investigate and improve these dangerous hotspots. Indeed, we will work with university authorities and campus security to add more features curated to their respective campus locations.
Moreover, in addition to providing users with the victim's latitudinal and longitudinal positions, we will add altitude to better describe their location in buildings, where it may be difficult to discern which floor the victim is on.
|
## Inspiration
The vicarious experiences of friends, and some of our own, immediately made clear the potential public-safety benefit of the City of London's dataset. We felt inspired to use our skills to make this data more accessible and improve confidence for those travelling alone at night.
## What it does
By factoring in the location of street lights, and greater presence of traffic, safeWalk intuitively presents the safest options for reaching your destination within the City of London. Guiding people along routes where they will avoid unlit areas, and are likely to walk beside other well-meaning citizens, the application can instill confidence for travellers and positively impact public safety.
## How we built it
There were three main tasks in our build.
1) Frontend:
Chosen for its flexibility and API availability, we used ReactJS to create a mobile-to-desktop scaling UI. Making heavy use of the available customization and data presentation in the Google Maps API, we were able to achieve a cohesive colour theme, and clearly present ideal routes and streetlight density.
2) Backend:
We used Flask with Python to create a backend that we used as a proxy for connecting to the Google Maps Direction API and ranking the safety of each route. This was done because we had more experience as a team with Python and we believed the Data Processing would be easier with Python.
3) Data Processing:
After querying the appropriate dataset from London Open Data, we had to create an algorithm to determine the “safest” route based on streetlight density. This was done by partitioning each route into subsections, determining a suitable geofence for each subsection, and then storing each light in the geofence. Then, we determined the total number of lights per km to calculate an approximate safety rating.
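A simplified sketch of that rating follows: each subsection's geofence is reduced to a bounding box, lights falling inside any box are counted, and the route is scored by lights per km. The box-shaped geofence and the sample coordinates are simplifications for illustration, not what we actually computed:

```python
def lights_in_fence(lights, lat_min, lat_max, lon_min, lon_max):
    """Count streetlights inside one rectangular geofence."""
    return sum(1 for lat, lon in lights
               if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max)

def safety_rating(route_km, subsections, lights):
    """Total lights inside all subsection geofences, per km of route."""
    total = sum(lights_in_fence(lights, *fence) for fence in subsections)
    return total / route_km
```

Routes returned by the Directions API are then ranked by this rating, and the best-lit option is highlighted in the UI.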
## Challenges we ran into:
1) Frontend/Backend Connection:
Connecting the frontend and backend of our project together via RESTful API was a challenge. It took some time because we had no experience with using CORS with a Flask API.
2) React Framework
None of the team members had experience in React, and only limited experience in JavaScript. Every feature implementation took a great deal of trial and error as we learned the framework, and developed the tools to tackle front-end development. Once concepts were learned however, it was very simple to refine.
3) Data Processing Algorithms
It took some time to develop an algorithm that could handle our edge cases appropriately. At first, we thought we could build a graph with weighted edges to determine the safest path, but edge cases such as handling intersections properly and considering lights on either side of the road led us to dismiss the graph approach.
## Accomplishments that we are proud of
Throughout our experience at Hack Western, although we encountered challenges, through dedication and perseverance we made multiple accomplishments. As a whole, the team was proud of the technical skills developed while learning the React framework, data analysis, and web development. In addition, the teamwork, organization, and team spirit required to complete the project in a timely manner were great achievements.
From the perspective of the hack developed, and the limited knowledge of the React Framework, we were proud of the sleek UI design that we created. In addition, the overall system design lent itself well towards algorithm protection and process off-loading when utilizing a separate back-end and front-end.
Overall, although a challenging experience, the hackathon allowed the team to reach accomplishments of new heights.
## What we learned
For this project, we learned a lot more about React as a framework and how to leverage it to make a functional UI. Furthermore, we refined our web-based design skills by building both a frontend and backend while also using external APIs.
## What's next for safewalk.io
In the future, we would like to be able to add more safety factors to safewalk.io. We foresee factors such as:
* Crime rate
* Pedestrian accident rate
* Traffic density
* Road type
|
losing
|
## Inspiration
We are a team of engineering science students with backgrounds in mathematics, physics and computer science. A common passion for the implementation of mathematical methods in innovative computing contexts and the application of these technologies to physical phenomena motivated us to create this project [Parallel Fourier Computing].
## What it does
Our project is a Discrete Fourier Transform [DFT] algorithm implemented in JavaScript for sinusoid spectral decomposition with explicit support for parallel computing task distribution. This algorithm is called by a web page front-end that allows a user to program the frequency/periodicity of a sum of three sinusoids, see this function on a graphical figure, and to calculate and display the resultant DFT for this sinusoid. The program successfully identifies the constituent fundamental frequencies of a sum of three sinusoids by use of this DFT.
## How We built it
This project was built in parallel, with team members working simultaneously on DCL integration, the web page front end, and algorithm writing. The DFT algorithm was initially prototyped in Python before being ported over to JavaScript for integration with the DCL network. We tested our algorithm across a wide range of frequencies and sampling rates within the range of human hearing. All team members contributed to component integration towards the end of the project, ensuring compliance with the DCL method of task distribution.
## Challenges We ran into
Though our team has an educational background in Fourier analysis, we were unfamiliar with the workflows and utilities of parallel computing systems. We were principally concerned with (1) how we can fundamentally divide the job of computing a Discrete Fourier Transform into a set of sequentially uncoupled tasks for parallel processing, and (2) how we implement such an algorithm design in the JavaScript foundation that DCL relies on. Initially, our team struggled to define clearly independent computing tasks that we could offload to parallel processing units to speed up our algorithm. We overcame this challenge when we realized that we could produce analytic functions for any partial sum term in our series and pass these exact functions off for processing in parallel. One challenge we faced when adapting our code to the task distribution method of the DCL system was writing a work function that was entirely self-contained without a dependence on external libraries or extraneously long procedural logic. To avoid library dependency, we wrote our own procedural logic to handle the complex number arithmetic that's needed for a Discrete Fourier Transform.
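The core idea — each DFT bin is an independent partial sum that needs only real arithmetic and no external libraries, so individual bins can be shipped to separate workers — can be sketched as follows (a Python illustration of the approach; the actual project is written in JavaScript):

```python
import math

def dft_bin(samples, k):
    """Compute the k-th DFT bin using only real arithmetic — no complex
    type, no libraries — so each bin is a self-contained work unit that
    could be distributed to a separate worker."""
    n = len(samples)
    re = im = 0.0
    for t, x in enumerate(samples):
        angle = -2.0 * math.pi * k * t / n
        re += x * math.cos(angle)
        im += x * math.sin(angle)
    return re, im

# A 64-sample cosine with 8 full cycles; its magnitude spectrum peaks at bin 8.
N = 64
signal = [math.cos(2 * math.pi * 8 * t / N) for t in range(N)]
mags = [math.hypot(*dft_bin(signal, k)) for k in range(N // 2)]
peak = max(range(N // 2), key=lambda k: mags[k])
```

Because `dft_bin` depends only on the sample array and the bin index, a scheduler can hand out disjoint ranges of `k` to workers and merge the results.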
## Accomplishments that We're proud of
Our team successfully wrote a Discrete Fourier Transform algorithm designed for parallel computing uses. We encoded custom complex number arithmetic operations into a self-contained JavaScript function. We have integrated our algorithm with the DCL task scheduler and built a web page front end with interactive controls to program sinusoid functions and to graph these functions and their Discrete Fourier Transforms. Our algorithm can successfully decompose a sum of sinusoids into its constituent frequency components.
## What We learned
Our team learned about some of the constraints that task distribution in a parallel computing network can have on the procedural logic used in task definitions. Not having access to external JavaScript libraries, for example, required custom encoding of complex number arithmetic operations needed to compute DFT terms. Our team also learned more about how DFTs can be used to decompose musical chords into its fundamental pitches.
## What's next for Parallel Fourier Computing
Next steps for our project in the back-end are to optimize the algorithm to decrease the computation time. On the front-end we would like to increase the utility of the application by allowing the user to play a note and have the algorithm determine the pitches used in making the note.
#### Domain.com submission
Our domain name is <http://parallelfouriercomputing.tech/>
#### Team Information
Team 3: Jordan Curnew, Benjamin Beggs, Philip Basaric
|
## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that is automatically played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, each of which produces different images for the same audio input.
## How we built it:
Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began this process by applying Firmata onto our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino, we used Python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours to certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value recorded by the Arduino.
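A toy version of the voltage-to-shape/colour mapping described above — the thresholds, theme names, and palettes here are our own illustration, not Voltify's actual values:

```python
# Illustrative themes: each maps a band of the 0–5 V sensor range to a
# (shape, RGB colour) pair. Real themes would have many more entries.
THEMES = {
    "abstract": [("circle", (255, 0, 0)),
                 ("square", (0, 255, 0)),
                 ("triangle", (0, 0, 255))],
    "impressionist": [("blob", (250, 220, 180)),
                      ("stroke", (120, 160, 210)),
                      ("dot", (90, 130, 90))],
}

def voltage_to_visual(voltage, theme="abstract", v_max=5.0):
    """Bucket a 0..v_max reading into one of the theme's shape/colour pairs."""
    shapes = THEMES[theme]
    idx = min(int(voltage / v_max * len(shapes)), len(shapes) - 1)
    return shapes[idx]
```

A drawing loop would then call something like `pygame.draw` with the returned shape and colour for each reading in the array.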
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks, where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in python, getting the sound sensor to work, as well as learning how to work with the pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front end app design. The next step would to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece. We would also want to make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces.
|
# TextMemoirs
## nwHacks 2023 Submission by Victor Parangue, Kareem El-Wishahy, Parmvir Shergill, and Daniel Lee
Journalling is a well-established practice that has been shown to have many benefits for mental health and overall well-being. Some of the main benefits of journalling include the ability to reflect on one's thoughts and emotions, reduce stress and anxiety, and document progress and growth. By writing down our experiences and feelings, we are able to process and understand them better, which can lead to greater self-awareness and insight. We can track our personal development, and identify the patterns and triggers that may be contributing to our stress and anxiety. Journalling is a practice that everyone can benefit from.
Text Memoirs is designed to make the benefits of journalling easy and accessible to everyone. By using a mobile text-message based journaling format, users can document their thoughts and feelings in a real-time sequential journal, as they go about their day.
Simply text your assigned number, and your journal entry is saved to our database. Your journal entries are then displayed on our web app GUI, where you can view all entries for any given day.
You can also text commands to your assigned number using /EDIT and /DELETE to update your text journal entries on the database and the GUI (see the image gallery).
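One way the /EDIT and /DELETE commands might be parsed out of an incoming SMS body — the exact message formats below are assumptions for illustration, not the project's confirmed syntax:

```python
def parse_message(body):
    """Classify an incoming SMS as an edit, delete, or plain journal entry.

    Assumed formats (illustrative): "/EDIT <id> <new text>" and
    "/DELETE <id>"; anything else is stored as a new entry.
    Returns a (action, text_id, text) tuple.
    """
    if body.startswith("/EDIT "):
        _, text_id, new_text = body.split(" ", 2)
        return ("edit", int(text_id), new_text)
    if body.startswith("/DELETE "):
        _, text_id = body.split(" ", 1)
        return ("delete", int(text_id), None)
    return ("add", None, body)
```

The webhook handler would then dispatch each tuple to the matching database operation.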
Text Memoirs utilizes Twilio’s API to receive and store user’s text messages in a CockroachDB database. The frontend interface for viewing a user’s daily journals is built using Flutter.
# TextMemoirs API
This API allows you to insert users, get all users, add texts, get texts by user and day, delete texts by id, get all texts and edit texts by id.
## Endpoints
### Insert User
Insert a user into the system.
* Method: **POST**
* URL: `/insertUser`
* Body:

  ```json
  {
    "phoneNumber": "+17707626118",
    "userName": "Test User",
    "password": "Test Password"
  }
  ```
### Get Users
Get all users in the system.
* Method: **GET**
* URL: `/getUsers`
### Add Text
Add a text to the system for a specific user.
* Method: **POST**
* URL: `/addText`
* Body:

  ```json
  {
    "phoneNumber": "+17707626118",
    "textMessage": "Text message #3",
    "creationDate": "1/21/2023",
    "creationTime": "2:57:14 PM"
  }
  ```
### Get Texts By User And Day
Get all texts for a specific user and day.
* Method: **GET**
* URL: `/getTextsByUserAndDay`
* Parameters:
+ phoneNumber: The phone number of the user.
+ creationDate: The date of the texts in the format `MM/DD/YYYY`.
### Delete Texts By ID
Delete a specific text by ID.
* Method: **DELETE**
* URL: `/deleteTextsById`
* Body:

  ```json
  {
    "textId": 3
  }
  ```
### Edit Texts By ID
Edit a specific text by ID.
* Method: **PUT**
* URL: `/editTextsById`
* Parameters:
+ id: The ID of the text to edit.
* Body:

  ```json
  {
    "textId": 2,
    "textMessage": "Updated text message"
  }
  ```
### Get All Texts
Get all texts in the database.
* Method: **GET**
* URL: `/getAllTexts`
|
winning
|
## Inspiration
Our project was inspired by the everyday challenges patients face, from remembering to take their medication to feeling isolated in their health journey. We saw the need for a solution that could do more than just manage symptoms—it needed to support patients emotionally, help prevent medication mistakes, and foster a sense of community. By using AI and creating a space where patients can connect with others in similar situations, we aim to improve not only their health outcomes but also their overall well-being.
## What it does
Our project helps patients stay on track with their medication by using Apollo, our assistant that reminds them and tracks how they're feeling. It keeps a journal of their mood, sentiment, and actions, which can be shared with healthcare providers for better diagnosis and treatment. Users can also connect with others going through similar challenges, forming a supportive community. Beyond that, the platform helps prevent errors with prescriptions and medication, answers questions about their meds, and encourages patients to take an active role in their care—leading to more accurate diagnoses and reducing their financial burden.
## How we built it
We built multiple components so that everyone could benefit from our voice assistant system. Our voice assistant, Apollo, reads the user's transcript using OCR and then stores it in a DB for future retrieval. The voice assistant then understands the user and talks to them so that it can obtain information while consoling them. We achieved this by building a sophisticated pipeline involving an STT, text processing, and TTS layer.
After the conversation is done, notes are made from the transcript and summarized using our LLM agents, which are then again stored in the database. Artemis helps the user connect with other individuals who have gone through similar problems by using a sophisticated RAG pipeline utilizing LangChain.
Our Emergency Pipeline understands the user's problem by using a voice channel powered by React Native, evaluates the issue, and answers it by using another RAG-centric approach. Finally, for each interaction, a sentiment analysis is done using the RoBERTa Large Model, maintaining records of the patient's behavior, activities, mood, etc., in an encrypted manner for future reference by both the user and their associated practitioners.
To make our system accessible, we developed both a React web application and a React Native mobile app. The web app provides a comprehensive interface for users to interact with our voice assistant system from their desktop browsers, offering full functionality and easy access to conversation history, summaries, and connections made through Artemis.
The React Native mobile app brings the power of our voice assistant to users' smartphones, allowing them to seek help easily in case of an emergency.
## Challenges we ran into
One of the key challenges we faced was ensuring the usability of the system. We wanted to create an intuitive experience that would be easy for users to navigate, especially during moments of mental distress. Designing a UI that is both simple and effective was difficult, as we had to strike the right balance between offering powerful features and avoiding overwhelming the user with too much information or complexity.
## Accomplishments that we're proud of
One of the biggest accomplishments we’re proud of is how accessible and user-friendly our project is. We’ve built an AI-powered platform that makes managing health easier for everyone, including those who may not have easy access to healthcare. By integrating features like medication reminders, mood and sentiment tracking, and a supportive community, we’ve created a tool that’s inclusive and intuitive. Our platform bridges the gap for those who may struggle with traditional healthcare systems, offering real-time answers to medication questions and preventing errors, all while fostering patient engagement. This combination of accessibility and smart features empowers users to take control of their health in a meaningful way, ensuring patient safety.
## What we learned
Throughout this project, we gained valuable experience working with new APIs that we had never used before, which expanded our technical skills and allowed us to implement features more effectively. We also learned how to better manage project progress by setting clear goals, collaborating efficiently, and adapting to challenges as they arose. This taught us the importance of flexibility and communication within the team, helping us stay on track and deliver a functional product within the tight timeframe of the hackathon.
## What's next for SafeSpace
In the future, we plan to enhance the platform with a strong focus on patient safety by integrating a feature that checks drug interactions when a prescription is provided by a doctor, ensuring the well-being of patients and preventing harmful combinations. Additionally, we aim to implement anti-hallucination measures to prevent false diagnoses, safeguarding the accuracy of the assistant’s recommendations and promoting patient safety. To further protect users, we will incorporate robust encryption techniques to securely manage and store sensitive data, ensuring the highest level of privacy and security for patient information.
|
## Inspiration
Our team really considered the theme of **Connectivity**, and what it feels like to be connected. That got us to thinking about games we used to be able to play that involved contact such as Tag, Assassin, etc. We decided to see if we could create an **upgraded** spin on these games that would be timelessly fun, yet could also adhere to modern social-distancing guidelines.
## What it does
CameraShy is a free-for-all game, where the objective of the player is to travel within the designated geo-field, looking for other players, yet hiding from them as well. When they find a player, their goal is to snap a picture of them within the app, which acts as a "tagging" mechanism. The image is then compared against images of the players' faces, and if it is a match then the player who took the image gains a point, and the unsuspecting victim is eliminated from the competition. The last player standing, wins. Players themselves can create an arena and customize the location, size, game length, and player limit, sending a unique code to their friends to join.
## How we built it
CameraShy is separated into two main portions - the application itself, and the backend database.
### **Application**
We used Swift with both the UIKit and SwiftUI frameworks to build the application frontend. This includes all UI and UX. The application handles the creation of games, joining of games, taking of pictures, location handling, notification receiving, and any other data that needs to be sent to or from the backend. To authenticate users and ensure privacy, we utilized Apple's *Sign in With Apple*, which anonymizes the users' information, only giving us an email, which may be masked by Apple based on the user's choice.
### **Server**
We used MongoDB for our database with Node as our backend language. With it, we centralized our ongoing games, sent updates on player locations, arena location, arena boundary, time left, players list, and much more. When a user creates an account, their image is stored on the database with a unique identifier. During a game, when an image of a player is uploaded it is quickly put through Azure's Facial Recognition API, using the previously uploaded player images as reference to identify who was in the shot, if anyone was. We are proud to say that this also works with mask wearers. Finally, the server sends notifications to devices based on if they won/lost/left the arena and forfeited the game.
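The tagging rules described above can be sketched as a small game-state update once facial recognition has returned a match — this is an illustrative model in Python, not the actual Node backend:

```python
def resolve_tag(game, shooter, victim):
    """Award a point to the shooter and eliminate the identified victim.

    `game` is a dict with an "active" player list and a "scores" map;
    this mirrors the described rules, not the real server code.
    Returns the winner's name once one player remains, else None.
    """
    if victim is None or victim not in game["active"] or shooter == victim:
        return None  # no face matched, victim already out, or a selfie
    game["active"].remove(victim)
    game["scores"][shooter] = game["scores"].get(shooter, 0) + 1
    return game["active"][0] if len(game["active"]) == 1 else None
```

In the real system, `victim` would come from the Azure Facial Recognition result for the uploaded snapshot.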
## Challenges we ran into
Taking on a decently sized project like this, we were bound to run into challenges (especially with 3/4 of us being first-time Hackers!). Here are a few of the challenges we ran into:
### 1. HTTPS Confirmation
We had issues with our database that set us back a few hours, but everyone pitched in (frontend devs as well) to figure out why our database would not register with an HTTPS certificate, and we found a way around it.
### 2. Different Swift Languages
While UIKit and SwiftUI are both Apple frameworks for Apple devices, they are very different in nature. UIKit relies on storyboarding and is mostly imperative, whereas SwiftUI takes a different, declarative approach. With one front-end developer using UIKit and the other SwiftUI, it was difficult to merge views and connect features properly, but together we each learnt a bit of the other's framework in the process.
### 3. Facial Recognition with Masks
As anyone with a device that utilizes Facial Identification might know, Facial Recognition with a mask on can be difficult. We underwent numerous tests to figure out if it was even possible to utilize facial recognition with a mask, and figured out workarounds in order to do so properly.
## Accomplishments that we're proud of
One accomplishment we're proud of is being able to utilize multiple endpoints and APIs together seamlessly. At the beginning we were wary of dealing with so much data (geographical location, player counts, time, notification IDs, Apple unique identifiers, images, facial recognition, and more!), but looking back from where we are now, we are glad we took the risk, as our product is so much better as a result.
Another noteworthy accomplishment is our fluid transitions between SwiftUI and Swift. As previously mentioned, this was not a simple task, and we're very happy with how the product turned out.
## What we learned
As we overcame our challenges and embarked on our first Hackathon, the most important thing we learnt was that working as a team on a project does not necessarily mean each person has their own role. In many cases, we had to work together to help each other out, thinking of ideas for areas that were not our expertise. As a result, we were able to learn new tools and ways of doing things, as well as dabble in different coding languages.
## What's next for CameraShy - The World is Your Playground
Our next steps for CameraShy is to embrace user and game customizability. We would like to create a user-oriented settings view that allows them to clear their data off our servers themselves, reset their accounts, and more.
In terms of game customizability, what we have now is just the beginning. We have a long list of potential features including geographic neutral zones and bigger game arenas. Most importantly for us, however, is to continue fine-tuning what we've built so far before we go ahead and implement something new.
|
## Inspiration
When attending crowded lectures or tutorials, it's fairly difficult to discern content from other ambient noise. What if a streamlined pipeline existed to isolate and amplify vocal audio while transcribing text from audio, and providing general context in the form of images? Is there any way to use this technology to minimize access to education? These were the questions we asked ourselves when deciding the scope of our project.
## The Stack
**Front-end :** react-native
**Back-end :** python, flask, sqlalchemy, sqlite
**AI + Pipelining Tech :** OpenAI, google-speech-recognition, scipy, asteroid, RNNs, NLP
## What it does
We built a mobile app which allows users to record video and applies an AI-powered audio processing pipeline.
**Primary use case:** Hard of hearing aid which:
1. Isolates + amplifies sound from a recorded video (pre-trained RNN model)
2. Transcribes text from isolated audio (google-speech-recognition)
3. Generates NLP context from transcription (NLP model)
4. Generates an associated image to topic being discussed (OpenAI API)
## How we built it
* Frameworked UI on Figma
* Started building UI using react-native
* Researched models for implementation
* Implemented neural networks and APIs
* Testing, Testing, Testing
## Challenges we ran into
Choosing the optimal model for each processing step required careful planning. Algorithm design was also important, as responses had to be sent back to the mobile device as quickly as possible to keep the app usable.
## Accomplishments that we're proud of + What we learned
* Very high accuracy achieved for transcription, NLP context, and .wav isolation
* Efficient UI development
* Effective use of each team member's strengths
## What's next for murmr
* Improve AI pipeline processing, modifying algorithms to decrease computation time
* Include multi-processing to return content faster
* Integrate user-interviews to improve usability + generally focus more on usability
|
losing
|
## Inspiration
We created AR World to make AR more accessible for everyone. Through the mobile app, any static image in the real world is instantly replaced by a video. Through the web app, users can upload any image and video pair to be replaced in real-time. The end result is a world where movie posters turn into trailers, textbook diagrams become tutorial videos, and newspapers/paintings come to life straight out of Harry Potter. These are just a few of the endless possibilities in how AR World can improve and transform the way we learn, entertain, and share information.
## What it Does
AR World is an Android app that recognizes images seen through the phone's camera and seamlessly replaces images with its corresponding video using AR Core. It also includes a React web app that allows users to upload their own image and video pairs. This allows businesses like publishers, news companies, or museums to create content for its customers. Individual users may also upload their own images and videos to customize their experience with the app.
## How We Built It
We built the mobile app using Android Studio, with Sceneform and AR Core on the backend to recognize images and map them to the corresponding videos. In the first pass, we recognized certain static images and replaced them with the appropriate videos. Then, we built a web page and API to accept more photo-video pairs that can be identified by the cameras of users. Thus, we needed AWS S3 to store these photos and videos, MediaConvert to convert them from MP4 format to a streamable DASH-ISO format, and CloudFront to serve the video streaming requests. Also, MongoDB was required to store a map from the image to the corresponding video link on S3.
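An in-memory sketch of the image-to-stream mapping that the MongoDB collection provides — the URL shape and names are illustrative (MediaConvert's DASH-ISO output is referenced by a manifest, assumed here to end in `.mpd`):

```python
class PairRegistry:
    """In-memory stand-in for the MongoDB image-to-video map.

    Keys are image identifiers (e.g. an S3 object key); values are the
    CloudFront streaming URLs produced after MediaConvert runs. The
    domain and URL shape here are hypothetical.
    """

    def __init__(self):
        self._pairs = {}

    def register(self, image_key, mp4_key):
        # DASH-ISO output is addressed via its .mpd manifest
        stream_url = "https://example.cloudfront.net/" + mp4_key.rsplit(".", 1)[0] + ".mpd"
        self._pairs[image_key] = stream_url
        return stream_url

    def lookup(self, image_key):
        """Return the stream URL for a recognized image, or None."""
        return self._pairs.get(image_key)
```

The mobile app's image recognizer would call `lookup` with the matched image's identifier and anchor the returned stream in AR.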
## Challenges We Ran Into
There were several challenging aspects to our project. To start, streaming resources from AWS, GCP or some cloud storage provider onto a device on demand proved to be a hurdle.
* Getting the video to stay anchored in the real-world
* Recognizing a static image
* Automating the conversion from user-uploaded mp4 to a streamable format
* Dynamically updating the image-video pairing database
## Accomplishments That We're Proud Of
We think it's an accomplishment to have a fully functional app that can effectively recognize images within a certain set and replace them with the desired video from end to end (with the back-end, front-end, and infrastructure all complete).
## What We Learned
We learned that there are a lot of unexpected issues when connecting different resources, and it takes a lot of patience and debugging to work through them. None of us had worked with Kotlin or AR before!
## What's Next for AR World
For AR World, the next big step would be to create organizations in which people can be enrolled. This would help assign a group of picture-video pairs that pertain to a group (e.g. a group on a museum tour, to see more information about artifacts in the museum).
Another big step would be to give the full VR experience to the customer. With headsets like Google Cardboard, we can help people explore different parts of their environments simultaneously.
|
## Inspiration
The purchase of goods has changed drastically over the past decade, especially over the period of the pandemic. Although with these online purchases comes a drawback, the buyer can not see the product in front of them before buying it. This adds an element of uncertainty and undesirability to online shopping as this can cost the consumer time and the seller money in processing returns, in fact, a study showed that up to 40% of all online purchases are returned, and out of those returned items just 30% were resold to customers, with the rest going to landfills or other warehouses.
With this app, we hope to reduce the number of returns by putting the object the user wants to buy in front of them before they buy it so that they know exactly what they are getting.
## What it does
Say you are looking to buy a tv but are not sure if it will fit or how it will look in your home. You would be able to open the Ecomerce ARena Android app and browse the TVs on Amazon(since that's where you were planning to buy the TV from anyways). You can see all the info that Amazon has on the TV but then also use AR mode to view the TV in real life.
## How we built it
To build the app we used Unity, coding everything within the engine using C#. We used the native AR foundation function provided and then built upon them to get the app to work just right. We also incorporated EchoAr into the app to manage all 3d models and ensure the app is lean and small in size.
## Challenges we ran into
Augmented Reality development was new to all of us as well as was the Unity engine, having to learn and harness the power of these tools was difficult and we ran into a lot of problems building and getting the desired outcome. Another problem was how to get the models for each different product, we decided for this Hackathon to limit our scope to two types of products with the ability to keep adding more in the future easily.
## Accomplishments that we're proud of
We are really proud of the final product for being able to detect surfaces and use the augmented reality capabilities super well. We are also really happy that we were able to incorporate web scraping to get live data from Amazon, as well as the echo AR cloud integration.
## What we learned
We learned a great deal about how much work and how truly amazing it is to get augmented reality applications built even for those who look simple on the surface. There was a lot that changed quickly as this is still a new bleeding-edge technology.
## What's next for Ecommerce ARena
We hope to expand its functionality to cover a greater variety of products, as well as supporting other vendors aside from Amazon such as Best Buy and Newegg. We can also start looking into the process for releasing the app into the Google app store, might even look into porting it to Apple products.
|
## Inspiration
* Inspired by issues with present-day social media, which focuses more on likes and views as opposed to photo-sharing
* We wanted to connect people on the internet and within communities in a positive, immersive experience
* Bring society closer together rather than push each other away
## What it does
* Social network for people to share images and videos to be viewed in AR
* Removed parameters such as likes, views, and engagement to focus primarily on media-sharing
## How we built it
* Used Google Cloud Platform as our VM host for our backend
* Utilized web development tools for our website
* Git to collaborate with teammates
* Unity and Vuforia to develop AR
## Challenges we ran into
* Learning new software tools, but we all persevered and had each other's backs
* Using Unity and learning how to use Vuforia in real time
## Accomplishments that we're proud of
* Learning Git, and a bunch more new software that we have never touched!
* Improving our problem solving and troubleshooting skills
* Learning to communicate with teammates
* Basics of AR
## What we learned
* Web development using HTML, CSS, JavaScript and Bootstrap
## What's next for ARConnect
* Finish developing:
  * RESTful API
  * DBM
* Improve UX by:
  * Mobile app
  * Adding depth to user-added images (3D) in AR
  * User accessibility
|
partial
|
## Inspiration
Every project aims to solve a problem and address people's concerns. When we walk into a restaurant, we are often disappointed that few photos are printed on the menu, yet we are always eager to find out what a dish looks like. Including a nice-looking picture alongside a food item increases sales by 30%, according to Rapp, and it is a big inconvenience for customers if they don't understand the name of a dish. This is the problem we are aiming at, and where we come in: we want to create a better impression for every customer and build a more customer-friendly restaurant culture, so that everyone immediately knows what they would like to eat and gets a first impression of a specific dish in a restaurant.
## How we built it
We mainly used ARKit, MVC, and various APIs to build this iOS app. We start by entering an AR session, then crop the image programmatically and feed it to the OCR service from Microsoft Azure Cognitive Services. It recognizes the text from the image, though not perfectly, so we then feed the recognized text to Azure's Spell Check service to further improve its quality. Next, we use the Azure Image Search service to look up the dish image from Bing, using Alamofire and SwiftyJSON to fetch the image. We create a virtual card using SceneKit and place it above the menu in the AR view. We use Firebase as the backend database and for authentication, and we built interactions between the virtual card and users so that users can see more information about the dishes they order.
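As a rough sketch of the post-OCR step in this pipeline, the Azure OCR endpoint returns JSON nested as regions, lines, and words, which must be flattened back into candidate dish names before spell-checking. The sample response below is illustrative, not a captured API payload, and the app itself does this in Swift rather than Python.

```python
def extract_lines(ocr_response: dict) -> list[str]:
    """Flatten an Azure-style OCR response into one string per printed line."""
    lines = []
    for region in ocr_response.get("regions", []):
        for line in region.get("lines", []):
            text = " ".join(word["text"] for word in line.get("words", []))
            if text:
                lines.append(text)
    return lines

# Illustrative response shape (regions -> lines -> words):
sample = {
    "regions": [
        {"lines": [
            {"words": [{"text": "Kung"}, {"text": "Pao"}, {"text": "Chicken"}]},
            {"words": [{"text": "Mapo"}, {"text": "Tofu"}]},
        ]}
    ]
}

print(extract_lines(sample))  # ['Kung Pao Chicken', 'Mapo Tofu']
```

Each extracted line can then be sent to the spell-check and image-search services.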
## Challenges we ran into
We ran into various unexpected challenges when developing augmented reality features and using the APIs. First, there is very little documentation on using Microsoft's APIs in iOS apps; we learned how to use third-party libraries for building HTTP requests and parsing JSON. Second, we had a really hard time understanding how augmented reality works in general and how to place a virtual card within SceneKit. Last, we were challenged to develop the same project as a team: it was the first time each of us was pushed to use Git and GitHub, and we learned a great deal about branches and version control.
## Accomplishments that we're proud of
Having learned Swift and iOS development for only a month, we created our very first AR app. Choosing a difficult, high-tech field was a big challenge, and that is what we are most proud of. In addition, we integrated several APIs and created many "objects" in AR, and both work well. We encountered a few bugs during development, but we fixed them all. We're proud of combining some of the most advanced technologies in software, such as AR, cognitive services, and computer vision.
## What we learned
Over the course of development, we learned how to create our own AR models, how an AR scene is structured, and how to combine different APIs to achieve our main goal. First of all, we improved our ability to code in Swift, especially for AR: creating objects in the AR world taught us the tree structure of the scene and the relationships between parent nodes and their children. We also got to know Swift more deeply, specifically its MVC model. Last but not least, the bugs taught us how to solve problems as a team and how to reduce the chance of buggy code next time. Most importantly, this hackathon showed us the strength of teamwork.
## What's next for DishPlay
We want to build more interactions with ARKit, including displaying a collection of dishes on a 3D shelf, or cool animations showing how favorite dishes are made. We also want to build a large-scale database for comments, ratings, and other related information about dishes. We are happy that Yelp and OpenTable bring us closer to restaurants, and we are excited about our project because it will bring us closer to our favorite food!
|
## Inspiration
During last year's World Wide Developers Conference, Apple introduced a host of new innovative frameworks (including but not limited to CoreML and ARKit) which placed traditionally expensive and complex operations such as machine learning and augmented reality in the hands of developers such as myself. This incredible opportunity was one that I wanted to take advantage of at PennApps this year, and Lyft's powerful yet approachable API (and SDK!) struck me as the perfect match for ARKit.
## What it does
Utilizing these powerful technologies, Wher integrates with Lyft to further enhance the process of finding and requesting a ride by improving ease of use, safety, and even entertainment. One issue with overhead navigation methods is, quite simply, the lack of a 3rd dimension. A traditional overhead view tends to complicate on-foot navigation more than it helps, and, even more importantly, requires users to bury their faces in their phones. This pulls attention from the user's surroundings and poses a threat to their safety, especially in busy cities. Wher resolves all of these concerns by bringing the Lyft experience into augmented reality, which allows users to truly see the location of their driver and destination, pay more attention to where they are going, and have a more enjoyable and modern experience in the process.
## How I built it
I built Wher using several of Apple's frameworks, including ARKit, MapKit, CoreLocation, and UIKit, which allowed me to build the foundation of the app and the "scene" necessary to create and display an augmented reality plane. Using the Lyft API, I was able to gather information about available drivers in the area, including their exact position (in real time), cost, ETA, and the service they offered. This information was used to populate the scene and deep link into the Lyft app itself to request a ride and complete the transaction.
## Challenges I ran into
While Apple's well-documented frameworks and Lyft's API both reduced the learning curve, there were still several technical hurdles to overcome. The first was Lyft's API itself: while great in many respects, Lyft had yet to create a branch fit for use with Swift 4 and iOS 11 (required for ARKit), which meant I had to rewrite portions of their LyftURLEncodingScheme and LyftButton classes in order to continue with the project. Another challenge was finding a way to represent variance in coordinates and 'simulate distance', so as to make the AR experience authentic. This, like the first challenge, became manageable with enough thought and math. One of the last significant challenges was drawing driver "bubbles" in the AR plane without graphics glitches.
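One plausible version of the coordinate math mentioned above: project the GPS offset between the user and a driver onto a local flat plane (an equirectangular approximation, fine at city scale), then compress the distance so far-away drivers still fit in view. The compression factor here is an illustrative choice, not taken from the app, which does this in Swift.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def gps_to_ar(user_lat, user_lon, drv_lat, drv_lon, scale=0.01):
    """Return (x, z) scene coordinates in metres: east and north of the user,
    scaled down so distant drivers remain visible in the AR view."""
    lat0 = math.radians(user_lat)
    # Equirectangular projection of the lat/lon delta onto a local plane.
    x = EARTH_RADIUS_M * math.radians(drv_lon - user_lon) * math.cos(lat0)
    z = EARTH_RADIUS_M * math.radians(drv_lat - user_lat)
    return (x * scale, z * scale)

# A driver ~111 m due north appears ~1.1 m ahead in the scene.
print(gps_to_ar(40.0, -75.0, 40.001, -75.0))
```

The same scaling trick is how an AR scene can "simulate distance" without placing objects hundreds of metres away.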
## Accomplishments that I'm proud of
Despite the many challenges that this project presented, I am very happy that I persisted and worked to complete it. Most importantly, I'm proud of just how cool it is to see something so simple represented in AR, and how different it is from a traditional 2D View. I am also very proud to say that this is something I can see myself using any time I need to catch a Lyft.
## What I learned
With PennApps being my first Hackathon, I was unsure what to expect and what exactly I wanted to accomplish. As a result, I greatly overestimated how many features I could fit into Wher and was forced to cut back on what I could add. As a result, I learned a lesson in managing expectations.
## What's next for Wher (with Lyft)
In the short term: adding a social aspect, allowing "friends" to organize and mark designated meet-up spots for a Lyft, to greatly simplify the process of a night out on the town. In the long term, I hope to be speaking with Lyft!
|
## Inspiration
* None of my friends wanted to build an iOS application with me, so I built an application to find friends to hack with
* I would like to reuse the swiping functionality in another personal project
## What it does
* Create an account
* Make a profile of skills, interests and languages
* Find matches to hack with based on profiles
* Check out my Video!
## How we built it
* Built using Swift in Xcode
* Used Parse and Heroku for backend
## Challenges we ran into
* I got stuck on so many things, but luckily not for long because of..
* Stack overflow and youtube and all my hacking friends that lent me a hand!
## Accomplishments that we're proud of
* Able to finish a project in 36 hours
* Trying some of the
* No dying of caffeine overdose
## What we learned
* I had never made an iOS app before
* I had never used Heroku before
## What's next for HackMates
* Add chat capabilities
* Add UI
* Better matching algorithm
|
partial
|
## Inspiration
One of our own members was worried about his puppy, which inspired us to create this project so he could keep an eye on him.
## What it does
Our app monitors your dog(s) and determines their mood and emotional state based on their sounds and body language, optionally notifying the owner about any changes: specifically, if the dog becomes agitated for any reason, manages to escape from wherever they are supposed to be, or falls asleep or wakes up.
## How we built it
We built the behavioral detection using OpenCV and TensorFlow with a publicly available neural network. The notification system uses the Twilio API to notify owners via SMS. The app's user interface was created using JavaScript, CSS, and HTML.
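A minimal sketch of the notification side: text the owner only when the classifier's label actually changes (e.g. "calm" to "agitated"), not on every frame. The Twilio call shown follows Twilio's documented Python helper library, but the state names, phone numbers, and message wording are all placeholders, not taken from the app.

```python
# States worth interrupting the owner for (our assumption, not PupTrack's list).
NOTIFY_STATES = {"agitated", "escaped", "asleep", "awake"}

def state_changes(labels):
    """Yield (old, new) pairs whenever the per-frame label changes."""
    prev = None
    for label in labels:
        if prev is not None and label != prev:
            yield (prev, label)
        prev = label

def notify_owner(old, new, client=None):
    """Build (and optionally send) an SMS for a state change."""
    if new not in NOTIFY_STATES:
        return None
    body = f"PupTrack: your dog went from {old} to {new}."
    if client is not None:  # e.g. client = twilio.rest.Client(SID, TOKEN)
        client.messages.create(body=body, from_="+15550100000", to="+15550100001")
    return body

frames = ["calm", "calm", "agitated", "agitated", "asleep"]
print([notify_owner(o, n) for o, n in state_changes(frames)])
```

Passing `client=None` makes the logic testable without actually sending texts.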
## Challenges we ran into
We found it difficult to identify the emotional state of the dog using only a camera feed. Designing and writing a clean and efficient UI that worked with both desktop and mobile platforms was also challenging.
## Accomplishments that we're proud of
Our largest achievement was determining whether the dog was agitated, sleeping, or had just escaped using computer vision. We are also very proud of our UI design.
## What we learned
We learned some more about utilizing computer vision and neural networks.
## What's next for PupTrack
* KittyTrack, possibly
* Improving the detection so it is more useful for our team member
|
Demo: <https://youtu.be/cTh3Q6a2OIM?t=2401>
## Inspiration
Fun Mobile AR Experiences such as Pokemon Go
## What it does
First, a single player hides a virtual penguin somewhere in the room. Then, the app creates hundreds of obstacles for the other players in AR. The player that finds the penguin first wins!
## How we built it
We used AWS and Node.js to create a server to handle real-time communication between all players. We also used Socket.IO so that we could easily broadcast information to all players.
## Challenges we ran into
For the majority of the hackathon, we were aiming to use Apple's Multipeer Connectivity framework for real-time peer-to-peer communication. Although we wrote significant code using this framework, we had to switch to Socket.IO due to connectivity issues.
Furthermore, shared AR experiences are a very new field with a lot of technical challenges, and it was very exciting to work through bugs in ensuring that all users see similar obstacles throughout the room.
## Accomplishments that we're proud of
For two of us, it was our very first iOS application. We had never used Swift before, and we had a lot of fun learning to use Xcode. For the entire team, it was our first time working with AR or Apple's ARKit.
We are proud we were able to make a fun and easy-to-use AR experience. We were also happy to use retro styling in our application.
## What we learned
- Creating shared AR experiences is challenging but fun
- How to work with iOS's Multipeer Connectivity framework
- How to use ARKit
## What's next for ScavengAR
* Look out for an app store release soon!
|
## Inspiration
Coming into DeltaHacks, we wanted to build something that would improve the delivery of education to people who require different learning strategies. We decided to focus on students with ADHD. People with ADHD are able to focus, but only for a short period of time before they get distracted. Our extension puts them into a position to best use their specific strengths.
## What it does
Focus Pocus divides the user's screen time into work and leisure. The user is presented with their work for a set amount of time, during which they may not access anything else. After the interval ends, they are given a short break and presented with a leisurely activity, like a YouTube video, before continuing their work again. In this way, the extension provides structure to their breaks, and they are able to work better when required to.
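The work/break cycle above can be sketched as a tiny pure function: given how long the user has been active, decide whether to show their work or a leisure activity. The 25/5-minute split is an assumed default for illustration, not the extension's actual setting (the real extension is JavaScript).

```python
def phase(elapsed_s, work_s=25 * 60, break_s=5 * 60):
    """Return 'work' or 'break' for a repeating work/break cycle,
    based on elapsed seconds since the session started."""
    return "work" if elapsed_s % (work_s + break_s) < work_s else "break"

print(phase(10 * 60))  # work: 10 minutes into the first work interval
print(phase(27 * 60))  # break: 2 minutes into the first break
print(phase(31 * 60))  # work: 1 minute into the second cycle
```

An extension would call this from a timer and block or unblock pages depending on the returned phase.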
## How we built it
We built it using JavaScript, jQuery, and CSS Grid. It was designed with a simple UI in order to increase productivity and learning.
## Challenges we ran into
We were very happy with our idea, but we had very limited experience with JavaScript and web development. It was also our first time building a Chrome extension. As a result, it took a lot of time to understand how JavaScript actually works and to implement it.
## Accomplishments that we're proud of
We're very proud that we came up with an idea that could really make positive change for a lot of people. We're also happy that we were able to have a working demo, despite our limited working knowledge of the technologies used.
## What we learned
We learned a lot about JavaScript and how to use it and how to build extensions. We also learned about the various libraries of JavaScript, like React and Angular, and look forward to implementing those in the future.
## What's next for Focus Pocus
- More customizability, such as custom leisure activities and interval timings.
- Ability to check if answers are correct using AI.
- Storing the user's completed homework and tasks in the cloud.
- Fixing the Chrome extension feature.
|
winning
|
## Inspiration
While we were brainstorming ideas, we realized that two of our teammates are international students from India, also from the University of Waterloo. Their inspiration for this project was based on what they had seen in real life. Noticing how impactful access to proper healthcare can be, and how much this depends on socioeconomic status, we decided to create a healthcare kiosk that can be used by people in developing nations. By designing an interface that focuses heavily on images, it can be understood by those who are illiterate, as is the case in many developing nations, and can bypass language barriers. This application is the perfect combination of all of our interests, and allows us to use tech for social good by improving accessibility in the healthcare industry.
## What it does
Our service, Medi-Stand, is targeted toward residents of regions who will have the opportunity to monitor their health through regular, self-administered check-ups. By creating healthy citizens, Medi-Stand has the potential to curb the spread of infectious diseases before they become a bane to society, and to build a more productive one. Healthcare reforms are becoming more and more necessary for third-world nations to progress economically and move toward developed economies that place greater emphasis on human capital. We have also included supply-side policies and government injections through the integration of systems that can streamline this process by creating a database and eliminating paperwork, making the entire process smoother for both patients and doctors. This service will be available to patients through kiosks near local communities, to save their time and keep their health in check. The first time users visit a government-run healthcare facility, they can create a profile and upload health data that is currently on paper or in emails all over the interwebs. By inputting this information manually into the database once, we can access it later using the system we've developed. Over time, the data can be entered automatically using sensors on the kiosk and by the doctor during consultations, but this depends on 100% compliance.
## How I built it
In terms of UX/UI, the design was created in Sketch. Beginning with mock-ups on paper, two members of the team brainstormed customer requirements for a healthcare system of this magnitude and which features we could implement in a short period of time. After hours of deliberation, we decided to create a simple interface with 6 different screens that a user would be faced with. After choosing basic icons, a font that can be read by those with dyslexia, and accessible colours (i.e., ones that can be distinguished even by the colour blind), we had created a user interface that could be easily understood by a large population.
In terms of the backend, we wanted to create the doctor's side of the app so that doctors could access patient information. It was written in Xcode and connects to a Firebase database, which holds the patient's information and displays it visually on an iPhone emulator. The database entries were fetched in JSON notation, using requests.
In terms of the Arduino hardware, we used a Grove temperature sensor V1.2 along with a Grove Base Shield to read values from the sensor and display them on the screen. The device has a detectable range of -40 to 150 °C and an accuracy of ±1.5 °C.
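For reference, the Grove temperature sensor V1.2 is an NTC thermistor, so the raw analog reading has to be converted to degrees. The sketch below shows that conversion in Python for clarity (the kiosk would do it in Arduino C++); the constants B = 4275 and R0 = 100 kΩ follow Seeed's public example code for this sensor, but should be verified against the datasheet.

```python
import math

B = 4275.0   # thermistor B constant (Seeed example value; verify for your unit)
T0 = 298.15  # 25 °C expressed in kelvin

def reading_to_celsius(analog_value, adc_max=1023):
    """Convert a 10-bit Arduino analog reading to degrees Celsius."""
    r_ratio = adc_max / analog_value - 1.0          # R / R0 of the thermistor
    kelvin = 1.0 / (math.log(r_ratio) / B + 1.0 / T0)
    return kelvin - 273.15

print(round(reading_to_celsius(512), 1))  # mid-scale reading is roughly 25 °C
```

Higher analog readings correspond to lower thermistor resistance and therefore higher temperatures.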
## Challenges I ran into
When designing the product, one of the challenges we chose to tackle was accessibility. We had trouble understanding how to make a healthcare product more accessible. Oftentimes, healthcare products exist only on the doctor's side, and patients simply take home their prescription and hold the doctors to an unnecessarily high level of expectation. We wanted both sides of this interaction to understand what was happening, which is where the app came in. Our teammates made it clear that many people from lower-income households in a developing nation such as India cannot access hospitals due to the high costs, and cannot obtain this information elsewhere due to accessibility issues. I spent a lot of time researching how to make the app user-friendly and which principles other designers had incorporated to make their apps accessible. In doing so, we lost a lot of time focusing more on accessibility than on overall design. Though we adhered to the challenge requirements, this may have come at the cost of a more polished user experience.
## Accomplishments that I'm proud of
For half the team, this was their first Hackathon. Having never experienced the thrill of designing a product from start to finish, being able to turn an idea into more than just a set of wireframes was an amazing accomplishment that the entire team is proud of.
We are extremely happy with the UX/UI that we were able to create given that this is only our second time using Sketch; especially the fact that we learned how to link and use transactions to create a live demo. In terms of the backend, this was our first time developing an iOS app, and the fact that we were able to create a fully functioning app that could demo on our phones was a pretty great feat!
## What I learned
We learned the basics of front-end and back-end development as well as how to make designs more accessible.
## What's next for MediStand
* Integrate the various features of this prototype.
* How can we make this a global hack?
* MediStand is a private company that can begin to sell its software to governments (as these are the people who focus on providing healthcare).
* Finding more ways to make this product more accessible.
|
## Inspiration
In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs, adding up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry using machine learning, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. The data is pushed onto a MySQL server and fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model, which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model's result; this iterative process trains the model over time. The process aims to ease the stress on parents and ensure that those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user: using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
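The hashed-identifier idea can be sketched with Python's standard library: combine the user's data with SHA-256 to produce a hard-to-reverse token for checking evaluation status. The exact fields combined here are an assumption for illustration; a real deployment should include a server-side secret or random salt rather than relying on user data alone.

```python
import hashlib

def status_token(email: str, infant_id: str, secret_salt: str) -> str:
    """Derive a deterministic, hard-to-reverse lookup token with SHA-256."""
    payload = f"{email}|{infant_id}|{secret_salt}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

token = status_token("parent@example.com", "infant-42", "server-side-salt")
print(len(token))  # 64 hex characters
```

The same inputs always yield the same token, so the patient can present it later to retrieve the evaluation status without exposing their raw details.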
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it had already been done, so we were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and with the help of mentors and peers we were able to make a fully functional product. As a team, we picked up ML concepts and databasing at an accelerated pace. We were challenged as students, as upcoming engineers, and as people, and our ability to push through and deliver results was shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us, but these components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely gave each user a unique, nearly impossible-to-reverse hash to let them check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and about the inner workings of the HTML code of a website. We also learnt how to set up a server and configure it for remote access, and a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, and to create a more reliable and user-friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application for iOS and Android. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
|
## Inspiration
Not wanting to keep moving my stuff around all the time while moving between SF and Waterloo, Canada.
## What it does
It will call a Postmate to pick up your items, which will then be delivered to our secure storage facility. The Postmate will be issued a one time use code for the lock to our facility, and they will store the item. When the user wants their item back, they will simply request it and it will be there in minutes.
## How I built it
The stack is Node + Express, and the app is on Android. It is hosted on Azure. We used the Postmates and HERE APIs.
## Challenges I ran into
## Accomplishments that I'm proud of
A really sleek and well built app! The API is super clean, and the Android interface is sexy
## What I learned
## What's next for Stockpile
Better integrations with IoT devices and better item management.
|
winning
|
## What it does
Danstrument lets you video call your friends and create music together using only your actions. You can start a call which generates a code that your friend can use to join.
## How we built it
We used Node.js to create our web app, which employs WebRTC to allow video calling between devices. Movements are tracked with pose estimation from TensorFlow, and then vector calculations are done to trigger audio files.
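An illustrative version of that last step: from pose keypoints, trigger a sound when a wrist rises above the corresponding shoulder. The keypoint names follow common pose-estimation conventions, and the sound names are made up; the app's actual mapping from movements to audio is likely richer (and written in JavaScript).

```python
def triggered_sounds(keypoints):
    """Map raised hands to sound names. keypoints: name -> (x, y),
    with y increasing downward as in image coordinates."""
    sounds = []
    if keypoints["left_wrist"][1] < keypoints["left_shoulder"][1]:
        sounds.append("hihat")
    if keypoints["right_wrist"][1] < keypoints["right_shoulder"][1]:
        sounds.append("kick")
    return sounds

# Left hand raised, right hand down:
pose = {
    "left_wrist": (0.30, 0.20), "left_shoulder": (0.35, 0.40),
    "right_wrist": (0.70, 0.55), "right_shoulder": (0.65, 0.40),
}
print(triggered_sounds(pose))  # ['hihat']
```

Running this check on every pose-estimation frame turns body movement into a stream of note triggers.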
## Challenges we ran into
Connecting different devices with WebRTC over an unsecured site proved to be very difficult. We also wanted to have continuous sound, but found that the libraries that could accomplish this caused too many problems, so we chose to work with discrete sound bites instead.
## What's next for Danstrument
Annoying everyone around us.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
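A simple sketch of how "detects when you start and stop talking" can work: compute the RMS energy of each audio frame and compare it to a threshold, so background noise below the threshold is ignored. The real app's detector is presumably more sophisticated, and the threshold here is an arbitrary illustrative value.

```python
import math

def is_speech(frame, threshold=0.1):
    """Return True if the frame's RMS energy exceeds the threshold.
    frame: a list of audio samples normalized to [-1, 1]."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return rms > threshold

silence = [0.01, -0.02, 0.015, -0.01]   # low-energy background noise
speech = [0.4, -0.35, 0.5, -0.45]       # high-energy voiced frame
print(is_speech(silence), is_speech(speech))  # False True
```

Runs of consecutive "speech" frames mark the start of an utterance, and a run of "silence" frames marks its end, at which point the audio can be sent off for transcription.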
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was done in Canva, with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
## Inspiration
As time goes on, you and your friends go down different paths in life. The days when you all played games at your house are long in the past. We want to bring people back to these games. **Arcade Dancer** is a spin on **Just Dance**, reminiscing the Wii days while playing nostalgic songs. Wanna go to the beach beach? Catch a wave with **Nicki Minaj**. **Katy Perry** knows a place where the grass is greener! *(Pls get the reference)*
## What it does
* Users join a virtual room using the AgoraRTC platform for real-time video and audio streaming.
* The application uses AgoraRTC to handle video communication among participants.
* Pose detection is implemented using the MediaPipe library in Python for detecting human body poses.
* Pose detection is performed on the server side to analyze the dance moves of participants.
* The frontend of the application is built using React. Users can see a dance floor with video feeds of participants.
* The application includes game logic where users can start, pause, reset, and clear dance sessions.
* Pose comparison is implemented, possibly for scoring or evaluating dance performance.
* The frontend communicates with a server (implemented in Python, potentially with Flask and Socket.IO). Frames from the AgoraRTC video stream are periodically sent to the server for pose detection.
* Users can choose music or tunes for the dance session. There are controls for audio and video settings, and the ability to switch display modes (tile, pip, etc.). Users can hide remote streams and exit the virtual dance room.
## How we built it
Frontend (React):
The frontend of the application was developed using React, a JavaScript library for building user interfaces. React components were used to create the various parts of the user interface, such as the dance floor display, player list, buttons, and music picker. React Router may have been used for managing navigation between different pages or components within the application. Styling was applied using CSS to design the layout, colors, and visual elements of the user interface.
Video Streaming (AgoraRTC):
AgoraRTC, a real-time communication SDK, was integrated into the frontend to handle video streaming and communication between users. AgoraRTC provides APIs for establishing real-time audio and video connections, managing streams, and handling user events such as joining and leaving channels. Users can see live video feeds of other participants in the virtual dance room.
Pose Detection (MediaPipe):
Pose detection functionality was implemented using MediaPipe, an open-source framework developed by Google for building machine learning pipelines. MediaPipe's pose detection model was used to analyze video frames and detect human body poses in real-time. The pose detection logic runs on the server side, implemented in Python with Flask as the server framework.
Server Communication (Socket.IO):
Socket.IO, a library for real-time web applications, handles server-client communication.
The frontend communicates with the server to send and receive data, such as new player notifications, pose detection results, and game control commands.
The server-side logic handles incoming requests from clients, manages socket connections, and broadcasts messages to connected clients.
Additional Features:
Additional features such as game controls, music selection, and dance session management were implemented using a combination of React components and server-side logic.
## Challenges we ran into
Sorry for no audio on youtube vid :(
* Mediapipe refusing to install because of Python version and venv issues lol.
* AgoraRTC deciding to not display video anymore lol.
* Pose detection data was hard to clean lol.
* Connecting users to a host's room
* Sleep
## Accomplishments that we're proud of
Basically, this *bad boy* does real-time video calling! It was very hard to get everyone into a room. On top of that, having to perform pose detection and analysis was also a hurdle, and we ain't hurdlers. It was an area everyone in the group had yet to work with!
## What we learned
* AgoraRTC when I catch you AgoraRTC...
* Nick learned what mogging and looksmaxxing were.
## What's next for Arcade Dancer
* More songs! **(Comment recs)**
* More slang to teach Nick.
|
winning
|
# Inspiration
Living alone in the city has made us very lonely people. We decided that the perfect way to amend this problem was to program a companion.
# What it does
The companion, Bob, will get hungry, unhappy, and sick, and will poop; it is up to the user to keep him from dying of neglect.
# How we built it
Using an **Arduino** with a LCD screen and three buttons, it is a simple product.
# Challenges we ran into
We did not have enough parts to make the circuit and would have had to solder the wires to the LCD screen, so we decided to use 123D Circuits by Autodesk instead. It still doesn't work.
# Accomplishments that we're proud of
Bob asks the user "Please love me <3".
# What we learned
How to use/program a LCD screen, and how to stay vigilant.
# What's next for Tomodatchi
Coming in 3D to a computer near you!
|
## Inspiration
Once I heard about Velostat, I knew I had to do something with it. I have come up with so many ideas for projects, but this is the first time I have had a chance to use it. This project is a first step into the pressure sensitive world of Velostat.
## What it does
A matrix of wires across the Velostat feeds into the analog inputs of an Arduino board. From there, the data is sent serially to a computer where we run calculations and plot the surface in Mathematica.
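A minimal sketch of how each serial frame might be unpacked into a grid before plotting in Mathematica (the comma-separated line format is an assumption for illustration, not the actual wire protocol):

```python
def parse_frame(line, rows=5, cols=5):
    """Parse one comma-separated line of 25 analog readings (0-1023)
    from the Arduino into a row-major 5x5 grid for surface plotting."""
    values = [int(v) for v in line.strip().split(",")]
    if len(values) != rows * cols:
        raise ValueError(f"expected {rows * cols} readings, got {len(values)}")
    return [values[r * cols:(r + 1) * cols] for r in range(rows)]

# Example frame with readings 0, 40, 80, ... across the 25 sensing points
line = ",".join(str(i * 40) for i in range(25))
grid = parse_frame(line)
print(grid[2])  # third row: [400, 440, 480, 520, 560]
```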
## How we built it
We started with just a simple closed circuit with the Velostat as a variable resistor and calibrated it as a single force sensor. We then created a matrix of conductive tape to run across the Velostat at 25 different points. As we developed the hardware, we decided to implement some Mathematica code to visualize our results.
## Challenges we ran into
To calibrate the pressure sensor, we needed a series of data points to fit a curve. However, we had no access to any known masses or a scale. We did have some 500mL water bottles. To approximate the pressure resistance curve, we poured water from one bottle into another resting on the sensor. We sampled at a constant rate and tried to keep the flow rate consistent too. After a few tries, we found a curve that fit.
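The curve-fitting idea can be sketched as follows: assuming the sensor roughly follows a power law, a linear least-squares fit in log-log space recovers its parameters. The power-law model and the sample values below are illustrative, not our actual calibration data:

```python
import math

def fit_power_law(masses, readings):
    """Fit R = a * m**b by linear least squares in log-log space;
    Velostat resistance roughly follows a power law in applied force."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in readings]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data generated from R = 2000 * m**-0.7 — the fit should
# recover those two parameters
masses = [100, 200, 300, 400, 500]
readings = [2000 * m ** -0.7 for m in masses]
a, b = fit_power_law(masses, readings)
print(round(a), round(b, 2))  # 2000 -0.7
```

With the water-bottle data, the same fit would be run on the sampled (approximate mass, reading) pairs instead of synthetic points.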
At first, when expanding the sensor to a matrix, we neglected to account for the huge change in resistance. For hours we struggled to figure out what was wrong. Even once we figured it out, we had no solution. Then, we realized that we had some transistors, so we could make switches to get around the output impedance problem. We eventually found a new power source to use with the switches and saw immediate success in our results.
## Accomplishments that we're proud of
We are proud of the improvised solutions we came up with in the face of limited resources.
## What we learned
Neither of us had ever used Mathematica before, but we were happy to pick some up. Shout out to Kyle from Wolfram who was a tremendous help.
## What's next for SmartSurface
We hope to develop the Velostat pressure matrix for more specific applications. It has a wide variety of possible uses, from healthcare to security. We'd like to further develop the technology and explore these areas.
|
# **Cough It**
#### COVID-19 Diagnosis at Ease
## Inspiration
The pandemic has nearly crippled all nations, and in many countries people are still in lockdown. Over these two years, many innovations have emerged to find an effective way of tackling the issues of COVID-19. Of all these problems, detecting the COVID-19 strain has been the hardest so far, as it is always mutating due to rapid infections.
Just like many others, we started to work on an idea to detect COVID-19 with the help of cough samples provided by patients. What makes this app useful is its simplicity and scalability: users can record a cough sample, wait for the results to load, and get an accurate indication of whether they are likely to have COVID-19 or not.
## Objective
The current COVID-19 diagnostic procedures are resource-intensive, expensive and slow, so they lack scalability and hamper the efficiency of mass testing during the pandemic. In many cases even the physical-distancing protocol has to be violated in order to collect a subject's samples. Disposing of biohazardous samples after diagnosis is also not eco-friendly.
To tackle this, we aim to develop a mobile-based application COVID-19 diagnostic system that:
* provides a fast, safe and user-friendly way to detect COVID-19 infection from users' cough audio samples alone
* is accurate enough to be scaled up to cater to a large population, thus eliminating dependency on resource-heavy labs
* makes frequent testing and result tracking efficient, inexpensive and free of human error, thus eliminating economic and logistic barriers, and reducing the workload of medical professionals
Our [proposed CNN](https://dicova2021.github.io/docs/reports/team_Brogrammers_DiCOVA_2021_Challenge_System_Report.pdf) architecture also secured Rank 1 at the [DiCOVA](https://dicova2021.github.io/) Challenge 2021, held by IISc Bangalore researchers, amongst 85 teams spread across the globe. Despite being trained on a small dataset of only 1,040 cough samples, our model reported:
* Accuracy: 94.61%
* Sensitivity: 80% (20% false negative rate)
* AUC of ROC curve: 87.07% (on blind test set)
## What it does
The working of **Cough It** is simple. The user can simply install the app and tap to open it. The app will then ask for user permission for external storage and the microphone. The user can then just tap the record button, which takes them to a countdown-timer-like interface. Pressing the play button starts recording a 7-second clip of the user's cough, and upon completion the app navigates to the result screen to predict the chances of the user having COVID-19.
## How we built it
Our project is divided into three modules:
#### **ML Model**
Our machine learning model (a CNN architecture) is trained and deployed using the Sagemaker API, which is a part of AWS, to predict positive or negative infection from the pre-processed audio samples. The training data also contains noisy and bad-quality audio samples, so that the model is robust for practical applications.
#### **Android App**
At first, we prepared the wireframe for the app and decided the architecture of the app which we will be using for our case. Then, we worked from the backend part first, so that we can structure our app in proper android MVVM architecture. We constructed all the models, Retrofit Instances and other necessary modules for code separation.
The android app is built in Kotlin and is following MVVM architecture for scalability. The app uses Media Recorder class to record the cough samples of the patient and store them locally. The saved file is then accessed by the android app and converted to byte array and Base64 encoded which is then sent to the web backend through Retrofit.
#### **Web Backend**
The web backend is a Node.js application deployed on an EC2 instance in AWS. We chose this type of architecture for our backend service because we wanted a more reliable connection between our ML model and our Node.js application.
At first, we created a backend server using Node.js and Express.js and deployed it on an AWS EC2 instance. The server receives the audio file in Base64-encoded form from the Android client through a POST request. The file is then converted to a .wav file via a command-line module. After successfully generating the .wav file, we pass it as an argument to the pre-processor, which is a Python script. Then we call the AWS Sagemaker API to get the predictions, and the Node.js application sends the predictions back to the Android client's endpoint.
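The decode-and-convert step on the server can be sketched in Python as follows (the actual backend is Node.js, so this is an illustrative translation; the `.3gp` source format and the ffmpeg invocation are assumptions):

```python
import base64
import os
import subprocess
import tempfile

def decode_audio(b64_audio):
    """Decode the Base64 string sent by the Android client back to raw bytes."""
    return base64.b64decode(b64_audio)

def convert_to_wav(raw_bytes, out_path):
    """Write the raw recording to a temp file and shell out to ffmpeg to
    transcode it to .wav, mirroring the command-line step on the server."""
    with tempfile.NamedTemporaryFile(suffix=".3gp", delete=False) as f:
        f.write(raw_bytes)
        src = f.name
    try:
        subprocess.run(["ffmpeg", "-y", "-i", src, out_path], check=True)
    finally:
        os.unlink(src)
    return out_path

# Round-trip check of the encode/decode step (ffmpeg is not invoked here)
payload = base64.b64encode(b"fake-audio-bytes").decode("ascii")
print(decode_audio(payload))  # b'fake-audio-bytes'
```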
## Challenges we ran into
#### **Android**
Initially, in Android, we faced a lot of issues recording a cough sample, as Android offers two APIs for recording: MediaRecorder and AudioRecord. As the ML model required a .wav file of the cough sample to pre-process, we had to generate it on-device. This is possible with the AudioRecord class, but it requires heavy customization, and saving and writing to a file that way is a really tedious and buggy process. So, for the Android counterpart, we used the MediaRecorder class, which handles saving the file and all that boilerplate code; we then just access that file and send it to our API endpoint, which converts it into a .wav file for the pre-processor.
#### **Web Backend**
On the web backend side, we faced a lot of issues in deploying the ML model and in communicating with it from the Node.js application.
Initially, we deployed the Node.js application on AWS Lambda, but processing the audio file also required a Python environment, so we could not continue with Lambda, which gave us only a Node.js environment. To get the Python environment, we switched to an AWS EC2 instance for deploying the backend server.
Also, to process the audio file we had to use the ffmpeg module, which required downgrading the numpy library in Python from the latest version to an older one.
#### **ML Model**
The most difficult challenge for our ML model was getting it deployed so that it could be accessed directly from the Node.js server, which feeds the model MFCC values for prediction. Due to the complexity of the Sagemaker API and its integration with the Node.js application, this was a real challenge for us. At last, through a lot of documentation and guidance, we were able to deploy the model in Sagemaker, and we also tested some sample data through Postman.
## Accomplishments that we're proud of
Through this project, we are proud that we were able to get a real and accurate prediction on real sample data. We were able to send a successful query to the ML model hosted on Sagemaker, and the prediction was accurate.
We are also really happy that in a very small amount of time we were able to overcome so many difficulties, get the app and web backend running, and set up the whole system we planned while maintaining a proper architecture.
## What we learned
Cough It is a really interesting project to work on. It has so much potential to be one of the best diagnostic tools for COVID-19, which keeps us motivated to work on it and make it better.
In android, working with APIs like MediaRecorder has always been a difficult position for us, but after doing this project and that too in Kotlin, we feel more confident in making a production quality android app. Also, developing an ML powered app is difficult and we are happy that finally we made it.
In web, we learnt the various scenarios in which EC2 instance can be more reliable than AWS Lambda also running various script files in node.js server is a good lesson to be learnt.
In machine learning, we learnt to deploy the ML model in Sagemaker and after that, how to handle the pre-processing file in various types of environments.
## What's next for Cough It
As of now, our project is focused on our core idea, i.e., predicting by analysing the user's sample data. So our app is limited to a single user, but we have already planned to add a database for user management and to show users reports of their daily tests and their likelihood of COVID-19 on a weekly basis, as per diagnosis.
## Final Words
There is a lot of scope for this project, and we don't want to stop innovating. We would like to take our idea to more platforms, and we might also launch the app on the Play Store soon, once everything is stable enough for the general public.
Our hopes for this project are high, and we won't leave it until perfection.
|
losing
|
## Inspiration
There is a growing number of people sharing gardens in Montreal. As a lot of people share apartment buildings, it is indeed more convenient to share gardens than to have their own.
## What it does
With that in mind, we decided to create a smart garden platform that is meant to make sharing gardens as fast, intuitive, and community friendly as possible.
## How we built it
We use a plethora of sensors connected to a Raspberry Pi. Sensors range from temperature to light sensitivity, with one sensor even detecting humidity levels. Through this, we're able to collect data from the sensors and post it to a Google Sheet using the Google Drive API.
Once the data is posted on the Google Sheet, we use a Python script to retrieve the 3 latest values and average them. This allows us to detect a change and send a flag to other parts of our algorithm.
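The averaging-and-flag logic can be sketched as follows (the threshold value and sensor readings below are illustrative, not real garden data):

```python
def needs_attention(readings, threshold, window=3):
    """Average the `window` most recent sensor values (as our script does
    with the last 3 rows of the Google Sheet) and raise a flag when the
    average drops below a per-plant threshold."""
    recent = readings[-window:]
    avg = sum(recent) / len(recent)
    return avg < threshold, avg

# Soil-moisture history: the last three samples average 28.0, below a
# threshold of 30, so an alert would be sent for this plant
flag, avg = needs_attention([55, 52, 31, 28, 25], threshold=30)
print(flag, avg)  # True 28.0
```

Averaging the last few samples instead of reacting to a single reading keeps one noisy sensor value from triggering a spurious SMS alert.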
For the user, it is very simple. They simply have to text a number dedicated to a certain garden. This will allow them to create an account and to receive alerts if a plant needs attention.
This part is done through the Twilio API and python scripts that are triggered when the user sends an SMS to the dedicated cell-phone number.
We even thought about implementing credit and verification systems that allow active users to gain points over time. These points are earned once the user decides to take action in the garden after receiving a notification from the Twilio API. The points can be redeemed through the app via Interac transfer, or by simply keeping the plant once it is fully grown. In order to verify that the user actually takes action in the garden, we use visual recognition software that runs on the Azure API. Through a very simple system of QR codes, the user can scan their QR code to verify their identity.
|
## Inspiration
After learning about the current shortcomings of disaster response platforms, we wanted to build a modernized emergency services system to assist relief organizations and local governments in responding faster and appropriately.
## What it does
safeFront is a cross between next-generation 911 and disaster response management. Our primary users are local governments and relief organizations. The safeFront platform provides organizations and governments with the crucial information that is required for response, relief, and recovery by organizing and leveraging incoming disaster related data.
## How we built it
safeFront was built using React for the web dashboard and a Flask service housing the image classification and natural language processing models to process the incoming mobile data.
## Challenges we ran into
Ranking the urgency and severity of natural disasters required reconciling image recognition, language processing, and sentiment analysis on mobile data, and reporting the results through a web dashboard. Most of the team didn't have a firm grasp of React components, so building the site was how we learned React.
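One way the reconciliation could be sketched is a weighted combination of the three model outputs (the weights, the 0-1 scaling, and the sample reports are our illustrative assumptions, not the exact models used):

```python
def urgency_score(image_severity, text_urgency, sentiment, weights=(0.5, 0.3, 0.2)):
    """Combine three model outputs (each assumed scaled to 0-1) into a
    single 0-1 urgency score via a weighted average; incoming reports can
    then be ranked on the dashboard by this score."""
    w_img, w_txt, w_sent = weights
    # Low sentiment signals distress, so invert it before weighting
    return w_img * image_severity + w_txt * text_urgency + w_sent * (1 - sentiment)

reports = [
    ("flooded street", urgency_score(0.9, 0.8, 0.1)),
    ("fallen branch", urgency_score(0.3, 0.2, 0.6)),
]
ranked = sorted(reports, key=lambda r: r[1], reverse=True)
print(ranked[0][0])  # flooded street
```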
## Accomplishments that we're proud of
Built a full stack web application and a functioning prototype from scratch.
## What we learned
Stepping outside of our comfort zone is, by nature, uncomfortable. However, we learned that we grow the most when we cross that line.
## What's next for SafeFront
We'd like to expand our platform for medical data, local transportation delays, local river level changes, and many more ideas. We were able to build a fraction of our ideas this weekend, but we hope to build additional features in the future.
|
## Inspiration
We wanted to take away the stresses of all plant parents around the world. Leaving your plant children at home without any water while you vacation in the Bahamas - every plant parent's worst nightmare! However, we have a solution!
According to Statistics Canada, approximately 885.58 million Canadian dollars' worth of potted plants were sold in Canada in 2020, an increase from around 789.68 million in 2017. Most house plants, however, can only last 1-2 weeks without being watered. This can be a source of concern for people who need their plants watered while they are on vacation, and/or businesses that close during holidays and have plenty of indoor plants.
We have created an IOS app that makes it easy for people to have their plants watered for them. Studies have shown that indoor plants can clean indoor air by absorbing toxins and increasing humidity; they also boost moods, productivity, and creativity while also reducing stress levels. Not only will this platform make people’s lives easier, but it will also encourage people to buy more house plants, by eliminating a major deterrent that discourages people from buying plants.
## What it does
People sign-up as plant-keepers by entering their information and location, and will be contacted when someone needs their plants watered. On the other side, people in need of plant-keeping have access to a list of all plant-keepers in their area, and have access to their contact information.
## How we built it
We built our project using Swift, a general-purpose, multi-paradigm, compiled programming language developed by Apple, and XCode, Apple's integrated development environment for macOS. To store the list of users, we created a database in Firebase and utilised GCP. Our app queries the database in real time to get information regarding plant keepers.
## Challenges we ran into
Our team members are extremely passionate about the environment. So, we wanted to develop an app that encouraged sustainability. At the same time we wanted it to provide people with tangible, real-world benefits and also educate them. We also wanted to build an app with a clean design. All of this involved a lot of out-of-the-box thinking.
Secondly, this was the first time we actually built an app using swift from scratch, that also integrated a real time database. This caused some challenges along the way.
## Accomplishments that we're proud of
We are proud of building an application that could benefit people who want to grow indoor plants. We are also proud of our clean designs and straight-forward user interface.
## What we learned
Firstly, we learned how to communicate and collaborate in a team, within a very short period of time. Determining how to divide tasks was tricky in the beginning, but as we delineated our project plan, tasks became easier to divide. We also learned how to build mobile applications with Swift and how to use Firebase. This encouraged us to learn more about mobile development and experiment with more tools.
## What's next for Plant Keeper
First, Plant Keeper needs a map feature that could make it easier for users to find plant-keepers in their area. Second, we plan to add in-app messaging so as to simplify communication between plant keepers and those in need of plant keeping.
|
winning
|
# SWYPE
## Inspiration
At the start of this Hackathon, our team had no idea what to code. As such, we decided to think about a more pressing matter of what we wanted to eat that night. That's when we realized we had no clue. We pulled up UberEats and were overwhelmed with so many options and thought that it would be so much easier if we could look at them in smaller chunks and thus, Swype was born.
## What it does
Swype is a mobile app in which the user can input criteria such as their location, preferred price range, and cuisine of choice, and then browse a curated set of restaurants in their area. These restaurants are presented in a user-friendly format that lets users "swype" through the options. Once they have chosen an option they are interested in, Swype provides them with information on the restaurant and allows them to order.
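The criteria filtering might look like this sketch (the field names and sample values are illustrative, not the actual Yelp Fusion response schema):

```python
def filter_restaurants(restaurants, max_price, cuisine, max_distance_km):
    """Narrow the candidate list (e.g. results from a restaurant API) down
    to the user's criteria before presenting them as swipeable cards."""
    return [
        r for r in restaurants
        if r["price"] <= max_price
        and cuisine in r["categories"]
        and r["distance_km"] <= max_distance_km
    ]

restaurants = [
    {"name": "Taco Town", "price": 1, "categories": ["mexican"], "distance_km": 2.0},
    {"name": "Sushi Sky", "price": 3, "categories": ["japanese"], "distance_km": 1.2},
    {"name": "Baja Grill", "price": 2, "categories": ["mexican"], "distance_km": 9.5},
]
cards = filter_restaurants(restaurants, max_price=2, cuisine="mexican", max_distance_km=5)
print([r["name"] for r in cards])  # ['Taco Town']
```

Pre-filtering keeps the swipe deck short, which was the whole point of breaking the overwhelming list of options into smaller chunks.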
## How we built it
The UI/UX was done with Figma. Swype was built using React Native v4.x and Expo. The APIs used were: Yelp Fusion, Google Distance Matrix, and Google Geolocation.
## Challenges we ran into
By far the largest challenges we encountered stemmed from two main tasks: finding a swipe component that would work with our app, and getting the Yelp API to work. We ended up trying three or four different swipe components before we finally settled on one that wouldn't crash and would let us render what we wanted. With the Yelp API, we struggled to get the call to return what we expected, as the documentation was unclear.
## Accomplishments that we're proud of
We're really proud of our execution of the app, despite the hardships and difficulties we faced. Additionally, we felt really accomplished having been able to complete a project of this scope within the amount of time that the hackathon spanned across.
## What we learned
One of the largest learning curves for our group was working on the frontend of the project. All three of us are primarily experienced in backend development, so working with Figma and the styling was a challenging piece of the app. As well, with the majority of our experience being in web design, it was really interesting to learn how different and challenging app development is.
## What's next for Swype
We originally decided on the restaurant application for Swype based on us not knowing what to eat, however, after thinking further, we believe that Swype has applications across a person's everyday decision making. We would like to be able to add the utility for our users to be able to choose from a wider variety of topics and as well have the option to input their own options/choices to aid a variety of decisions.
|
## Inspiration
It is estimated that 5700 children become orphans each day, and this number would only increase during the pandemic. Harshil was inspired by the movie Instant Family to help tackle this problem by creating Bubble: an app that allows various organizations to be on one platform and lets families connect and attend their events.
## What it does
Bubble is a non profit app designed to be utilized by foster organizations as well as potential foster homes. Organizations can register themselves and will have the ability to host a variety of events and post about their programs. Potential foster homes and families in the nearby vicinity can then search through their pages on our app to utilize said services. Our app highlights local foster organizations to facilitate ease of navigating organization's main web pages.
## How we built it
We used Firestore for our backend, Xcode with Swift, and Android Studio with Kotlin.
## Challenges we ran into
We had some technical difficulties with Android Studio and had to adapt to each other's skill level to be able to build the app. Our teammate was unable to be at most of the hackathon so the rest of the team made changes to the database.
## Accomplishments that we're proud of
We did not know each other prior to the hackathon, but we were able to work together and communicate with one another despite the time differences. We were also able to accommodate each other's comfort levels with Firestore and our tech stacks.
## What we learned
One challenging aspect of this year's Hackathon was the obstacle of time. While creating the project, the time difference between team members hindered our productivity, but in the end we came together to create a truly unique app using varied software with little to no experience. Even though we were on opposite ends of the globe, we had common interests and worked as a team to achieve our goal.
## What's next for Bubble
We would like to implement a feature that has users connect their Facebook account or social network account to their profiles, so that families
|
# Omakase
*"I'll leave it up to you"*
## Inspiration
On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how?
## What It Does
We have built an app that recommends recipes based on the food that is in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we are able to detect the food that the user has in their fridge and recommend recipes that use their ingredients.
## What We Learned
Most of the members in our group were inexperienced in mobile app development and backend. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more.
## How We Built It
We started with an Android application with access to the user’s phone camera. This app was created using Kotlin and XML. Android’s ViewModel Architecture and the X library were used. This application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again, to classify the objects in them into specific ingredients, while circumventing the API’s imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID and a set of recipes for that ingredient was obtained. We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP.
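The recipe-intersection step described above can be sketched as follows (the ingredient-to-recipe mapping and the numeric IDs are illustrative, not actual Food.com data):

```python
from collections import Counter

def rank_recipes(recipes_by_ingredient, min_matches=2):
    """Given a mapping from each detected ingredient to the set of recipe
    IDs that use it, count how many detected ingredients each recipe
    matches and keep those matching at least `min_matches`, best first."""
    counts = Counter()
    for recipe_ids in recipes_by_ingredient.values():
        counts.update(recipe_ids)
    return [rid for rid, n in counts.most_common() if n >= min_matches]

# Three detected ingredients; recipe 101 uses all three, 102 uses two
recipes_by_ingredient = {
    "salsa": {101, 102},
    "orange": {101, 103},
    "hot sauce": {101, 102, 104},
}
print(rank_recipes(recipes_by_ingredient))  # [101, 102]
```

Counting matches rather than taking a strict set intersection means a recipe missing one detected ingredient can still surface, which matches the "majority of the ingredients" behaviour described above.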
## What We Are Proud Of
We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we put an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty looking recipes including a Caribbean black bean and fruit salad that uses oranges and salsa.
## Challenges You Faced
Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API, since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult.
## What's Next
We have big plans for our app in the future. One next step we would like to implement is allowing users to include their dietary restrictions and food preferences, so we can better match recommendations to the user. We also want to make this app available on smart fridges: currently, fridges like Samsung's have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest-expiring foods.
|
losing
|
## Inspiration
After the political unrest in Catalonia, Spain, where voting for the independence referendum was disrupted by police, we decided that the polling system of today needs a dramatic overhaul. Traditional voting systems are 1) centralized, 2) non-anonymous, and 3) vulnerable to fraud. Blockchain technology, especially Ethereum smart contracts, present the perfect solution to these problems. Thus, we set off on constructing a framework that enables groups to hold open-source, yet perfectly anonymous and secure polling systems.
## What it does
DeVote provides the base code for anyone seeking to host a free, fair, and anonymous vote. Through our simple interface, the complex task of writing a smart contract and interfacing with the Ethereum blockchain can be greatly simplified. Through our framework, anyone can set up an election, defining the candidates running for election and the valid Ethereum addresses that are allowed to vote. This accomplishes a few major tasks. First off, by defining which addresses are allowed to submit votes, we can separate valid voters from invalid voters. Secondly, instead of tying one vote to one name (which poses issues in politically unstable environments), voting is completely anonymous. After an address has voted, its value is discarded, so the anonymity of the vote sender is preserved. Finally, since all votes are tallied by each node in the blockchain, the vote count is confirmed through consensus. This removes any chance of fraud in the counting of votes (think Florida circa 2000), and results are immediate and public.
This avoids the pitfalls of traditional voting systems. When an election is controlled by a single polling entity (generally the state), we trust the entity to count the votes, validate who's allowed to vote, and keep the vote of each individual secret. In many cases, this trust does not exist. DeVote is targeted at areas that would benefit greatly from decentralized voting systems, one that is decentralized, open-source, secure, and anonymous.
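The contract's voting rules can be modelled in plain Python for illustration (the real logic lives in a Solidity smart contract; the addresses and candidates here are made up):

```python
class Election:
    """Pure-Python model of the smart contract's rules: only whitelisted
    addresses may vote, each address votes at most once, and the address
    is never stored with the ballot, so the tally stays anonymous."""

    def __init__(self, candidates, allowed_addresses):
        self.tally = {c: 0 for c in candidates}
        self.allowed = set(allowed_addresses)
        self.voted = set()  # records WHO voted, never WHAT they voted

    def vote(self, address, candidate):
        if address not in self.allowed:
            raise PermissionError("address not registered to vote")
        if address in self.voted:
            raise PermissionError("address already voted")
        self.voted.add(address)
        self.tally[candidate] += 1

election = Election(["alice", "bob"], ["0xA1", "0xB2", "0xC3"])
election.vote("0xA1", "alice")
election.vote("0xB2", "alice")
election.vote("0xC3", "bob")
print(election.tally)  # {'alice': 2, 'bob': 1}
```

Keeping `voted` and `tally` as separate structures mirrors how the contract discards an address's value after voting: the count is public while individual ballots stay unlinkable.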
## How we built it
Our framework can be broken down into two major components: backend, which is served by a Solidity smart contract running the Ethereum Virtual Machine, and frontend, which uses the web3 API to communicate with the blockchain to read and write data.
The backend is run by the EVM, which means its execution is done over the entire Ethereum blockchain. This accomplishes the goal of decentralization we set forth in our vision. Meanwhile, the frontend loads candidate data and also sends votes to the Ethereum nodes. Since DeVote requires Ether (gas) to process modifications to the data on the blockchain, apps built on DeVote require the use of Ethereum browsers. The simplest choice is MetaMask, a Google Chrome extension that can handle all Ether functions.
The website (temporarily hosted on michaelman.net), is a sample client built with DeVote. We set up an election for the next president of Princeton, and accept votes from a predefined set of addresses.
## Challenges we ran into
The biggest challenge for the team was probably getting accustomed to the technologies we were using. While the team had at least basic knowledge of the functions of blockchain, none of us had ever developed for it before, so learning the tools we were using (Solidity, Truffle, working with smart contracts) was a challenge.
Furthermore, debugging these technologies (specifically with MetaMask, a browser extension that our web app implements) was tedious and seemingly never straightforward, as we still had to access the blockchain to debug.
Finally, for the final step of syncing up with Ethereum to make our website live and fully functional, the task of actually syncing turned out to be a lot more time consuming and buggy than we anticipated.
## Accomplishments that we're proud of
We are proud that we successfully navigated the complex nature of building decentralized applications, and we are very happy to have our smart contract being executed on the Ethereum blockchain.
## What we learned
We learned how to write smart contracts in Solidity and deploy them to the Ethereum blockchain. Furthermore, writing our frontend code helped us better understand how to structure asynchronous API calls and create callback functions to process the data from the EVM.
## What's next for DeVote
As for short-term goals, we want to further develop this framework so that it is even more intuitive to use, carries fewer dependencies, and is better optimized for mobile platforms. In the long term, we aim to implement this technology in large government-sponsored elections, especially in politically unstable areas, in order to meet the initial vision of the project.
|
# Ether on a Stick
Max Fang and Philip Hayes
## What it does
Ether on a Stick is a platform that allows participants to contribute economic incentives for the completion of arbitrary tasks. More specifically, it allows participants to pool their money into a smart contract that will pay out to a specified target if and only if a threshold percentage of contributors to the pool, weighted by contribution amount, votes that the specified target has indeed carried out an action by a specified due date.
Example: A company pollutes a river, negatively affecting everyone nearby. Residents would like the river to be cleaned up, and are willing to pay for it, but only if the river is cleaned up. Solution: Residents use Ether on a Stick to pool their funds together that will pay out to the company if and only if a specified proportion of contributors to the pool vote that the company has indeed cleaned up the river.
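The payout rule described above can be sketched in plain Python (an illustrative model, not the actual Serpent contract; the due-date check is omitted):

```python
# Pay the target iff yes-votes, weighted by contribution amount, reach
# the threshold fraction of the pool.

def payout_decision(contributions, yes_voters, threshold):
    """contributions: {address: amount}; yes_voters: set of addresses
    that voted the action was carried out; threshold: fraction of total
    weight required, e.g. 0.6."""
    total = sum(contributions.values())
    yes_weight = sum(amt for addr, amt in contributions.items()
                     if addr in yes_voters)
    return yes_weight / total >= threshold

pool = {"res1": 50, "res2": 30, "res3": 20}
# res1 and res2 vote that the river was cleaned: 80/100 = 0.8 >= 0.6
print(payout_decision(pool, {"res1", "res2"}, 0.6))  # True
```

Weighting by contribution means a voter's influence is proportional to their stake in the outcome, which is the mechanism the contract enforces.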
## How we built it
Ether on a Stick implements in code a game-theoretical mechanism called a Dominant Assurance Contract, which coordinates the voluntary creation of public goods in the face of the free-rider problem. It is a decentralized app (or "dapp") built on the Ethereum network, implementing a "smart contract" in Serpent, Ethereum's Python-like contract language. Its decentralized and trustless nature enables the creation of agreements without a third-party escrow who could be influenced or corrupted into paying out the wrong party.
## Challenges
The first 20 hours of the hackathon were mostly spent setting up and learning how to use the Ethereum client and interact with the network. A significant portion was also spent planning the exact specifications of the contract and deciding what mechanisms would make the network most resistant to attack. Despite the lack of any kind of API reference, writing the contract itself was comparatively easy, but deploying it to the Ethereum testnet was another challenge, as large swaths of the underlying technology hadn't been built yet.
## What's next for Ether on a Stick
We'd like to take a step much closer to a game-theoretically sound system (don't quote us, we haven't written a paper on it) by implementing a sort of token-based reputation system, similar to that of Augur. In this system, a small portion of pooled funds is set aside to be rewarded to reputation-token-bearing oracles that correctly vote on outcomes of events. "Correctly voting" means voting with the majority of the other randomly selected oracles for a given event. We would also have to restrict events to only those which are easily and publicly verifiable; however, by decoupling voting from contribution, this bypasses a Sybil attack wherein malicious actors (or the contract-specified target of the funds) can use a large amount of financial capital to sway the vote in their favor.
|
## See our live demo!
**On Rinkeby testnet blockchain (recommended):** <https://rinkeby.kelas.dev>
**On xDAI blockchain (Warning: uses real money):** <https://xdai.kelas.dev>
## Check out our narrative StoryMaps here!
**Greenery in your Community:** <https://arcg.is/1vu448>
**Culture & Diversity in choosing your Home:** <https://arcg.is/DH511>
## Inspiration
BlockFund's mission is to build a platform to empower communities with tools and data. We aim to improve outcomes in **community civic engagement and community sustainability.**
*How BlockFund does so:*
1. Democratises community funds through blockchain and voting technology - allowing community members to submit their own project proposals and vote.
2. Highlights the need for community environmental sustainability projects by identifying local areas lacking tree foliage. Importantly, we educate the community through a narrative in an ArcGIS StoryMap. Image processing and deep learning enable the identification of even the smallest tree's foliage. **TeamTreesMini**
3. Aids potential new residents and migrants in looking for a home (and community) that fits their unique cultural heritage, beliefs and diversity needs, by outlining demographic breakdowns, religious institutions, and amenities. We also educate readers on the importance of these factors and what to consider through a narrative in an ArcGIS StoryMap.
**1. Democratises community funds through blockchain and voting technology**
In the US, Homeowner Associations (HOAs) are the main medium through which resident members pay community upkeep fees to maintain grounds, master insurance, community utilities, and overall community finances. Financial transparency varies between HOAs, but often their reports only reflect past fund usage and the choices of a few representative members. We sought a solution that democratises the project funding process – allowing residents to contribute to and vote for projects that **actually matter** to them. It's easy for community minorities to go unheard, so our voting system helps to account for that: we increase the voting weight of residents whose votes have not funded a successful project after a few attempts – thus improving the representation of minorities in any community.
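One way the weight adjustment could work, as a sketch of the idea (the exact boost and cap values here are our own illustrative choices, not the deployed contract's):

```python
# A resident's vote weight grows each time a proposal they backed fails
# to be funded, and resets to the base weight once they back a winner.

def updated_weight(weight, backed_winner, boost=0.25, cap=2.0):
    if backed_winner:
        return 1.0                      # reset to the base weight
    return min(weight + boost, cap)     # boost unheard voters, capped

w = 1.0
for _ in range(3):                      # three unsuccessful rounds
    w = updated_weight(w, backed_winner=False)
print(w)  # 1.75
```

Capping the weight prevents a perpetually outvoted bloc from eventually dominating the pool, while still amplifying voices that keep going unheard.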
**2. Highlights the need for environment sustainability projects #TeamTreesMini**
Additionally, we empower communities to engage in green urban planning. We mimic #TeamTrees on a communal scale. Climate change is an increasingly prevalent topic, and we believe illustrating the dangers in your backyard is an excellent way to encourage local action. Our StoryMap solution maps the green foliage coverage in your neighbourhood. Then, we empower the community in proposing projects on the platform to fund tree planting in each home and in common areas.
**3. Your home, why Cultural Fit and Diversity matters**
After a community profile is made, we also assist new members in choosing a community aligned with their cultural, religious and diversity interests. One of our members grew up in a neighbourhood heavily skewed towards a different racial group, and faced both explicit and subtle racism. Home seekers already take demographics into consideration, and our solution aids them in making a more informed decision from a cultural perspective. It can also support urban planning for community planners. We map diversity index scores, demographic data (generational and race), and religious institutions and amenities – aiding new home seekers in choosing their home.
The proverb "Birds of a feather flock together" describes how those of similar taste congregate in groups. However, in today's world, exposing oneself to diverse opinions and people is crucial to thriving in the workforce.
> Diversity is having a seat at the table. Inclusion is having a voice. And belonging is having that voice be heard. - Liz Fosslien
BlockFund believes that more than just price or transport convenience – diversity, belonging, and inclusion are key concepts in choosing a place to live.
BlockFund is a decentralised autonomous organisation (DAO) that pools community funds, engages the community, and allows transparent voting on projects.
## How we built it
We built and deployed the Decentralized Autonomous Organisation (DAO) smart contract on two EVM-based blockchains: Rinkeby (testnet) and xDAI.
We use AlchemyAPI as a node endpoint for our Rinkeby deployment for better data availability and consistency, while our xDAI deployment uses POA's official community node.
We deployed a React.js frontend for quick delivery of our application, leveraging Axios to asynchronously communicate with external APIs, OpenAI's GPT-3 to provide an intuitive Q&A feature promoting universal proposal comprehension, and Ant.Design/Sal for a modern, sleek, animated user interface.
We use ethers.js to communicate with blockchain nodes, and we support two main cryptocurrency wallets:
* Burner wallet (our homebrew in-browser wallet, made for easy user onboarding)
* MetaMask (a popular web3-enabled wallet for those who want better security)
On top of that, our Community Learning Kits are made using Esri ArcGIS StoryMaps for highly visual storytelling of geographic data.
Last but not least, we use Hardhat for smart contract deployment automation.
**Here are some other technologies we used:**
For blockchain:
* Ethereum
* Solidity
* Hardhat
For front-end client:
* React.js (+ Hooks + Router)
* Axios — asynchronous communication with external APIs
* OpenAI GPT-3 — intuitive Q&A feature for universal proposal comprehension
* Sal — sleek animations
* Ant.Design — modern user interface system
For mapping:
* ArcGIS WebMap
* ArcGIS StoryMap
* ArcGIS-Rest-API
* Custom Functions
Datasets:
* 2010 US Census Data
* 2018 US Census Data
* Pima AZ Foliage Data
## Challenges we ran into
Our main challenge was integrating the ArcGIS APIs in a limited timeframe. As it was a new technology for us, we really had to crunch our brainpower.
On top of that, deploying a fully working website for other people to try takes a lot of effort to make sure that all of the integrations are also working beyond localhost.
## Accomplishments that we're proud of
* We have a live website!
* We launched to two different blockchains: xDAI and Rinkeby.
* React state management!
## What we learned
* We learned that working remotely with colleagues from 4 different timezones is challenging.
* Good React state management practices will save a lot of time.
## What's next for BlockFund
* Explore how we can work with local communities to deploy this.
* Run more DAO experiments in smaller scope (family, small neighborhood, etc)
|
partial
|
## Inspiration
There were two primary sources of inspiration. The first one was a paper published by University of Oxford researchers, who proposed a state of the art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading).
The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads up display. We thought it would be a great idea to build onto a platform like this through adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case, is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.
## What it does
The user presses a button on the side of the glasses to begin recording, and presses it again to stop. The camera is connected to a Raspberry Pi, a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a POST request to a web server along with the uploaded file name. The web server downloads the video from Google Cloud, runs facial detection with a Haar cascade classifier, and feeds the result into a transformer network that transcribes the video. Once finished, a front-end web application is notified through socket communication, and it streams the video from Google Cloud while displaying the transcription output from the back-end server.
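The whole flow can be summarized as a pipeline skeleton (a sketch: the stage functions below are stubs standing in for the real Pi upload, Flask endpoint, Haar-cascade face detection, and transformer model, and every name is ours):

```python
# Stdlib-only skeleton of the Synviz pipeline described above.

def upload_to_cloud(video_bytes):          # Pi -> Google Cloud (stub)
    return "recordings/clip_001.mp4"

def detect_faces(video_bytes):             # Haar-cascade step (stub)
    return [video_bytes]                   # pretend: one face track

def transcribe(face_frames):               # transformer model (stub)
    return "hello world"

def handle_upload(video_bytes, notify):
    """What the back-end server does for one POSTed recording."""
    name = upload_to_cloud(video_bytes)
    faces = detect_faces(video_bytes)
    transcript = transcribe(faces)
    notify({"file": name, "transcript": transcript})   # socket emit (stub)
    return transcript

events = []
result = handle_upload(b"...", events.append)
print(result)  # hello world
```

Separating the stages this way is what lets each piece (hardware capture, storage, detection, transcription, front-end notification) be developed and swapped independently.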
## How we built it
The hardware platform is a raspberry pi zero interfaced with a pi camera. A python script is run on the raspberry pi to listen for GPIO, record video, upload to google cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
## Challenges we ran into
* TensorFlow proved to be difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB-tethering with a mobile device
## Accomplishments that we're proud of
* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* Design of the glasses prototype
## What we learned
* How to setup a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to setup a web server through local host tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end
## What's next for Synviz
* With stronger on-board battery, 5G network connection, and a computationally stronger compute server, we believe it will be possible to achieve near real-time transcription from a video feed that can be implemented on an existing platform like North's Focals to deliver a promising business appeal
|
## Inspiration
Meet Doctor DoDo. He is a medical practitioner, and he has a PhD degree in EVERYTHING!
Dr. DoDo is here to take care of you, and make sure you do-do your homework!
Two main problems arise with virtual learning: It's hard to verify if somebody understands content without seeing their facial expressions/body language, and students tend to forget to take care of themselves, mentally and physically. At least we definitely did.
## What it does
Doctor DoDo is a chrome extension and web-browser "pet" that will teach you about the images or text you highlight on your screen (remember his many PhD's), while keeping watch of your mood (facial expressions), time spent studying, and he'll even remind you to fix your posture (tsk tsk)!
## How we built it
With the use of:
* ChatGPT-4
* Facial Emotion recognition
* Facial Verification recognition
* Voice recognition
* Flask
* Pose recognition with mediapipe
and Procreate for the art/animation.
## Challenges we ran into
1. The initial design was hard to navigate. We didn't know if we wanted him to mainly be a study buddy, include a pomodoro timer, etc.
2. Animation (do I need to say more)
3. Integration hell, backend and frontend were not connecting
## Accomplishments that we're proud of
We're proud of our creativity! But more proud of Dr. DoDo for graduating with a PhD in every branch of knowledge to ever exist.
## What we learned
1. How to integrate backend and frontend software within a limited time
2. Integrating more
3. Animation (do I need to say more again)
## What's next for Doctor DoDo
Here are some things we're adding to Dr. DoDo's future:
Complete summaries of webpages
|
## Inspiration
All three teammates had independently converged on an idea of glasses with subtitles for the world around you. After we realized the impracticality of the idea (how could you read subtitles an inch from your eye without technology that we didn't have access to?) we flipped it around: instead of subtitles (with built-in translation!) that only you could see for *everybody else*, what if you could have subtitles for *you* that everyone else could see? This way, others could understand what you were saying, breaking barriers of language, distance, and physical impairments. The subtitles needed to be big so that people could easily read them, and somewhere prominent so people you were conversing with could easily find them. We decided on having a large screen in the front of a shirt/hoodie, which comes with the benefits of wearable tech such as easy portability.
## What it does
The device has three main functions. The first is speech transcription in multiple languages, where what you say is turned into text and you can choose the language you're speaking in. The second is speech translation, which currently translates your transcribed speech into English. The final function is displaying subtitles, and your translated speech is displayed on the screen in the front of the wearable.
## How we built it
We took in audio input from a microphone connected to a Raspberry Pi 5, which sends packets of audio every 100 ms to the Google Cloud speech-to-text API, allowing for live near real-time subtitling. We then sent the transcribed text to the Google Cloud translate API to translate the text into English. We sent this translated text to a file, which was read from to create our display using pygame. Finally, we sewed all the components into a hoodie that we modified to become our wearable subtitle device!
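The "packets of audio every 100 ms" step can be sketched as a chunking generator (an illustration of the sizing arithmetic; the real code uses Google Cloud's streaming client, and the 16 kHz/16-bit format here is an assumption):

```python
# For 16-bit mono PCM, 100 ms of audio is sample_rate // 10 samples,
# i.e. sample_rate * 2 // 10 bytes.

def chunk_pcm(pcm: bytes, sample_rate=16000, bytes_per_sample=2,
              chunk_ms=100):
    step = sample_rate * bytes_per_sample * chunk_ms // 1000
    for i in range(0, len(pcm), step):
        yield pcm[i:i + step]

one_second = bytes(16000 * 2)          # 1 s of silence at 16 kHz
chunks = list(chunk_pcm(one_second))
print(len(chunks), len(chunks[0]))     # 10 3200
```

Each yielded chunk is what would be sent in one streaming request, which is what keeps the subtitles near real-time instead of waiting for the whole utterance.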
## Challenges we ran into
There were no microphones, so we had to take a trip (on e-scooters!) to a nearby computer shop to buy microphones. We took one apart to be less bulky, desoldering and resoldering components in order to free the base components from the plastic encasing.
We had issues with 3D printing parts for different components: at one point our print and the entire 3D printer went missing with no one knowing where it went, and many of our ideas were too large for the 3D printers.
Since we attached everything to a hoodie, there were some issues with device placement and overheating. Our Raspberry Pi 5 reached 85 degrees C, and some adapters were broken due to device placement.
Finally, a persistent problem we had was using Google Cloud's API to switch between recording different languages. We couldn't find many helpful references online, and the entire process was very complicated.
## Accomplishments that we're proud of
We're proud of successfully transcribing text from audio from the taken-apart microphone. We were so proud, in fact, that we celebrated by going to get boba!
## What we learned
We learned four main lessons:
1. The materials you have access to can significantly expand your possibilities or your difficulty (having the 7" OLED display helped a lot).
2. Even given limited materials, you still have the ability to create (when we weren't able to get a microphone from the Hardware Hub, we went out, bought one not suited for our purposes, and took it apart to make it work for us).
3. Seemingly simple tasks can be very difficult and time-consuming (as we found with Google Cloud's APIs for transcription and translation).
4. Large, complex tasks can be broken down into simple, doable bits (the entire project: we definitely couldn't have made it possible without everyone taking on little bits one at a time).
## What's next for Project Tee
In the future, we hope to make the wearable less bulky and more portable by having a flexible OLED display embedded in the shirt, and adding an alternative power source of solar panels. We also hope to support more languages in the future (we currently support five: English, Spanish, French, Mandarin, and Japanese) both to translate from and to, as well as a possible function to automatically detect what language a user is speaking. As the amount of language options increases, we will likely need an app or website as an option for people to change their language options more easily.
|
winning
|
## Inspiration
We wanted to make something that linked the virtual and real worlds, but in a quirky way. On our team we had people who wanted to build robots and people who wanted to make games, so we decided to combine the two.
## What it does
Our game portrays a robot (Todd) finding its way through obstacles that are only visible in one dimension of the game. It is a multiplayer endeavor where the first player is given the task to guide Todd remotely to his target. However, only the second player is aware of the various dangerous lava pits and moving hindrances that block Todd's path to his goal.
## How we built it
Todd was built with style, grace, but most of all an Arduino on top of a breadboard. On the underside of the breadboard, two continuous rotation servo motors & a USB battery allows Todd to travel in all directions.
Todd receives communications from a custom built Todd-Controller^TM that provides 4-way directional control via a pair of Bluetooth HC-05 Modules.
Our Todd-Controller^TM (built with another Arduino, four pull-down buttons, and a bluetooth module) then interfaces with Unity3D to move the virtual Todd around the game world.
## Challenges we ran into
The first of the many challenges we ran into on this "arduinous" journey was having two Arduinos send messages to each other over Bluetooth. We had to manually configure the settings of the HC-05 modules by putting each into AT mode, setting one as the master and one as the slave, making sure the passwords and the default baud rate were the same, and then syncing the two with different code to echo messages back and forth.
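The pairing sequence described above can be written out as the AT commands issued to each HC-05 in AT mode (a sketch: the password and bind address below are placeholders, not the values we actually used):

```python
# AT commands for pairing two HC-05 modules, one master, one slave.
# The bind address is the slave module's own Bluetooth address.

MASTER_SETUP = [
    "AT+ROLE=1",               # act as master
    "AT+PSWD=1234",            # shared pairing password (placeholder)
    "AT+UART=38400,0,0",       # same baud rate on both modules
    "AT+CMODE=0",              # connect only to a fixed address
    "AT+BIND=98d3,31,fc1234",  # slave's address (placeholder)
]
SLAVE_SETUP = [
    "AT+ROLE=0",               # act as slave
    "AT+PSWD=1234",            # must match the master's password
    "AT+UART=38400,0,0",
]
print(len(MASTER_SETUP), len(SLAVE_SETUP))  # 5 3
```

In practice each command is sent over the module's serial line (with the module's KEY pin held high to stay in AT mode) and the module replies `OK` on success.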
The second challenge was to build Todd, the clean wiring of which proved to be rather difficult when trying to prevent the loose wires from hindering Todd's motion.
The third challenge was building the Unity app itself. Collision detection was an issue at times: if movements were imprecise or we collided at a weird corner, our object would fly up in the air and behave strangely, so we restrained the player's movement to certain axes. We also had to make the scene look nice with good lighting and a pleasant camera view; we tried many combinations before deciding that a top-down view of the scene was the optimal choice. Because time was limited and we wanted the game to look good, we made use of free assets (models and textures only).
The fourth challenge was establishing clear communication between Unity and Arduino. We built an interface that used the computer's serial port to connect the controller Arduino with the Unity engine. The challenge was that Unity and the controller had to exchange strings through the same serial port. It was as if two people were using the same phone line for different calls: we had to make sure that when one was talking, the other was listening, and vice versa.
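The "shared phone line" discipline can be modeled as strict turn-taking over one channel (a toy sketch with our own names, not the actual Unity/Arduino code):

```python
# Unity and the Arduino alternate strict write turns over one serial
# port, so neither side writes while the other is talking.

class SharedLine:
    def __init__(self):
        self.turn = "unity"            # whose turn it is to write
        self.buffer = None

    def send(self, who, message):
        if who != self.turn:
            raise RuntimeError(f"{who} wrote out of turn")
        self.buffer = message
        self.turn = "arduino" if who == "unity" else "unity"

    def receive(self):
        msg, self.buffer = self.buffer, None
        return msg

line = SharedLine()
line.send("unity", "MOVE:FORWARD")
print(line.receive())                  # MOVE:FORWARD
line.send("arduino", "ACK")
print(line.receive())                  # ACK
```

Enforcing a request/response rhythm like this is the simplest way to share a single half-duplex channel without the two peers corrupting each other's messages.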
## Accomplishments that we're proud of
The biggest accomplishment from this project, in our eyes, was that when virtual Todd encounters an object (such as a wall) in the virtual game world, real Todd stops.
Additionally, the margin of error between the real and virtual Todd's movements was lower than 3%, which significantly surpassed our original expectations of this project's accuracy and goes to show that our vision of a real game with virtual obstacles is achievable.
## What we learned
We learned how complex integration is. It's easy to build self-sufficient parts, but their interactions introduce exponentially more problems. Communicating via Bluetooth between Arduinos and having Unity talk to a microcontroller via serial was a very educational experience.
## What's next for Todd: The Inter-dimensional Bot
Todd? When Todd escapes from this limiting world, he will enter a Hackathon and program his own Unity/Arduino-based mastery.
|
## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that is automatically played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, each of which produces different images for the same audio input.
## How we built it:
Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began by loading Firmata onto our Arduino so that it could be controlled from Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino we used python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours to certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value that was recorded by the Arduino.
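The value-range mapping can be sketched as follows (the ranges, shapes, and palette here are illustrative, not the ones we shipped):

```python
# Each voltage reading picks a shape and colour that the pygame draw
# loop would then render.

THEMES = {
    "abstract":      [("circle", (255, 0, 0)),   ("square", (0, 0, 255))],
    "impressionist": [("blob",   (90, 160, 90)), ("dab",    (220, 200, 120))],
}

def shape_for(voltage, theme="abstract", v_max=5.0):
    """Map a 0..v_max reading onto one of the theme's shape/colour pairs."""
    palette = THEMES[theme]
    idx = min(int(voltage / v_max * len(palette)), len(palette) - 1)
    return palette[idx]

readings = [0.4, 3.9, 2.6]            # sample Arduino voltages
print([shape_for(v) for v in readings])
```

Iterating this over the whole reading array, one draw call per value, is what builds the final image, and swapping the theme swaps the palette without touching the mapping logic.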
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks, where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in python, getting the sound sensor to work, as well as learning how to work with the pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front-end app design. The next step would be to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece, and make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces.
|
## Inspiration
We, as all students do, frequently and unwillingly fall to the powers of procrastination. This invention is for when the little cute Pomodoro and Screen Time reminders are a tad too easy to ignore.
## What it does
The device sits in a predetermined area that you should stay away from in order to focus – for example, beside your bed, on the couch, or in front of your gaming PC/console. If it detects a person there, it will aim at you and fire projectiles.
## How we built it
We built it by integrating a variety of technologies. The frontend is an Android app developed using the Qualcomm HDK 8450, with controls such as connecting to the projectile gun and turning it on and off. The app also handles the computer-vision workload, detecting people and their positions via Google's ML Kit. This information is sent wirelessly over Bluetooth to an Arduino hooked up to two motors that control the aiming and firing of the projectile. The angle the projectile launcher turns through is approximated with the user sitting 50-100 cm away.
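The aiming step can be sketched as simple trigonometry (our own approximation, not the shipped code; the frame-width-in-centimetres figure is an assumed calibration constant):

```python
# The app reports where the person's bounding box sits in the camera
# frame; the launcher turns by the corresponding horizontal angle,
# assuming the user sits roughly 50-100 cm away.

import math

def aim_angle_deg(box_center_x, frame_width, distance_cm=75.0,
                  frame_width_cm=60.0):
    """Horizontal turn angle: positive is right of centre, negative left."""
    # Offset of the target from frame centre, converted to centimetres.
    offset_cm = (box_center_x / frame_width - 0.5) * frame_width_cm
    return math.degrees(math.atan2(offset_cm, distance_cm))

print(round(aim_angle_deg(480, 640), 1))   # target right of centre
print(aim_angle_deg(320, 640))             # dead centre -> 0.0
```

Because the distance is only assumed rather than measured, the angle is an approximation, which is why the write-up notes the 50-100 cm working range.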
## Challenges we ran into
We ran into multiple challenges during the project. Firstly, none of us had any experience developing an Android app and using an HDK8450, so we had a lot of ground to make up in order to start developing the app. Secondly, we found the Bluetooth module connection to be quite difficult to get working, as the official documentation seemed to be quite limited especially for beginners to Android development.
## Accomplishments that we're proud of
One thing we are extremely proud of is the number of different systems and devices we got working together smoothly. From Computer Vision, to Bluetooth Protocols, to Arduino Programming and Mechanical Design, this project brought together a whole variety of fields, and we are proud to have been able to cover all of those bases as smoothly as we did.
## What we learned
As beginners to Android development, we gained a plethora of knowledge on how to build, develop and deploy a working Android application. We also gained experience working with Arduinos, especially involving the communication aspects including sending and receiving information via Bluetooth. Finally we learned about deploying a working ML model in a solution of our own.
## What's next for Failure Management 101
We would like to add movement by putting the whole mechanism on wheels, allowing it a greater degree of freedom. We also have plans for voice control, as well as giving the robot access to your laptop to determine whether you are on non-productive websites. Finally, from a more realistic and practical standpoint, we could envision robots like these helping with patrol/guard duty as an aid to police officers, although perhaps no longer firing paper projectiles.
|
winning
|
Presentation Link: <https://docs.google.com/presentation/d/1_4Yy5c729_TXS8N55qw7Bi1yjCicuOIpnx2LxYniTlY/edit?usp=sharing>
SHORT DESCRIPTION (Cohere generated this)
Enjoy music from the good old days? Your playlist will generate songs from your favourite year (e.g. 2010) and artist (e.g. Linkin Park).
## Inspiration
We all love listening to music on Spotify, but depending on the mood of the day, we want to listen to songs on different themes. Impressed by the cool natural language processing tech that Cohere offers, we decided to create Songify that uses Cohere to create Spotify playlists based on the user's request.
## What it does
The purpose of Songify is to make the process of discovering new music seamless and hopefully provide our users with some entertainment. The algorithm is not limited in its search words, so anything Songify is prompted with will generate a playlist, whether it be out of serious music interest or for laughs.
Songify uses a web based platform to collect user input which Cohere then scans and extracts keywords from. Cohere then sends those keywords to the Spotify API which looks for songs containing the data, creates a new playlist under the user's account and populates the songs into the playlist. Songify will then return a webpage with an embedded playlist where you can examine the songs that were added instantly.
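The pipeline above can be sketched in a few lines. This is a minimal stand-in, not the actual Songify code: the stopword filter plays the role of Cohere's keyword extraction, and the resulting query string is what would be handed to Spotipy's search endpoint. The function names and stopword list are illustrative.

```python
# Sketch of the Songify flow: prompt -> keywords -> Spotify search query.
# The stopword filter stands in for the Cohere keyword-extraction call.
STOPWORDS = {"a", "an", "the", "for", "some", "songs", "me", "play", "i", "want"}

def extract_keywords(prompt: str) -> list:
    # Keep only the content-bearing words of the user's request.
    return [w for w in prompt.lower().split() if w not in STOPWORDS]

def build_search_query(keywords: list) -> str:
    # Spotipy's search takes a free-text query string like this.
    return " ".join(keywords)

query = build_search_query(extract_keywords("play some songs for rainy days"))
```

In the real app, the query would be passed to the Spotify search API and the returned track IDs added to a freshly created playlist on the user's account.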
## How we built it
The project revolved around 4 main tasks; Implementing the Spotify API, the Cohere API, creating a webpage and integrating our webpage and backend. Python was the language of choice since it supported the Spotify API, Cohere and Spotipy which extensively saved us time in learning to use Spotify's API. Our team then spent time learning about and executing our specific tasks and came together finally for the integration.
## Challenges we ran into
For most of our team, this was our first time working with Python, APIs and integrating front and back end code. Learning all these skills in the span of 3 days was extremely challenging and time consuming. The first hurdle that we had to overcome was learning to read API documentation. The documentation was very intimidating to look at and understanding the key concepts such as API keys, Authorizations, REST calls was very confusing at first. The learning process included watching countless YouTube videos, asking mentors and sponsors for help and hours of trial and error.
## Accomplishments that we're proud of
Although our project is not the most flashy, our team has a lot to be proud of. Creating a product with the limited knowledge we had and building an understanding of Python, APIs, integration and front end development in a tight time frame is an accomplishment to be celebrated. Our goal for this hackathon was to make a tangible product and we succeeded in that regard.
## What we learned
Working with Spotify's API provided a lot of insight on how companies store data and work with data. Through Songify, we learned that most Spotify information is stored in JSON objects spanning several hundred lines per song and several thousands for albums. Understanding the Authentication process was also a headache since it had many key details such as client ids, API keys, redirect addresses and scopes.
Flask was very challenging to tackle, since it was our first time dealing with virtual environments, extensive use of windows command prompt and new notations such as @app.route. Integrating Flask with our HTML skeleton and back end Python files was also difficult due to our limited knowledge in integration.
Hack Western was a very enriching experience for our team, exposing us to technologies we may not have encountered if not for this opportunity.
## What's next for HackWesternPlaylist
In the future, we hope to implement a search algorithm not only for names, but also for artists, artist genres, and the ability to scrub other people's playlists that the user enjoys listening to. The appearance of our product is also suboptimal, and cleaning up the front end of the site will make it more appealing to users. We believe that Songify has a lot of flexibility in terms of what it can grow into, and we are excited to keep working on it.
|
## Inspiration
Music is something inherently personal to each and every one of us - our favorite tracks accompany us through our highs and lows, through tough workouts and relaxing evenings. Our aim is to encourage and capture that feeling of discovering that new song you just can't stop listening to. Music is an authentic expression of ourselves, and the perfect way to [meet new people](https://www.lovethispic.com/uploaded_images/206094-When-You-Meet-Someone-With-The-Same-Music-Taste-As-You.jpg) without the clichés of the typical social media platforms we're all sick of. We're both very passionate about reviving the soul of social media, so we were very excited to hear about this track and work on this project!
## What it does
Spotify keeps tabs on the tracks you can't get enough of. Why not make that data work *for you*? With one simple login, ensemble matches you with others who share your musical ear. Using our *state-of-the-art* machine learning algorithms, we show you other users who we think you'd like based on both their and your music taste. Love their tracks? Follow them and stay tuned. ensemble is a new way to truly connect on a meaningful level in an age of countless unoriginal social media platforms.
## How we built it
We wanted a robust application that could handle the complexities of a social network, while also providing us with an extensive toolkit to build out all the features we envisioned. Our frontend is built using [React](https://reactjs.org), a powerful and well-supported web framework that gives us the flexibility to build with ease. We utilized supporting frontend technologies like Bootstrap, HTML, and CSS to help create an attractive UI, the key aspect of any social media. For the backend, we used [Django](https://www.djangoproject.com) and [Django Rest Framework](https://www.django-rest-framework.org) to build a secure API that our frontend can easily interact with. For our recommendation algorithm, we used scikit-learn and numpy to power our machine learning needs. Finally, we used PostgreSQL for our DBMS and Heroku for deployment.
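As a rough illustration of the matching idea (the real recommender uses scikit-learn and numpy over richer Spotify listening data; the genre vectors and user names below are invented), ranking users by cosine similarity of taste vectors could look like:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity: 1.0 means identical taste direction, 0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def best_matches(me, others, k=2):
    # Rank other users by how closely their taste vector aligns with mine.
    ranked = sorted(others.items(), key=lambda kv: cosine(me, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Each vector is a hypothetical per-genre listening profile: indie, pop, metal.
me = [0.9, 0.1, 0.0]
others = {"sam": [0.8, 0.2, 0.0], "alex": [0.0, 0.1, 0.9]}
```

Here `best_matches(me, others, k=1)` would surface "sam", whose indie-heavy profile is closest to mine.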
## Challenges we ran into
As with most social media platforms, users are key. Given the *very short* nature of hackathons, it obviously isn't feasible to attract a large number of users for development purposes. However, we needed a way to have users available for testing. Since ensemble is based on Spotify accounts and the Spotify API, this proved to be non-trivial. We took advantage of the Spotify API's recommendations endpoint to generate pseudo-data that resembles what a real person would have as their top tracks. With a fake name generator, we created as many fake profiles as we needed to flesh out our recommendation algorithm.
## Accomplishments that we're proud of
Our application is fully ready to use—it has all of the necessary authentication, authorization, and persistent storage. While we'd love to add even more features, we focused on implementing the core ones in their current state (if you use Spotify, feel free to log in and try it out!). You can find the live version [here](https://ensemble-dev.herokuapp.com). Despite all of the hassle of the deployment process, it was very fulfilling to see what we created, live and ready to be used by anyone in the world.
We're also proud of what we've accomplished in general! It's been a challenging yet immensely fulfilling day-and-a-half of ideation, design, and coding. Looking back at what we were able to create during this short time span, we're proud to have something to show for all the effort we've put into it.
## What we learned
We both learned a lot from working on this project. It's been a fast-paced weekend of continuously pushing new changes and features, and in doing so, we sharpened our skills in both React and Django. Additionally, utilizing the Spotify API was something neither of us had done before, and we learned a lot about OAuth 2.0 and web authentication in general.
## What's next for ensemble
Working on this project was a lot of fun, and we'd both love to keep it going in the future. There are a ton of features that we thought out but didn't have the time to implement in this time span. For example, we'd love to implement a direct messaging system, so you can directly contact and discuss your favorite songs/artists with the people you follow. The GitHub repository readme also contains complete and detailed instructions on how to set up your development environment to run the code, if anyone is interested in trying it out. Thanks for reading!
|
## Inspiration
Our idea was inspired by our group's shared interest in musical composition, as well as our interests in AI models and their capabilites. The concept that inspired our project was: "*What if life had a soundtrack?*"
## What it does
AutOST generates and produces a constant stream of original live music designed to automatically adjust to and accompany any real-life scenario.
## How we built it
We built our project in Python, using the Mido library to send note signals directly to FL Studio, allowing us to play continuous audio without needing to export to a file. The whole program is linked up to a live video feed that uses Groq AI's computer vision API to determine the mood of an image and adjust the audio accordingly.
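To give a flavor of the mood-to-music step (the scales, tempos, and mood names below are our own illustrative choices, not AutOST's actual parameters), a detected mood can be mapped to a set of MIDI note numbers; each note would then be wrapped in a `mido.Message('note_on', note=n)` and streamed to FL Studio:

```python
# Illustrative mood-to-music mapping. MIDI note 60 is middle C; scale steps
# are semitone offsets from the root.
MOODS = {
    "happy": {"root": 60, "scale": [0, 2, 4, 5, 7, 9, 11], "bpm": 128},  # major
    "tense": {"root": 57, "scale": [0, 2, 3, 5, 7, 8, 11], "bpm": 150},  # harmonic minor
}

def notes_for_mood(mood: str) -> list:
    # Turn a mood label into the pool of MIDI notes the generator draws from.
    m = MOODS[mood]
    return [m["root"] + step for step in m["scale"]]
```

The generator would then pick notes from this pool and pace them at the mood's BPM.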
## Challenges we ran into
The main challenge we faced in this project was making the generated music not only sound coherent and good, but also capable of adjusting according to parameters. It turns out that generating music mathematically is more difficult than it seems.
## Accomplishments that we're proud of
We're proud of the fact that our program's music sounds somewhat decent, and also that we were able to brainstorm a concept that (to our knowledge) has not really seen much experimentation.
## What we learned
We learned that music generation is much harder than we initially thought, and that AIs aren't all that great at understanding human emotions.
## What's next for AutOST
If we continue work on this project post-hackathon, the next steps would be to expand its capabilities for receiving input, allowing it to do all sorts of amazing things such as creating a dynamic soundtrack for video games, or integrating with smart headphones to create tailored background music that would allow users to feel as though they are living inside a movie.
|
partial
|
## Inspiration
As more and more blockchains transition to using Proof of Stake as their primary consensus mechanism, the importance of validators becomes more apparent. The security of entire digital economies, people's assets, and global currencies rely on the security of the chain, which at its core is guaranteed by the number of tokens that are staked by validators. These staked tokens not only come from validators but also from everyday users of the network. In the current system there is very little distinguishing between validators other than the APY that each provides and their name (a.k.a. their brand). We aim to solve this issue with Ptolemy by creating a reputation score that is tied to a validator's DID using data found both on and off chain.
This pain point was discovered as our club, being validators on many chains such as Evmos, wanted a way to earn more delegations through putting in more effort into pushing the community forward. After talking with other university blockchain clubs, we discovered that the space was seriously lacking the UI and data aggregation processes to correlate delegations with engagement and involvement in a community.
We confirmed this issue by reflecting on our shared experiences as users of these protocols: when deciding which validators to delegate our tokens to on Osmosis, we really had no way of choosing between validators other than judging based on APY or looking them up on Twitter to see what they did for the community.
## What it does
Ptolemy calculates a reputation score based on a number of factors and ties this score to validators on chain using Sonr's DID module. These factors include both on-chain and off-chain metrics. We fetch on-chain validator data from Cosmoscan and assign each validator a reputation score based on the number of blocks proposed, governance votes, number of delegators, and voting power; a mathematical formula normalizes this data to give each validator a score between 0 and 5. Our project includes not only the equation to arrive at this score but also a web app to showcase what a delegation UI would look like when including this reputation score. We also include mock data that ties in information from social media platforms such as Reddit, Twitter, and Discord to highlight a validator's engagement with the community, although this carries less weight than other factors.
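A hedged sketch of how such a score could be computed (the metric names, bounds, and weights below are illustrative stand-ins, not Ptolemy's actual formula): each raw metric is min-max normalized to [0, 1], then combined as a weighted average and scaled to the 0-5 range.

```python
def minmax(value, lo, hi):
    # Normalize a raw metric into [0, 1]; guard against a degenerate range.
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def reputation(metrics, bounds, weights):
    # Weighted average of normalized metrics, scaled to a 0-5 score.
    total = sum(weights.values())
    s = sum(weights[k] * minmax(metrics[k], *bounds[k]) for k in weights)
    return 5 * s / total

# Hypothetical validator: 900 of a max 1000 blocks, 12 of 20 votes, etc.
metrics = {"blocks": 900, "votes": 12, "delegators": 3000}
bounds = {"blocks": (0, 1000), "votes": (0, 20), "delegators": (0, 5000)}
weights = {"blocks": 2, "votes": 1, "delegators": 1}
```

With these numbers the validator lands at 3.75 out of 5; tuning the weights is where the judgment about what "reputation" means actually lives.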
## How we built it
First, we started with a design doc, laying out all the features. Next, we built out the design in Figma, looking at different Defi protocols for inspiration. Then we started coding.
We built it using Sonr as our management system for DIDs, React, and Chakra for the front end, and the backend in GoLang.
## Challenges we ran into
Integrating the Sonr API was quite difficult; we had to hop on a call with an engineer from the team to work through the bug. We ended up having to use the GoLang API instead of the Flutter SDK. During the ideation phase, we had to figure out what off-chain data was useful for choosing between validators.
## Accomplishments that we're proud of
We are proud of learning a new technology stack from the ground up in the form of the Sonr DID system and integrating it into a much-needed application in the blockchain space. We are also proud of the fact that we focused on deeply understanding the validator reputation issue so that our solution would be comprehensive in its coverage.
## What we learned
We learned how to bring together diverse areas of software to build a product that requires so many different moving components. We also learned how to look through many sets of documentation and learn what we minimally needed to hack out what we wanted to build within the time frame. Lastly, we also learned to efficiently bring together these different components in one final product that does justice to each of their individual complexities.
## What's next for Ptolemy
Ptolemy is named in honor of the eponymous 2nd Century scientist who generated a system to chart the world in the form of longitude/latitude which illuminated the geography world. In a similar way, we hope to bring more light to the decision making process of directing delegations. Beyond this hackathon, we want to include more important metrics such as validator downtime, jail time, slashing history, and history of APY over a certain time period. Given more time, we could have fetched this data from an indexing service similar to The Graph. We also want to flesh out the onboarding process for validators to include signing into different social media platforms so we can fetch data to determine their engagement with communities, rather than using mock data. A huge feature for the app that we didn't have time to build out was staking directly on our platform, which would have involved an integration with Keplr wallet and the staking contracts on each of the appchains that we chose.
Besides these staking related features, we also had many ideas to make the reputation score a bigger component of everyone's on chain identity. The idea of a reputation score has huge network effects in the sense that as more users and protocols use it, the more significance it holds. Imagine a future where lending protocols, DEXes, liquidity mining programs, etc. all take into account your on-chain reputation score to further align incentives by rewarding good actors and slashing malicious ones. As more protocols integrate it, the more power it holds and the more seriously users will manage their reputation score. Beyond this, we want to build out an API that also allows developers to integrate our score into their own decentralized apps.
All this is to work towards a future where Ptolemy will fully encapsulate the power of DID’s in order to create a more transparent world for users that are delegating their tokens.
Before launch, we need to stream in data from Twitter, Reddit, and Discord, rather than using mock data. We will also allow users to stake directly on our platform. Then we need to integrate with different lending platforms to bring each validator's reputation score on-chain, and launch on testnet. Right now, we cover the top 20 validators; moving forward, we will add more. On-chain, we want to query jail time and slashing history in order to create a more comprehensive reputation score for each validator. Off-chain, we want to aggregate Discord, Reddit, Twitter, and community forum posts to see validators' contributions to the chain they are validating on. We also want to create an API that allows developers to use this aggregated data on their own platforms.
|
## 💡 Inspiration
The objective of our application is to devise an effective and efficient written transmission optimization scheme, by converting esoteric text into an exoteric format.
If you read the above sentence more than once and the word ‘huh?’ came to mind, then you got my point. Jargon causes a problem when you are talking to someone who doesn't understand it. Yet, we face obscure, vague texts every day - from ['text speak'](https://www.goodnewsnetwork.org/dad-admits-hilarious-texting-blunder-on-the-moth/) to T&C agreements.
The most notoriously difficult to understand texts are legal documents, such as contracts or deeds. However, making legal language more straightforward would help people understand their rights better, be less susceptible to being punished or not being able to benefit from their entitled rights.
Introducing simpl.ai - A website application that uses NLP and Artificial Intelligence to recognize difficult to understand text and rephrase them with easy-to-understand language!
## 🔍 What it does
simpl.ai intelligently simplifies difficult text for faster comprehension. Users can send a PDF file of the document they are struggling to understand. They can select the exact sentences that are hard to read, and our NLP model recognizes what elements make them tough. You'll love simpl.ai's clear, straightforward restatements - they change to match the original word or phrase's part of speech/verb tense/form, so they make sense!
## ⚙️ Our Tech Stack
[](https://postimg.cc/gr2ZqkpW)
**Frontend:** We created the client side of our web app using React.js and JSX based on a high-fidelity prototype we created using Figma. Our components are styled using MaterialUI Library, and Intelllex's react-pdf package for rendering PDF documents within the app.
**Backend:** Python! The magic behind the scenes is powered by a combination of fastAPI, TensorFlow (TF), Torch and Cohere. Although we are newbies to the world of AI (NLP), we used a BART model and TF to create a working model that detects difficult-to-understand text! We used the following [dataset](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/complex-word-identification-dataset/cwishareddataset.zip) from Stanford University to train our [model](http://nlp.stanford.edu/data/glove.6B.zip)- It's based on several interviews conducted with non-native English speakers, where they were tasked to identify difficult words and simpler synonyms for them. Finally, we used Cohere to rephrase the sentence and ensure it makes sense!
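To make the complex-word-identification (CWI) step concrete, here is a deliberately crude heuristic stand-in for the trained BART/TensorFlow model described above (the word list and length threshold are invented for illustration; the real model learns these judgments from the annotated dataset):

```python
# Tiny common-word list standing in for corpus frequency statistics.
COMMON = {"the", "people", "their", "rights", "understand", "would", "help",
          "better", "be", "to", "more", "clause"}

def flag_complex(sentence: str, max_len: int = 8) -> list:
    # Crude CWI heuristic: a word is "complex" if it is long and not in a
    # small common-word list. A trained model replaces this rule entirely.
    words = [w.strip(".,").lower() for w in sentence.split()]
    return [w for w in words if len(w) > max_len and w not in COMMON]
```

Each flagged word would then be handed to the rephrasing step (Cohere, in our stack) along with its sentence so the replacement matches tense and part of speech.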
## 🚧 Challenges we ran into
This hackathon was filled with many challenges - but here are some of the most notable ones:
* We purposely chose an AI area that we didn't know much about (NLP, TensorFlow, Cohere API), which was a challenging and humbling experience. We faced several compatibility issues with TensorFlow when trying to deploy the server. We decided to go with the AWS Platform after a couple of hours of trying to figure out Kubernetes 😅
* Finding a dataset that suited our needs! If there were no time constraints, we would have loved to develop a dataset that is more focused on addressing tricky legal and technical language. Since that was not the case, we made do with a dataset that enabled us to produce a proof-of-concept.
## ✔️ Accomplishments that we're proud of
* Creating a fully-functioning app with bi-directional communication between the AI server and the client.
* Working with NLP, despite having no prior experience or knowledge. The learning curve was immense!
* Able to come together as a team and move forward, despite all the challenges we faced together!
## 📚 What we learned
We learned so much in terms of the technical areas; using machine learning and having to pivot from one software to the other, state management and PDF rendering in React.
## 🔭 What's next for simpl.ai!
**1. Support Multilingual Documents.** The ability to translate documents and provide a simplified version in their desired language. We would use [IBM Watson's Language Translator API](https://cloud.ibm.com/apidocs/language-translator?code=node)
**2. URL Parameter** Currently, we are able to simplify text from a PDF, but we would like to be able to do the same for websites.
* Simplify legal jargon in T&C agreements to better understand what permissions and rights they are giving an application!
* We hope to extend this service as a Chrome Extension for easier access to the users.
**3. Relevant Datasets** We would like to expand our current model's capabilities to better understand legal jargon, technical documentation etc. by feeding it keywords in these areas.
|
## Inspiration
Our core idea revolves around the concept of providing private layers for Large Language Models (LLMs). We believe that privacy is essential in today's data-driven world, and centralized solutions are not sufficient. Our inspiration stems from envisioning a future where anyone can deploy their own Anonymization node, share it in a smart contract, and give users the freedom to choose among them.
## What it does
Our project demonstrates the power of decentralized Anonymization nodes for LLMs. We have deployed different layers using OpenAI and Cohere, one focusing on privacy and the other not. Through our front-end interface, we showcase how user experiences can vary based on their choice of Anonymization module.
In the future, we envision these nodes to be highly customizable, allowing each Anonymization node to incorporate Natural Language Processing (NLP) modules for extracting sensitive inputs from prompts, making the process even more secure and user-friendly.
## How we built it
Our project is built on a decentralized architecture. Here's a high-level overview of how it works:
1. **User Interaction**: Users input their queries into an LLM-enabled device.
2. **Deploy Anonymization Node**: The query is sent to a Custom node (based on their reputation), where identifiers and sensitive information are further anonymized.
3. **LLM Processing**: The anonymized query is forwarded to the LLM provider for processing.
4. **Data Enrichment (In Future)**: The LLM provider sends the response back to the custom node. The node then injects the sensitive information back into the response.
5. **User Experience**: The enriched response is sent back to the user's device, ensuring privacy and a seamless user experience.
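Steps 2 and 4 above can be sketched with a minimal anonymization node (this is an illustrative fragment, not ChainCloak's actual implementation; a real node would cover many more identifier types, likely via NLP rather than a single regex):

```python
import re

# Matches email addresses as one simple example of a sensitive identifier.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(prompt: str):
    # Step 2: replace identifiers with placeholder tokens, remembering the
    # mapping so the original values never leave the node.
    mapping = {}
    def repl(m):
        token = "<PII_%d>" % len(mapping)
        mapping[token] = m.group(0)
        return token
    return EMAIL.sub(repl, prompt), mapping

def enrich(response: str, mapping: dict) -> str:
    # Step 4: inject the original values back into the LLM's response.
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response
```

Only the tokenized prompt travels to the LLM provider; the mapping stays on the node the user chose from the smart-contract registry.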
## Challenges we ran into
While building our decentralized Anonymization system, we faced various technical challenges, including:
1. Figuring out a way to use deployed smart contract as a registry for available nodes.
2. Connecting all three components (backend, frontend, private layer) in a manner that does not hurt user experience.
## Accomplishments that we're proud of
* Successfully deploying decentralized Anonymization nodes.
* Demonstrating how user experiences can be enhanced with privacy-focused solutions.
* Designing a system that can adapt and evolve with future NLP modules.
## What we learned
Throughout this project, we gained valuable insights into decentralized systems, smart contracts, and the importance of user privacy. We also learned how to work with the APIs provided by two LLM giants (Cohere and OpenAI).
## What's next for ChainCloak
The future of ChainCloak looks promising. We plan to:
* Expand the range of Anonymization modules and LLM providers.
* Enhance the security and customization options for Anonymization nodes.
* Collaborate with the community to build a robust ecosystem of privacy-focused solutions.
* Continue exploring new technologies and innovations in the field of decentralized AI and privacy.
We are excited about the potential impact of ChainCloak in ensuring privacy in the era of AI-powered language models.
|
winning
|
## Inspiration
I wanted to create a game that had a unique premise and controls.
## What it does
A puzzle game that has the player control four characters at once.
## How I built it
I used Python along with Pygame to complete this project.
## Challenges I ran into
At first I attempted to use Unity to complete this project, however I was not proficient enough in it to implement the complicated logic I had planned for the game.
## Accomplishments that I'm proud of
Designing the levels, creating a unique concept for a game.
## What I learned
Re-learned the intricacies of Python and Pygame.
## What's next for Splitter
More levels and features to challenge the player.
|
## Inspiration
We wanted to build a shooter that many friends could play together. We didn't want to settle for something that was just functional, so we added the craziest game mechanic we could think of to maximize the number of problems we would run into: a map that has no up or down, only forward. The aesthetic of the game is based on Minecraft (a game I admit I have never played).
## What it does
The game can host up to 5 players on a local network. Using the keyboard and the mouse on your computer, you can walk around an environment shaped like a giant cube covered in forest, and shoot bolts of energy at your friends. When you reach the threshold of the next plane of the cube, a simple command re-orients your character such that your gravity vector is perpendicular to the next plane, and you can move onwards. The last player standing wins.
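The re-orientation at a face boundary is, at its core, a 90-degree rotation of the gravity vector about the edge being crossed. The actual game does this with Unity transforms; the sketch below (pure Python, coordinate conventions invented for illustration) shows the underlying math:

```python
def rotate90(v, axis):
    # Rotate vector v by +90 degrees about a coordinate axis ("x", "y", "z").
    # These are the standard right-handed rotation matrices specialized to 90°.
    x, y, z = v
    if axis == "x":
        return (x, -z, y)
    if axis == "y":
        return (z, y, -x)
    return (-y, x, z)  # axis == "z"

# Walking off the cube's top face over an edge parallel to the z-axis:
# gravity flips from "straight down" to pointing into the neighboring face.
gravity = (0, -1, 0)
new_gravity = rotate90(gravity, "z")
```

Applying the same rotation to the character's up vector and camera keeps the transition seamless from the player's point of view.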
## How we built it
First we spent a few (many) hours learning the skills necessary. My teammate familiarized themself with a plethora of Unity functions in order to code the game mechanics we wanted. I'm a pretty decent 3D modeler, but I've never used Maya before and I've never animated a bipedal character. I spent a long while adjusting myself to Maya, and learning how the Mecanim animation system of Unity functions. Once we had the basics, we started working on respective elements: my teammate the gravity transitions and the networking, and myself the character model and animations. Later we combined our work and built up the 3D environment and kept adding features and debugging until the game was playable.
## Challenges we ran into
The gravity transitions were especially challenging. Among a panoply of other bugs that individually took hours to work through or around, the gravity transitions were not fully functional until more than a day into the project. We took a break from work and brainstormed, and we came up with a simpler code structure to make the transition work. We were delighted when we walked all the way up and around the inside of our cube-map for the first time without our character flailing and falling wildly.
## Accomplishments that we're proud of
Besides the motion capture for the animations and the textures for the model, we built a fully functional, multiplayer shooter with a complex, one-of-a-kind gameplay mechanic. It took 36 hours, and we are proud of going from start to finish without giving up.
## What we learned
Besides the myriad of new skills we picked up, we learned how valuable a hackathon can be. It is an educational experience nothing like a classroom. Nobody chooses what we are going to learn; we choose what we want to learn by chasing what we want to accomplish. By chasing something ambitious, we inevitably run into problems that force us to develop new methodologies and techniques. We realized that a hackathon is special because it's a constant cycle of progress, obstacles, learning, and progress. Progress stacks asymptotically towards a goal until time is up and it's time to show our stuff.
## What's next for Gravity First
The next feature we are dying to add is randomized terrain. We built the environment using prefabricated components that I built in Maya, which we arranged in what we thought was an interesting and challenging arrangement for gameplay. Next, we want every game to have a different, unpredictable six-sided map by randomly laying out the prefabs according to certain parameters.
|
## Inspiration
Old-school text adventure games.
## What it does
You play as Jack and get to make choices to advance the adventure. There are several possible paths to the story and you will come across obstacles and games along the way.
## How I built it
Using Python.
## Challenges I ran into
Trying to use Tkinter to create a user interface. We ended up just doing in-text graphics.
## Accomplishments that I'm proud of
This is two of the team members' first Python project. We are proud that we made working code.
## What I learned
Catherine: learned how to code in Python (mainly the syntax)
Jennifer: how to organize code so that it produces a functioning game
Aria: finally learned how to use GitHub!
## What's next for The Text Adventure of Jack
Adding user interface and sound to make the game more visually and aurally immersive.
|
losing
|
## Inspiration
At Save A Bite, we connect local restaurants, grocery stores and farmers with those in need, offering surplus food at reduced prices.
## What it does
Save A Bite connects local restaurants and grocery stores with individuals facing food insecurity, allowing users to discover and reserve surplus food at reduced prices. By facilitating the redistribution of excess food, we reduce waste and promote sustainability while ensuring that everyone has access to delicious, healthy meals. Our platform empowers communities to combat hunger and support local businesses, all while aligning with the UN's Sustainable Development Goals.
## How we built it
We built Save-A-Bite using Django for a robust backend and Auth0 for secure user authentication. Our team integrated real-time data handling with responsive design principles, ensuring a seamless user experience. This combination empowers us to effectively combat food waste.
## Challenges we ran into
The most challenging part was setting up the user authentication.
## Accomplishments that we're proud of
We're proud that we pushed through and successfully set up user authentication and created an avenue for our partners to dynamically list their products for sale without a need to constantly modify the front end.
## What we learned
We learned teamwork.
## What's next for Save-A-Bite
Save-A-Bite intends to integrate a Volunteer dashboard, where people looking to give back to their community could offer delivery services and further reduce the cost of food for our dear shoppers.
|
## Inspiration
There were times when I, my household, or my club had much leftover food that we were able to give away to students. There were also times when we were not able to give it away. :( Our team built a web application that makes it easy for community members with surplus food to give it away to someone in need (and in close proximity). This technology helps anyone fight against food waste and empower another with perfectly delicious food!
## What it does
Our website allows users to publish their donatable foods into an online database, available for other users to view. Users who are interested in retrieving those donations are able to mark their interest by deleting those food items off the database.
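The publish-and-claim flow above maps onto two operations against our Firestore collection. As an in-memory sketch (a plain dict stands in for Firestore; function names are ours, for illustration):

```python
donations = {}  # stand-in for the Firestore "donations" collection
_next_id = 0

def post_donation(food: str, donor: str) -> int:
    # Giver side: publish a donatable item and get back its document id.
    global _next_id
    donations[_next_id] = {"food": food, "donor": donor}
    _next_id += 1
    return _next_id - 1

def claim_donation(doc_id: int) -> dict:
    # Recipient side: marking interest removes the item from the board,
    # so no one else can claim it.
    return donations.pop(doc_id)
```

In the real app these become Firestore document writes and deletes, with the donor notified when their item disappears from the collection.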
## How we built it
We initially brainstormed the concept of food donation and broke it down into two perspectives, the giver and the recipient. For each side, we wrote out the basic functionalities. The giver needs to display the food/meal and get notified if someone is interested in accepting the offer. The recipient can see what food is available. We created mockups of the UI using Figma. As we started brainstorming more nice-to-have details (allergies, notifications, etc), we also looked into using Firebase Authentication and Firestore Database to store user and food information. Following a logical workflow, we added document fields to include other features to be displayed on the dashboard. Thus, the user can view and post surplus food that was available on the dashboard.
## Challenges we ran into
Though it was easy thinking of more features, implementation was much harder. As a team composed of beginner hackers, we faced a steep learning curve with React, Javascript, and even Git.
## Accomplishments that we're proud of
We are super proud to have developed a rudimentary product from idea conception to implementation as a completely novice team. :) Fun fact: 2/3 of us are bio/env. sci. majors.
## What we learned
We learned to use online APIs, to work as a team, to merge (many) Git conflicts, and to follow the workflow required of real-world applications.
## What's next for FeedYourNeighbor
Future implementation additions include using geolocation data to constrain food choices to nearby donations, enabling interactivity between giver and recipient, and hiding the addition and removal of food items on the list behind a level of abstraction.
|
## Inspiration
With billions of tons of food waste occurring in Canada every year, we knew there needed to be a cost-effective way to reduce food waste that could empower restaurant owners to make more eco-conscious decisions while also incentivizing consumers to choose more environmentally friendly food options.
## What it does
Re-fresh is a two-pronged system that allows users to search for food from restaurants that would otherwise go to waste at a lower price than normal. On the restaurant side, we provide a platform to track and analyze inventory in a way that allows restaurants to better manage their requisitions for produce so that they do not generate any extra waste and they can ensure profits are not being thrown away.
## How we built it
For the backend portion of the app, we utilized CockroachDB with Python and JavaScript, as well as React Native for the user mobile app and the enterprise web application. To protect our user data, we used salted SHA-256 hashing to secure sensitive user information such as usernames and passwords.
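A salted-hash step like this could be sketched with Python's standard library as below. This is a minimal illustration of the idea, not the project's actual scheme; for production password storage a deliberately slow KDF such as bcrypt or scrypt would be preferable to plain SHA-256:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, hex digest) for a salted SHA-256 of the password.
    Sketch only -- prefer a slow KDF (bcrypt/scrypt) in production."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, expected):
    """Constant-time comparison against the stored digest."""
    candidate = hashlib.sha256(salt + password.encode()).hexdigest()
    return hmac.compare_digest(candidate, expected)
```

The salt ensures identical passwords produce different digests, and `hmac.compare_digest` avoids leaking information through comparison timing.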
## Challenges we ran into
CockroachDB was a difficult framework to work with, due to the lack of adequate documentation as well as a plethora of integration issues with React.js and Node. We also ran into some problems on the frontend with using Chart.js to display graphical representations of enterprise data.
## Accomplishments that we're proud of
We are proud of the end design of our mobile app and web application. Our team members are not native web developers, so it was a unique experience stepping out of our comfort zones and trying new frameworks. Overall, we are happy with what we learned and with how we were able to apply our broad understanding of programming principles to create this project.
## What we learned
We learned more about web development than what we knew before. We also learned that despite the design-oriented nature of frontend development there are many technical hurdles to go through when creating a full stack application and that there is a wide array of different frameworks and APIs that are useful in developing web applications.
## What's next for Re-Fresh
The next step for Re-Fresh is restructuring the backend architecture to allow ease of scalability for future development as well as hopefully being able to publish it and attract a customer-base.
|
losing
|
Duet's music generation revolutionizes how we approach music therapy. We capture real-time brainwave data using Emotiv EEG technology, translating it into dynamic, personalized soundscapes live. Our platform, backed by machine learning, classifies emotional states and generates adaptive music that evolves with your mind. We are all intrinsically creative, but some—whether language or developmental barriers—struggle to convey it. We’re not just creating music; we’re using the intersection of art, neuroscience, and technology to let your inner mind shine.
## About the project
**Inspiration**
Duet revolutionizes the way children with developmental disabilities—approximately 1 in 10 in the United States—express their creativity through music by harnessing EEG technology to translate brainwaves into personalized musical experiences.
Daniel and Justin have extensive experience teaching music to children, but working with those who have developmental disabilities presents unique challenges:
1. Identifying and adapting resources for non-verbal and special needs students.
2. Integrating music therapy principles into lessons to foster creativity.
3. Encouraging improvisation to facilitate emotional expression.
4. Navigating the complexities of individual accessibility needs.
Unfortunately, many children are left without the tools they need to communicate and express themselves creatively. That's where Duet comes in. By utilizing EEG technology, we aim to transform the way these children interact with music, giving them a voice and a means to share their feelings.
At Duet, we are committed to making music an inclusive experience for all, ensuring that every child—and anyone who struggles to express themselves—has the opportunity to convey their true creative self!
**What it does:**
1. Wear an EEG
2. Experience your brain waves as music! Focus and relaxation levels will change how fast/exciting vs. slow/relaxing the music is.
**How we built it:**
We started off by experimenting with Emotiv’s EEGs — devices that feed a stream of brain wave activity in real time! After trying it out on ourselves, the CalHacks stuffed bear, and the Ariana Grande cutout in the movie theater, we dove into coding. We built the backend in Python, leveraging the Cortex library that allowed us to communicate with the EEGs. For our database, we decided on SingleStore for its low latency, real-time applications, since our goal was to ultimately be able to process and display the brain wave information live on our frontend.
Traditional live music is done procedurally, with rules manually fixed by the developer to decide what to generate. On the other hand, existing AI music generators often generate sounds through diffusion-like models and pre-set prompts. However, we wanted to take a completely new approach — what if we could have an AI be a live “composer”, where it decided based on the previous few seconds of live emotional data, a list of available instruments it can select to “play”, and what it previously generated to compose the next few seconds of music? This way, we could have live AI music generation (which, to our knowledge, does not exist yet). Powered by Google’s Gemini LLM, we crafted a prompt that would do just that — and it turned out to be not too shabby!
To play our AI-generated scores live, we used Sonic Pi, a Ruby-based library that specializes in live music generation (think DJing in code). We fed this and our brain wave data to a frontend built in Next.js to display the brain waves from the EEG and sound spectrum from our audio that highlight the correlation between them.
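The live "composer" loop described above could be sketched as a prompt-assembly step like the following. The prompt wording, field names, and the shape of the emotion samples are all hypothetical; the project's actual Gemini prompt is not shown in the write-up:

```python
def compose_prompt(recent_emotions, instruments, previous_bars):
    """Assemble a prompt for an LLM 'composer' from the last few seconds of
    emotional data (illustrative format, not Duet's actual prompt)."""
    focus = sum(e["focus"] for e in recent_emotions) / len(recent_emotions)
    relax = sum(e["relaxation"] for e in recent_emotions) / len(recent_emotions)
    return (
        f"You are a live composer. Average focus {focus:.2f}, "
        f"relaxation {relax:.2f} over the last {len(recent_emotions)} samples.\n"
        f"Available instruments: {', '.join(instruments)}.\n"
        f"Previously generated: {previous_bars or 'nothing yet'}.\n"
        "Write the next few seconds of music as a Sonic Pi snippet."
    )
```

Feeding the model its own previous output alongside fresh EEG-derived features is what lets each generated segment continue, rather than restart, the piece.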
**Challenges:**
Our biggest challenge was coming up with a way to generate live music with AI. We originally thought it was impossible and that the tech wasn’t “there” yet — we couldn’t find anything online about it, and even spent hours thinking about how to pivot to another idea that we could use our EEGs with.
However, we eventually pushed through and came up with a completely new method of doing live AI music generation that, to our knowledge, doesn’t exist anywhere else! It was most of our first times working with this type of hardware, and we ran into many issues with getting it to connect properly to our computers — but in the end, we got everything to run smoothly, so it was a huge feat for us to make it all work!
**What’s next for Duet?**
Music therapy is on the rise – and Duet aims to harness this momentum by integrating EEG technology to facilitate emotional expression through music. With a projected growth rate of 15.59% in the music therapy sector, our mission is to empower kids and individuals through personalized musical experiences. We plan to implement our programs in schools across the states, providing students with a unique platform to express their emotions creatively. By partnering with EEG companies, we’ll ensure access to the latest technology, enhancing the therapeutic impact of our programs. Duet gives everyone a voice to express emotions and ideas that transcend words, and we are committed to making this future a reality!
**Built with:**
* Emotiv EEG headset
* SingleStore real-time database
* Python
* Google Gemini
* Sonic Pi (Ruby library)
* Next.js
|
# Links
Youtube: <https://youtu.be/VVfNrY3ot7Y>
Vimeo: <https://vimeo.com/506690155>
# Soundtrack
Emotions and music meet to give a unique listening experience where the songs change to match your mood in real time.
## Inspiration
The last few months haven't been easy for any of us. We're isolated and getting stuck in the same routines. We wanted to build something that would add some excitement and fun back to life, and help people's mental health along the way.
Music is something that universally brings people together and lifts us up, but it's imperfect. We listen to our same favourite songs and it can be hard to find something that fits your mood. You can spend minutes just trying to find a song to listen to.
What if we could simplify the process?
## What it does
Soundtrack changes the music to match people's mood in real time. It introduces them to new songs, automates the song selection process, brings some excitement to people's lives, all in a fun and interactive way.
Music has a powerful effect on our mood. We choose new songs to help steer the user towards being calm or happy, subtly helping their mental health in a relaxed and fun way that people will want to use.
We capture video from the user's webcam, feed it into a model that can predict emotions, generate an appropriate target tag, and use that target tag with Spotify's API to find and play music that fits.
If someone is happy, we play upbeat, "dance-y" music. If they're sad, we play soft instrumental music. If they're angry, we play heavy songs. If they're neutral, we don't change anything.
## How we did it
We used Python with OpenCV and Keras libraries as well as Spotify's API.
1. Authenticate with Spotify and connect to the user's account.
2. Read webcam.
3. Analyze the webcam footage with openCV and a Keras model to recognize the current emotion.
4. If the emotion lasts long enough, send Spotify's search API an appropriate query and add it to the user's queue.
5. Play the next song (with fade out/in).
6. Repeat 2-5.
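Steps 3-4 of the loop above could be sketched as a small mapping from detected emotion to a Spotify search query. The genre terms, frame threshold, and query strings here are assumptions for illustration, not the project's exact values:

```python
# Illustrative mapping from detected emotion to a Spotify search query;
# the genre terms are assumptions, not the project's exact queries.
EMOTION_QUERIES = {
    "happy": "genre:dance upbeat",
    "sad": "genre:instrumental calm",
    "angry": "genre:metal heavy",
}

def next_query(emotion, held_frames, threshold=30):
    """Return a search query only once an emotion has persisted long enough
    (step 4 above); 'neutral' and short-lived emotions change nothing."""
    if emotion == "neutral" or held_frames < threshold:
        return None
    return EMOTION_QUERIES.get(emotion)
```

Requiring the emotion to persist for a number of frames keeps a single misclassified frame from abruptly changing the music.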
For the web app component, we used Flask and tried to use Google Cloud Platform with mixed success. The app can be run locally but we're still working out some bugs with hosting it online.
## Challenges we ran into
We tried to host it in a web app and got it running locally with Flask, but had some problems connecting it with Google Cloud Platform.
Making calls to the Spotify API pauses the video. Reducing the calls to the API helped (faster fade in and out between songs).
We tried to recognize a hand gesture to skip a song, but ran into some trouble combining that with other parts of our project, and finding decent models.
## Accomplishments that we're proud of
* Making a fun app with new tools!
* Connecting different pieces in a unique way.
* We got to try out computer vision in a practical way.
## What we learned
How to use the OpenCV and Keras libraries, and how to use Spotify's API.
## What's next for Soundtrack
* Connecting it fully as a web app so that more people can use it
* Allowing for a wider range of emotions
* User customization
* Gesture support
|
### 💭 Inspiration
We realized the need for an on-demand snow removal service after helping an elderly neighbor clean her steps and driveway. The idea came to us as we saw the struggle and frustration she was having due to the overwhelming amount of snow that hadn't been cleaned which was preventing her from leaving her house.
### 📱 What it does
Our web app provides a streamlined and efficient method for addressing snow removal needs for driveways, sidewalks, steps, cars, and other areas surrounding one's property. It is particularly advantageous for people who simply just don't have time, older adults, those with disabilities, or those incapable of removing snow themselves. Additionally, it serves as a dependable alternative during instances of high demand and snow removal truck delays. The on-demand nature of our service offers convenience, adaptability, and cost-efficiency to our clients, addressing their snow removal needs in a timely manner.
### 🎯 Accomplishments that we’re proud of
In just 24 hours, we managed to build an idea up into a fully functional demo that we're extremely proud of. Despite encountering obstacles and not being able to add every feature we wanted to, we had a blast working together as a team. Our communication and planning skills helped us find solutions when blocked, and we ultimately learned the value of teamwork.
### 🚧 Challenges we ran into
* Adding a payment processing system
* Integrating the Mapbox API
* Designing UI/UX that flows nicely with our idea
### 💡 What we learned
We learned that sleep is important, and we also learned that teamwork is extremely valuable to complete a hackathon within 24 hours. Team problem-solving and task-assigning were the reasons we were able to complete our app.
### 🔮 What’s next for Snowmate
In the future, we plan on improving our Web app by integrating helpful features including weather alerts, which would send notifications to users when snow is forecasted. This will help them to take action in advance and avoid getting stuck in the snow. We would also like to include customizable service options, which would offer a range of service options and the ability for users to customize their service based on their specific needs.
### 🔨 Built with
NodeJS and Typescript for the Backend, React Framework as Frontend with Typescript, hosted on Google Cloud.
|
winning
|
## Inspiration
Americans waste about 425 beverage containers per capita per year in landfill, litter, etc. Bottles are usually replaced with cans and bottles made from virgin materials which are more energy-intensive than recycled materials. This causes emissions of a host of toxics to the air and water and increases greenhouse gas emissions.
The US recycling rate is about 33%, while states that have container deposit laws average a 70% beverage recycling rate. This is a significant difference in the amount of harm we do to the planet.
While some states already have programs for exchanging cans for cash, EarthCents gives other states an incentive to adopt such a program and make it widely available. Eventually, when this software gets accurate enough, much less labor will be needed to make this happen.
## What it does
The web app provides a GUI for the user to capture an image of their item in real time. The EarthCents image recognizer identifies the user's bottles and dispenses physical change through our change dispenser. The web app then displays a success or failure message to the user.
## How we built it
Convolutional Neural Networks were used to scan the image to recognize cans and bottles.
Frontend and Flask presents a UI as well as processes user data.
The Arduino is connected up to the Flask backend and responds with a pair of angle controlled servos to spit out coins.
Change dispenser: The change dispenser is built from a cardboard box with multiple structural layers to keep the Servos in place. The Arduino board is attached to the back and is connected to the Servos by a hole in the cardboard box.
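The decision step between the classifier and the servos could be sketched as below. The labels, reward amounts, confidence threshold, and serial command format are all illustrative assumptions; the real project's wiring between Flask, TensorFlow, and the Arduino is not detailed in the write-up:

```python
def dispense_command(label, confidence, threshold=0.8):
    """Translate a CNN prediction into a serial command for the Arduino.
    Labels, reward values, and the command format are all illustrative."""
    rewards_cents = {"can": 5, "bottle": 10}
    if confidence < threshold or label not in rewards_cents:
        return None, "Item not recognized -- please try again."
    cents = rewards_cents[label]
    return f"SERVO:{cents}", f"Success! Dispensing {cents} cents."
```

The Flask backend would send the command string over the serial connection, and the Arduino sketch would parse it into servo angles.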
## Challenges we ran into
Software: Our biggest challenge was connecting the image file from the HTML page to the Flask backend for processing through a TensorFlow model. Flask was also a challenge, since complex use of it was new to us.
Hardware: Building the cardboard box for the coin dispenser was quite difficult. We also had to adapt the servos to the Arduino so that the coins could be successfully dispensed.
## Accomplishments that we're proud of
With very little tools, we could build with hardware a container for coins, a web app, and artificial intelligence all within 36 hours. This project is also very well rounded (hardware, software, design, web development) and let us learn a lot about connecting everything together.
## What we learned
We learned about Arduino/hardware hacking. We learned about the pros/cons of Flask vs. using something like Node.js. In general, there was a lot of light shed on the connectivity of all the elements in this project. We both had skills here and there, but this project brought it all together. Learned how to better work together and manage our time effectively through the weekend to achieve as much as possible without being too overly ambitious.
## What's next for EarthCents
EarthCents could deposit cryptocurrency, pay out via Venmo, and hold more coins. If this were deployed, we would want to connect it to a scale to ensure that the user actually deposited their can or bottle, and to make recognition more precise.
|
## Inspiration
As university students, we and our peers have found that our garbage and recycling have not been taken by the garbage truck for some unknown reason. They give us papers or stickers with warnings, but these get lost in the wind, chewed up by animals, or destroyed because of the weather. For homeowners or residents, the lack of communication is frustrating because we want our garbage to be taken away and we don't know why it wasn't. For garbage disposal workers, the lack of communication is detrimental because residents do not know what to fix for the next time.
## What it does
This app allows garbage disposal employees to communicate to residents about what was incorrect with how the garbage and recycling are set out on the street. Through a checklist format, employees can select the various wrongs, which are then compiled into an email and sent to the house's residents.
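The checklist-to-email step could be sketched as a small compile function like the one below. The wording and fields are illustrative assumptions about the app's notice format:

```python
def build_notice(address, issues):
    """Compile the employee's checklist selections into the email body
    sent to the household (field wording is illustrative)."""
    lines = [f"Your waste at {address} was not collected for these reasons:"]
    lines += [f"- {issue}" for issue in issues]
    lines.append("Please correct these before the next collection day.")
    return "\n".join(lines)
```

The resulting body would then be handed to whatever mail-sending service the app uses, addressed to the residents on file for that house.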
## How we built it
The team built this by using a Python package called **Kivy** that allowed us to create a GUI interface that can then be packaged into an iOS or Android app.
## Challenges we ran into
The greatest challenge we faced was the learning curve when we began coding the app. None of the team members had ever worked on creating an app, or with back-end and front-end coding. However, it was an excellent day of learning.
## Accomplishments that we're proud of
The team is proud of having a working user interface to present. We are also proud of our intuitive and aesthetic UI/UX design.
## What we learned
We learned skills in front-end and back-end coding. We also furthered our skills in Python by using a new library, Kivy. We gained skills in teamwork and collaboration.
## What's next for Waste Notify
Further steps for Waste Notify would likely involve collecting data from Utilities Kingston and the city. It would also require more back-end coding to set up these databases and ensure that data is secure. Our target area was University District in Kingston, however, a further application of this could be expanding the geographical location. However, the biggest next step is adding a few APIs for weather, maps and schedule.
|
## Inspiration
We were inspired to build this project after accumulating trash in our workspace during the first few hours of this hackathon.
## What it does
We developed an app with which you can upload or take a photo of any disposable item, and it will tell you what kind of disposable it is and what action you should take to dispose of it.
## How we built it
We built this app using React Native for the frontend, Python Flask for the backend and we made use of a Hugging Face waste image classification model to process our image data.
Model link: <https://huggingface.co/watersplash/waste-classification>
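The step from the model's prediction to the advice shown to the user could be sketched as below. The labels and disposal actions are assumptions about the classifier's outputs, not the model's documented label set; the input is shaped like typical Hugging Face pipeline results:

```python
# Mapping from the classifier's predicted label to the disposal advice shown
# to the user; labels and actions here are illustrative assumptions.
DISPOSAL_ACTIONS = {
    "cardboard": "Flatten and place in recycling.",
    "plastic": "Rinse and place in recycling.",
    "battery": "Take to a battery disposal site.",
    "organic": "Place in compost.",
}

def disposal_advice(predictions):
    """Pick the top-scoring label from a pipeline-style list of
    {'label': ..., 'score': ...} dicts and return matching advice."""
    top = max(predictions, key=lambda p: p["score"])
    return top["label"], DISPOSAL_ACTIONS.get(top["label"], "Place in general waste.")
```

The Flask backend would run the model on the uploaded photo and return the label and advice as JSON for the React Native frontend to display.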
## Challenges we ran into
We ran into trouble when actually trying to use the Hugging Face image classification model, as we were unfamiliar with Hugging Face technologies and the Transformers Python library it relied on. Our main issue took us a long time to identify: the dependencies and other Python libraries needed to import the Transformers module were rather specific. Additionally, we had a lot of difficulty trying to connect our Python Flask backend to our React Native frontend, and were at first unable to transfer our image data correctly between the two.
## Accomplishments that we're proud of
As we had many challenges, we are proud of the ways we overcame these challenges in the given time constraint. We overcame our challenge of being unable to use the Transformers library by using a Python virtual environment for our project which allowed us to install the specific Python version and libraries needed for this technology. Then, we overcame the challenge of moving data between our frontend and backend by learning more about Flask and React Native - technologies which were rather new for us - and we were eventually led to a library called Axios which we used to take care of this issue.
## What we learned
We learned about Python virtual environments, how to use AI models from the Hugging Face Transformers Python library, and how to use React Native and Python Flask and connect the two.
## What's next for Trash Ninja
For further improvement of our app, we could use Google Maps or some other Map API to provide users with the option to view their nearest trash emptying, clothing donation or battery disposal sites.
|
winning
|
## Lejr
**Introduction**
A web application that allows you to track how much money your friends owe you, and after your friend accepts your request of paying you back, the app will directly deposit the money into your bank account.
**How we did it**
We built the website using Interac's Public API and MongoDB hosted by MLab; the website is hosted on Heroku. Our Node.js/Express backend is also acting as a REST API for our Android application.
**Inspiration**
Our friends keep forgetting to pay us back, and we're uncomfortable with pestering them so we thought of the idea to make payment requests simple and quick by using the Interac API along with a Node.js backend.
|
## Inspiration
Have you ever gone on a nice dinner out with friends, only to find that the group is too big for your server to split the bills according to each person's order? Someone inevitably decides to pay for the whole group and asks everyone to pay them back afterwards, but this doesn't always happen right away. When people forget to pay their friends back, it becomes somewhat awkward to bring up...
Enter Encountability, our cash transfer app!
## What it does
Encountability was created as an alternative to current cash transfer mechanisms, such as Interac e-transfer, that are somewhat clunky at best and inconvenient at worst - it sucks when the e-transfers don't arrive immediately and you and the person you're buying stuff from on Facebook Marketplace have to stand there awkwardly shuffling your feet and praying that the autodeposit email arrives soon. You can add friends to the app and send them cash (or request cash of your own) just by navigating to their profile on the app and sending a message in seconds! The app also reminds you of money you owe to any friends you might have on the app, ensuring that you don't forget to pay them back (especially if they shouldered everyone's bill last time you went out) and you spare them the awkwardness of having to remind you that you owe them some cash.
## How we built it
We built the backend in Python and Flask, and used CockroachDB for the database. The RBC Money Transfer API was also used for the project. For the frontend, a combination of HTML, CSS, and Javascript was used.
## Challenges we ran into
The name Encountability is a portmanteau of "encounter" and "accountability"; this was because we originally envisioned an RPG-style app where dinner bills that needed to be split could be treated like boss monsters and "defeated" by gathering a party of your friends and splitting the bill amongst yourselves easily. Time constraints were in full force this weekend, and we had to cut down on some of our more ambitious planned features after it became evident that there would not be enough time to accomplish everything we wanted. There were some difficulties with learning the techniques and tools necessary to integrate frontend and backend as well, but we pushed through and created something functional in the end!
## Accomplishments that we're proud of
Despite the hurdles and the compromises (and the time constraints... and the steep learning curve...) we were able to create something functional, with a prototype that shows how we envision the app to work and look!
## What we learned
* databases can be fiddly, but when they work, it's a beautiful thing!
* Sanity Walks™ are an essential part of the hackathon experience
* so are 30-min naps
## What's next for encountability
We'd like to connect it to bank accounts directly next time, just like we originally intended! It would also be nice to fully implement the automatic transaction-splitting feature of the app next time, as well as the more social aspects of the app.
|
## Inspiration
When struggling to learn HTML and basic web development, we found that the tools provided by browsers like Google Chrome were hidden, making it hard to learn of their existence. As avid gamers, we thought it would be a great idea to create a game involving the inspect element tool provided by browsers, so that more people could learn of this nifty feature and start their own hacks.
## What it does
The project is a series of small puzzle games that are reliant on the user to modify the webpage DOM to be able to complete. When the user reaches the objective, they are automatically redirected to the next puzzle to solve.
## How we built it
We used a game engine called craftyjs to run the game as DOM elements. These elements could be deleted and an event would be triggered so that we could handle any DOM changes.
## Challenges we ran into
Catching DOM changes from inspect element is incredibly difficult. CraftyJS is at version 0.7.1 and not yet released, so some built-ins, e.g. collision detection, are not fully supported. We also had to handle various events, such as adding and deleting elements, instead of recursively creating a ton of things.
## Accomplishments that we're proud of
EVERYTHING
## What we learned
Javascript was not designed to run as a game engine with DOM elements, and modifying anything has been a struggle. We learned that canvases are black boxes and are impossible to interact with through DOM manipulation.
## What's next for We haven't thought that far yet
You give us too much credit.
But we have thought that far. We would love to do more with the inspect element tool, and in the future, if we could get support from one of the major browsers, we would love to add more puzzles based on tools provided by the inspect element option.
|
partial
|
## Inspiration
The time we spend on computers is increasing exponentially, and we have stored much personal and sensitive information online for convenience. However, most of the companies and agencies that we trust to handle and store our data are very prone to data breaches. Blockchain was created to eliminate centralized trust and provide us with a brand new way of social interaction. The participants in a blockchain do not have to trust each other; they only have to trust the math behind it.
In Hashy, we use revolutionary ZK technology to protect the user's identity and privacy. Without revealing any sensitive information about our users, parties can reach an agreement.
## What it does
Hashy asks the verifier for what kind of information they want to verify. We then ask the user to input their data such as ID and age for verification. The whole verification process takes place in our ZK circuit, which means no party can see what the user has inputted. In the end, the verifier only needs to know whether the user has passed the condition that they set. We store the hash result of the user's input into our merkle tree for future verification.
## How we built it
We used Plonky2 to create our ZK circuit and used JavaScript to build the Merkle tree that stores and verifies the data from our users.
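The Merkle tree side of this can be sketched in a few lines. This is a common construction written in Python for illustration (the project's tree is in JavaScript), and details such as padding odd levels by duplicating the last node may differ from Hashy's implementation:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root over hashed leaves. A standard construction
    for illustration; Hashy's JS tree may differ in padding details."""
    level = [_h(leaf.encode()) for leaf in leaves]
    if not level:
        return _h(b"")
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Because only hashes are stored in the tree, the verifier can later check membership of a user's committed input without ever seeing the input itself.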
## Challenges we ran into
We didn't know how to write a ZK circuit before we came, so we had to teach ourselves how to write one using Plonky2. We also had to find a working Merkle tree implementation.
## Accomplishments that we're proud of
We are proud that we have a working prototype and a working ZK circuit!
## What we learned
This was our first time attending a hackathon, and we finally learned how to apply what we've learned to creating a new and exciting project.
## What's next for Hashy
Add more functionalities so that there are more general use cases.
|
## Inspiration
As more legal COVID-19 vaccines are produced, more fraudulent vaccines are being sold to unknowing customers. In February 2021, three men in Baltimore were arrested for posing as a Moderna website and trying to sell fraudulent vaccines.
The problem is relevant globally rather than just in the United States and even Europe has been seizing false vaccines.
## What it does
MedBlock uses blockchain technology to verify that the location you are getting vaccinated at legally acquired the vaccination dosage and is not a counterfeit drug. Users are easily able to make this validation by providing the name of the vaccine and the company name of the hospital/organization administering the vaccination. In addition, our application also provides users with information on other vaccinations.
## How we built it
We used Rust to build our blockchain so we could have both speed and safety. We also used Rocket to create a REST API so we can expose information on the blocks that are on the blockchain.
We pair our backend service with a React Native application so users can verify their vaccines through an app on their phones.
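The tamper-evidence property the chain relies on can be sketched as below. The block fields are illustrative assumptions, and the sketch is in Python for readability while the project's chain is in Rust:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's contents (illustrative schema)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def chain_valid(chain):
    """Check each block points at the previous block's hash -- the property
    that lets a user trust a recorded vaccine shipment wasn't altered."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True
```

Altering any recorded block changes its hash, which breaks the `prev_hash` link in the block after it, so counterfeit entries cannot be slipped into the history unnoticed.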
## Challenges we ran into
Our biggest challenge was applying blockchain technology to our project, since we are all new to the blockchain field. After conducting our research, we also ran into some challenges using Rust, since it is a programming language we are not too familiar with.
## Accomplishments that we're proud of
We are most proud of the application we have built and the impact we believe it can make on the world.
## What we learned
Throughout this hackathon we learned how powerful blockchain is and how great of a language Rust is!
## What's next for MedBlock
Due to the time constraints of this weekend, our implementation was centered around vaccines but with more time we would like to implement it for all types of pharmaceuticals in order to diminish the presence of counterfeit medicine internationally!
|
## Inspiration
There are 1.1 billion people without an official identity (ID). Without this proof of identity, they can't get access to basic financial and medical services, and often face many human rights offences due to the lack of accountability.
The concept of a Digital Identity is extremely powerful.
In Estonia, for example, everyone has a digital identity, a solution developed in tight cooperation between public and private sector organizations.
Digital identities are also the foundation of our future, enabling:
* P2P Lending
* Fractional Home Ownership
* Selling Energy Back to the Grid
* Fan Sharing Revenue
* Monetizing data
* bringing the unbanked, banked.
## What it does
Our project starts by getting the user to take a photo of themselves. Using Node.js and AWS Rekognition, we perform facial recognition to allow the user to log in or create their own digital identity. Through the use of both S3 and Firebase, that information is passed to both our dashboard and our blockchain network!
It is stored on the Ethereum blockchain, enabling one source of truth that corrupt governments nor hackers can edit.
From there, users can get access to a bank account.
## How we built it
Front End: | HTML | CSS | JS
APIs: AWS Rekognition | AWS S3 | Firebase
Back End: Node JS | mvn
Crypto: Ethereum
## Challenges we ran into
Connecting the front end to the back end!!!! We had many different databases and components. There are also a lot of access issues with the APIs, which makes it incredibly hard to do things on the client side.
## Accomplishments that we're proud of
Building an application that can better the lives of people!!
## What we learned
Blockchain, facial verification using AWS, databases
## What's next for CredID
Expand on our idea.
|
losing
|
## Inspiration
Our inspiration for developing the Cov-Misinfo Tracker was twofold: the first being the lack of resources related to determining root causes of health-related misinformation that we see online, which is particularly evident with COVID-19, and the second was our own drive to educate people about the pandemic.
Fake news and other forms of misinformation can be found everywhere on social media, but how much of it is posted in earnestness? Do the people posting (and re-posting) this misinformation do so innocently believing it to be true, or are they responsible for its creation? Are they trolling, or are they woefully misinformed? If they are misinformed, how could we best help to combat this misinformation?
Google Cloud's Natural Language Processing API, in concert with Twint, a twitter scraping tool, and Firebase, a webapp development platform, gave us the opportunity to find out the answers to these questions ourselves.
## What it does
The Cov-Misinfo Tracker identifies tweets that potentially carry misinformation about COVID-19 by keying in on the queries associated with it, analyzes the tweets to determine key words typically used in non-factual COVID-19 statements, and categorizes them based on how often they occur in non-factual statements. Using Twint, the app searches for tweets containing popular covid conspiracy theory phrases, such as 'plandemic' and 'masks off america', which gave us a seemingly endless supply of covid misinformation. Firebase's Firestore allowed us to quickly and easily store these tweets, and their associated metadata, for later processing and review. The Google Cloud NLP API gave us the ability to assess the veracity of the tweets we scraped, as well as identify which words showed up most often in genuine fake covid news.
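To illustrate the flagging-and-categorizing step, here is a minimal sketch in Python. The phrase list and matching rule are illustrative assumptions, not the project's exact queries:

```python
# Hypothetical phrases associated with covid misinformation; the real
# tracker uses the queries mentioned above plus NLP-derived key words.
MISINFO_PHRASES = ["plandemic", "masks off america", "hoax", "scamdemic"]

def flag_tweet(text):
    """Return the list of suspicious phrases found in a tweet."""
    lowered = text.lower()
    return [p for p in MISINFO_PHRASES if p in lowered]

def categorize(tweets):
    """Count how often each phrase occurs across a batch of tweets."""
    counts = {}
    for tweet in tweets:
        for phrase in flag_tweet(tweet):
            counts[phrase] = counts.get(phrase, 0) + 1
    return counts
```

In the real pipeline, tweets that match would then be stored in Firestore with their metadata and passed to the NLP API for deeper analysis.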
## How our team built it
We built this in Python with various other APIs. Specifically, we used the Twint tool to query and extract specific tweets, the Google Cloud NLP API to analyze and extract key phrases that correspond to misinformation, and the Google Cloud Firebase API to store the analyzed information to later be displayed to the user.
## Challenges we ran into
The two greatest challenges encountered as a team during the hacking phase were issues with API initialization and difficulties in integrating the disparate parts. Individually, we each encountered difficulties with environment setup for the APIs we had chosen to work with.
## Accomplishments that we're proud of
We're happy to say that we were able to get our individual tools working, and were able to integrate them with relative success. While it's not an automatic transition between the various API's and tools, it's pretty close! The Twint output and natural language processing work together smoothly and export to the Firebase, which would then be displayed back to the user.
## What's next for the Cov-Misinfo Tracker
We think there is a definite need for an application like the Cov-Misinfo Tracker. Misinformation isn't just limited to the COVID-19 pandemic and when the next pandemic hits we'd like to have a functioning system in place to provide an ease of mind as well as deliver correct facts to people. Additionally other industries may benefit from using a dedicated misinformation system similar to ours.
Going forward we plan to completely automate most functions of the Cov-Misinfo Tracker, complete the integration process with Google Cloud services, and expand the abilities of the webapp.
Check out the GITHUB and youtube demo!
Team members:
Aziz Uddin,
Adithya Shankar,
Ryan Walsh
|
## Inspiration
Social media has been shown in studies that it thrives on emotional and moral content, particularly angry in nature. In similar studies, these types of posts have shown to have effects on people's well-being, mental health, and view of the world. We wanted to let people take control of their feed and gain insight into the potentially toxic accounts on their social media feed, so they can ultimately decide what to keep and what to remove while putting their mental well-being first. We want to make social media a place for knowledge and positivity, without the anger and hate that it can fuel.
## What it does
The app performs an analysis on all the Twitter accounts the user follows and reads the tweets, checking for negative language and tone. Using machine learning algorithms, the app can detect negative and potentially toxic tweets and accounts to warn users of the potential impact, while giving them the option to act how they see fit with this new information. In creating this app and its purpose, the goal is to **put the user first** and empower them with data.
## How We Built It
We wanted to make this application as accessible as possible, and in doing so, we made it with React Native so both iOS and Android users can use it. We used Twitter OAuth to be able to access who they follow and their tweets while **never storing their token** for privacy and security reasons.
The app sends the request to our web server, written in Kotlin, hosted on Google App Engine where it uses Twitter's API and Google's Machine Learning APIs to perform the analysis and power back the data to the client. By using a multi-threaded approach for the tasks, we streamlined the workflow and reduced response time by **700%**, now being able to manage an order of magnitude more data. On top of that, we integrated GitHub Actions into our project, and, for a hackathon mind you, we have a *full continuous deployment* setup from our IDE to Google Cloud Platform.
## Challenges we ran into
* While library and API integration was no problem in Kotlin, we had to find workarounds for issues regarding GCP deployment and local testing with Google's APIs
* Since being cross-platform was our priority, we had issues integrating OAuth with its requirement for platform access (specifically for callbacks).
* If each tweet was sent individually to Google's ML API, each user could have easily required over 1000+ requests, overreaching our limit. Using our technique to package the tweets together, even though it is unsupported, we were able to reduce those requests to a maximum of 200, well below our limits.
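The packaging trick above can be sketched simply: join many tweets into one payload per request so a user's whole timeline needs far fewer API calls. The separator and batch size here are illustrative assumptions, not pHeed's actual values:

```python
# Sketch of batching tweets to cut down API requests. A user with 1000+
# tweets analyzed one-by-one would need 1000+ calls; batched at 50 per
# request (an assumed size), they need only 20.
def batch_tweets(tweets, max_per_request=50, separator="\n---\n"):
    """Join tweets into payloads so each API call analyzes many at once."""
    batches = []
    for i in range(0, len(tweets), max_per_request):
        batches.append(separator.join(tweets[i:i + max_per_request]))
    return batches
```

Each batch would then be sent as a single document to the ML API, with the separator used to split the per-tweet results back out.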
## What's next for pHeed
pHeed has a long journey ahead: from integrating with more social media platforms to new features such as account toxicity tracking and account suggestions. The social media space is one that is rapidly growing and needs a user-first approach to keep it sustainable; ultimately, pHeed can play a strong role in user empowerment and social good.
|
## Inspiration
Small scale braille printers cost between $1800 and $5000. We think that this is too much money to spend for simple communication and it has acted as a barrier for many blind people for a long time. We plan to change this by offering a quick, affordable, precise solution to this problem.
## What it does
This machine allows you to type a string (word) on a keyboard. The Raspberry Pi then identifies what was entered and controls the solenoids and servo to pierce the paper. The solenoids do the "printing" while the servo moves the paper.
A close-up video of the solenoids running: <https://www.youtube.com/watch?v=-jSG96Br3b4>
## How we built it
Using a Raspberry Pi B+, we created a Python script that recognizes all keyboard characters (inputted as a string) and outputs the corresponding Braille code. The Raspberry Pi is connected to 4 circuits with transistors, diodes and solenoids/servo motor. These circuits control how the paper is punctured (printed) and moved.
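The character-to-Braille step can be sketched as a lookup table. Only a handful of letters are mapped here for illustration (the real script covers the full alphabet); dots are numbered 1-3 down the left column of a Braille cell and 4-6 down the right:

```python
# Partial Grade 1 Braille dot patterns (standard cell numbering).
BRAILLE_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5),
    "e": (1, 5), "h": (1, 2, 5), "i": (2, 4), "t": (2, 3, 4, 5),
}

def to_braille(word):
    """Translate a word into a list of dot tuples, one per character,
    which the Pi then maps to solenoid firings and servo advances."""
    return [BRAILLE_DOTS[ch] for ch in word.lower()]
```

Each tuple tells the Pi which solenoids to fire before the servo advances the paper to the next cell.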
The hardware we used was: 4x 1n4004 diodes, 3 ROB-11015 solenoids, 4 TIP102 transistors, a Raspberry Pi B+, Solarbotic's GM4 servo motor, its wheel attachment, a cork board, and a bunch of Lego.
## Challenges we ran into
The project initially had many hardware/physical problems which caused errors while trying to print braille. The solenoids were required to be in a specific place in order for it to pierce paper. If the angle was incorrect, the pins would break off or the paper stuck to them. We also found that the paper would jam if there were no paper guards to hold the paper down.
## Accomplishments that we are proud of
We are proud of being able to integrate hardware and software into our project. Despite being unfamiliar with any of the technologies, we were able to learn quickly and create a fun project that will make a difference in the world.
## What we learned
None of us had any knowledge of python, raspberry pi, or how solenoids functioned. Now that we have done this project, we are much more comfortable in working with these things.
## What's next for Braille Printer
We were only able to get one servo motor which meant we could only move paper in one direction. We would like to use another servo in the future to be able to print across a whole page.
|
losing
|
## Inspiration
Ecoleafy was inspired by the desire to create a sustainable and energy-efficient future. With the increasing need for smart homes, the team saw an opportunity to build a system that could help households reduce and be more aware of their carbon footprint and save on energy costs. The members of our team, being all in Computer Engineering and wanting to specialize in the Hardware field, we were particularly motivated to take on this challenge.
## What it does
Ecoleafy is a comprehensive IOT ecosystem that includes smart nodes for each room in the house. These nodes contain sensors such as presence sensors, humidity sensors, and temperature sensors, as well as switches and relays to control various devices such as the AC, heater, lights, and more. The system connects to a central back-end server that manages each node. This back-end server can automatically make decisions, such as shutting off the lights or the heater if no activity has been recorded in the room for more than 30 minutes.
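The inactivity rule described above might look something like this sketch; the 30-minute threshold comes from the text, but the device names and exact logic are assumptions:

```python
# Sketch of the back-end's auto-shutoff decision for one room.
INACTIVITY_LIMIT_S = 30 * 60  # 30 minutes, per the rule above

def devices_to_shut_off(last_presence_s, now_s, active_devices):
    """Return which of a room's active devices the server should turn
    off, given the timestamp of the last recorded presence."""
    if now_s - last_presence_s >= INACTIVITY_LIMIT_S:
        # Only power-hungry devices are cut; names are illustrative.
        return [d for d in active_devices if d in ("lights", "heater", "ac")]
    return []
```

In the real system this decision would be pushed to the room's ESP32 node over WebSockets to toggle the relevant relays.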
## How we built it
We built Ecoleafy as a comprehensive IOT ecosystem that includes smart nodes for each room in the house, connected to a central back-end server using Java, Spring, WebSockets, and MongoDB. The frontend was built using Flutter and Dart, providing a seamless user experience on web, mobile or desktop. The hardware nodes, which include presence sensors, humidity sensors, temperature sensors, switches, and relays, were developed using ESP32 microcontrollers and coded in C++. We aimed to beat the price of a Google Nest Learning Thermostat ($329 CAD) and its non-learning version ($179 CAD). Our final price per node is estimated at $11.60 CAD before adding an enclosure (which could be 3D printed). This includes the MCU (ESP32), LCD screen, 4 relays, and presence/motion, temperature and humidity sensors. You also need one hub to host the server, which could be a Raspberry Pi ($45 CAD); the alternative would be a cloud subscription service.
## Challenges we ran into
One of the biggest challenges we faced was ensuring that the system would be plug-n-play if you wanted to add new nodes to your network. The other biggest challenge was the front-end development since most of us had no front-end dev experience prior to this project. We also had to ensure that the system was scalable and could handle a large number of connected devices.
## Accomplishments that we're proud of
We are proud of this being our first Hackathon devpost. The seamless integration of hardware and software components is another achievement that we're proud of.
## What we learned
We learned that building an IOT ecosystem is a complex and challenging process, but with the right team, it is possible to create a functional and innovative system. We also gained valuable experience in developing secure and scalable systems, as well as in using various technologies such as Java, Spring, Flutter, and C++. We also now understand the importance of having a front-end developer.
## What's next for Ecoleafy
We will be seeking ways to improve Ecoleafy and make it even more effective in reducing energy consumption. In the future, we could plan to expand the system to include new features and functionalities, such as integration with other smart home devices and **machine learning algorithms for more efficient energy management**. We are also exploring ways to make Ecoleafy more accessible to a wider audience with its low cost of entry.
|
## Inspiration
We wanted to make energy monitoring accessible and simple and allow the user to develop greater total control over their energy consumption.
## What it does
We have built a cloud-enabled meter that measures energy consumption per appliance. It allows users to see exactly how much energy each appliance uses.
## How we built it
We have built the meter using Raspberry Pi and two Arduinos. Then we built a web app using express to demonstrate the data gathered from the meter. By using the web app, users can monitor their energy consumption in real-time, pay for their energy, take advantage of our state-of-the-art energy usage optimizer.
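The per-appliance monitoring the web app displays boils down to integrating power samples over time. A minimal sketch, assuming the meter reports watt samples at a fixed interval (the interval and appliance names are illustrative):

```python
# Convert per-appliance power samples (watts) into energy (kWh).
def energy_kwh(samples_w, interval_s):
    """Integrate power samples taken every interval_s seconds."""
    joules = sum(samples_w) * interval_s
    return joules / 3_600_000  # 1 kWh = 3.6e6 J

def usage_by_appliance(readings, interval_s=60):
    """Map appliance name -> kWh from its list of power samples."""
    return {name: energy_kwh(ws, interval_s) for name, ws in readings.items()}
```

The express web app would then chart these per-appliance totals in real time and feed them into the billing and optimizer features.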
## Challenges we ran into
* Making graphs display data in real-time
* Building a secure payment system
* Building energy usage optimizer
## Accomplishments that we're proud of
Everything works
Building a scalable application
## What we learned
Using Raspberry Pi's wifi is not very optimal for real-time stats
## What's next for Electro
* Implement machine learning into energy usage organizer
* Store payment history in a blockchain
|
## Inspiration
It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy, store, and collect dust inspired us to develop LendIt, a product that aims to stem the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with RaspberryPi3 (64bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses Facial Recognition powered by OpenCV and object detection with Google's Cloud Vision API.
For our App, we've used Flutter/ Dart and interfaced with Firebase. To ensure *trust* - which is core to borrowing and lending, we've experimented with Ripple's API to create an Escrow system.
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/ Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our App Developer previously worked with only Web Apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system. Therefore writing Queries for our Read Operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resolve to unconventional methods to utilize its capabilities with an internet that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like raspberry pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users, is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy.
|
losing
|
## 💡 Inspiration
Manga are Japanese comics, considered to form a genre unique from other graphic novels. Similar to other comics, it lacks a musical component. However, their digital counterparts (such as sites like Webtoons) have innovated on their take on the traditional format with the addition of soundtracks, playing concurrently with the reader's progression through the comic. It can create an immersive experience for the reader building the emotion on screen. While Webtoon’s take on incorporating music is not mainstream, we believe there is potential in building on the concept and making it mainstream in online manga. Imagine how cool it would be to generate a soundtrack to the story unfolding. Who doesn't enjoy personalized music while reading?
## 💻 What it does
1. Users choose a manga chapter to read (in our prototype, we're using just one page).
2. Sentiment analysis is performed on the dialogue of the manga.
3. The resulting sentiment is used to determine what kind of music is fed into the song-generating model.
4. A new song will be created and played while the user reads the manga.
## 🔨 How we built it
* Started with brainstorming
* Planned and devised a plan for implementation
* Divided tasks
* Implemented the development of the project using the following tools
*Tech Stack* : Tensorflow, Google Cloud (Cloud Storage, Vertex AI), Node.js
Registered Domain name : **mangajam.tech**
## ❓Challenges we ran into
* None of us knew machine learning at the level that this project demanded of us.
* Timezone differences and the complexity of the project
## 🥇 Accomplishments that we're proud of
The teamwork of course!! We are a team of four coming from three different timezones, this was the first hackathon for one of us and the enthusiasm and coordination and support were definitely unique and spirited. This was a very ambitious project but we did our best to create a prototype proof of concept. We really enjoyed learning new technologies.
## 📖 What we learned
* Using TensorFlow for sound generation
* Planning and organization
* Time management
* Performing Sentiment analysis using Node.js
## 🚀 What's next for Magenta
Oh tons!! We have many things planned for Magenta in the future.
* Ideally, we would also do image recognition on the manga scenes to help determine sentiment, but it's hard to actualize because of varying art styles and genres.
* To add more sentiments
* To deploy the website so everyone can try it out
* To develop a collection of Manga along with the generated soundtrack
|
## Inspiration
Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like Dall-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves.
## What it does
A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which key words are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform.
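The brainwave-to-keywords step might be sketched as below. The frequency bands, thresholds, mood labels, and keywords are all illustrative assumptions, not the project's actual classifier:

```python
# Hypothetical mapping from EEG band powers to prompt keywords.
MOOD_KEYWORDS = {
    "calm": ["serene lake", "soft pastels"],
    "focused": ["geometric shapes", "sharp lines"],
    "stressed": ["stormy sky", "deep reds"],
}

def classify_mood(alpha, beta):
    """Crude rule of thumb: high alpha ~ relaxed, high beta ~ alert."""
    if alpha > beta * 1.5:
        return "calm"
    if beta > alpha * 1.5:
        return "stressed"
    return "focused"

def prompt_for(alpha, beta):
    """Build a comma-separated prompt fragment for the image model."""
    return ", ".join(MOOD_KEYWORDS[classify_mood(alpha, beta)])
```

The resulting string would be appended to a base prompt before being sent to Stable Diffusion.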
## How we built it
We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves.
## Challenges we ran into
We faced a series of challenges throughout the Hackathon, which is perhaps essential to any hackathon. Initially, we struggled with setting up the electrodes on the BCI to ensure that they were receptive enough, as well as with working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities.
## Accomplishments that we're proud of
We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution.
## What we learned
Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, Twitter API and OAuth2.0.
## What's next for BrAInstorm
We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
|
## Inspiration
## What it does
The SmartBox Delivery Service allows independent couriers and their customers to easily deliver and securely receive packages. The delivery person starts by scanning the package, and the system opens and reserves a SmartBox for them, for every package they scan. When they deliver the package in the SmartBox, an indicator shows its presence, and they can mark it as delivered to lock the box. The customer can then scan the SmartBox to unlock it and retrieve their package. Amazon Alexa is also used to notify the customer of different events throughout the delivery.
## How we built it
Our system is comprised of 3 main components: the client application, the backend service, and the smart boxes themselves. We built it by leveraging Solace as a messaging service to enable instant communication between each component. This allows for instant notifications to the Amazon Echo, and perfect feedback when reserving, locking, and unlocking the SmartBox.
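The publish/subscribe pattern that Solace provides here can be sketched with a tiny in-memory broker; the topic naming is an illustrative assumption, and a real broker like Solace adds networking, queues, and delivery guarantees:

```python
# Minimal in-memory pub/sub broker sketch (what Solace does over the
# network for the SmartBox components).
class Broker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

broker = Broker()
events = []
# e.g. the Alexa notifier subscribes to delivery events for box 1
broker.subscribe("smartbox/1/delivered", events.append)
# e.g. the box publishes when the courier marks the package delivered
broker.publish("smartbox/1/delivered", {"box": 1, "status": "locked"})
```

Each component (client app, boxes, Alexa notifier) plays the role of a publisher or subscriber on such topics, which is what makes the feedback feel instant.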
## Challenges we ran into
We faced several challenges during development. The first was that the node.js client for Solace was difficult to use. After spending several hours trying to use it, we had to switch to another library and so, lots of time was lost. The second major challenge was that we first tried to use React Native to build our client application. We found it to be very tedious to setup and use as well. More specifically, we spent a lot of time trying to implement NFC in our app, but React Native did not seem to have any reasonable way of reading the tags. We therefore had to make a web-app instead, and switch to QR codes.
## Accomplishments that we're proud of
We're proud of building a fully working hack that includes a good portion of hardware, which was difficult to get right. We're also happy we managed to leverage Solace, as IoT is a perfect application of their software, and empowers our hack.
## What we learned
We learned about Solace and Publisher/Subscriber architecture, as well as making reactive hardware technology using arduino.
|
winning
|
## Inspiration
I was inspired by popular sites like couchsurfing.com that let people traveling Europe stay with hosts. In this case, rather than a service with monetary compensation, the service is provided for those who have been affected by natural disasters. The guests may wish to tip as much as they want and are able to.
California was recently struck by several wildfires that left many without their homes. These people had to find family or friends to stay with or test their luck with last minute hotel rooms. I wanted to make their lives easier by having a one-stop website that finds them a place to stay while they figure out their next steps.
## What it does
This website connects those who have been affected by natural disasters to hosts that volunteer to take them in. The main page describes the roles of hosts and guests. The responsibilities of both parties are listed on the next page. There is a page for guests to input their needs and a page for hosts to match their services to the accommodations required by a guest.
The website is very simple in design to allow people who are in urgent need of safety and shelter to get straight to finding shelter. The object of this website is not to do business, to attract or distract. Its sole purpose is to provide accommodations for those who need to find safety and shelter quickly.
## How we built it
I used WIX to create and design a website. Then I used Wix Code to create a database that connects to the form on the website. I then called upon entries in the database and displayed them on the page meant for hosts to find guests.
## Challenges we ran into
I was not sure how to match the guests and hosts and decided that private emails would be the best option. I left the finalization process to the hosts and their chosen guests to allow them to keep their details private.
Additionally, I struggled to connect the database entries to the find-a-guest page. I wanted to access all connected entries (entries in one row for one guest request) and iterate through sets of entries. It took a while to access these entries in groups and display them on separate elements of the website. Eventually, I began to understand the Wix Code feature and was able to attain this functionality.
## Accomplishments that we're proud of
I was able to create a database that stores the guest requests and allows hosts to easily access each database entry in a pleasing format.
## What we learned
I learned how to create a database and access connected entries on another page of the website. I also learned how to use the WIX platform for creating websites and integrating written code and APIs into it.
## What's next for Home Away From Home
I would like to turn this website into a nonprofit startup. I believe that in places with high risk of fires, earthquakes, hurricanes and other natural disasters, such a service would be very useful.
I would like to enhance the email feature for the website to include a pop-up that allows hosts to email potential guests straight from the website. Additionally, I want to incorporate a notifications system that sends guests texts when hosts view their profiles and notifies hosts when new guest profiles are added to the website. Currently, there is an email subscription service available for hosts only.
I would also like to allow guests to be able to delete their own profiles once their accommodation needs have been met.
|
## Inspiration
A member of our team was directly affected by the wildfires in California last quarter. Having experienced firsthand this extremely stressful situation, he acknowledged the need for an easy way for people fleeing large-scale natural disasters to find safe havens.
## What it does
Our app aims to connect people in need of a haven with the countless everyday people who can provide it. Though we first thought of this project in the context of fleeing Californian wildfires, there are many more potential applications for this project, including people needing a place to stay when it is dangerously cold outside (e.g., during the polar vortexes that have swept the country in some years), people needing a place to stay because their neighborhood is in the middle of an armed conflict, and recent immigrants needing a place to stay to get their feet on the ground.
## How I built it
We used Ionic Devapp, a platform which allows coders to create apps using web development languages. We used TypeScript, SCSS, and HTML. We also used the Google Maps API to support the maps features.
## Challenges we ran into
Challenges we ran into included git, version control, ionic syntax, and how to write to files.
## Accomplishments that we're proud of
Given that we technically started "hacking" late afternoon on Saturday, we are quite proud of how far we came! Despite early challenges with git and ionic-dev, we were able to produce a preliminary product we are proud to present!
## What I learned
We have definitely learned a lot about git and version control, along with various aspects of the ionic framework and how to use the Google Maps API!
## What's next for Haven
We plan to continue development post-hackathon, and the eventual goal would be to pitch the idea to a complementary company who could then implement it on the large-scale.
## Note
Github repo currently private
|
## Inspiration
We were inspired by the numerous Facebook posts, Slack messages, WeChat messages, emails, and even Google Sheets that students at Stanford create in order to coordinate Ubers/Lyfts to the airport as holiday breaks approach. This was mainly for two reasons, one being the safety of sharing a ride with other trusted Stanford students (often at late/early hours), and the other being cost reduction. We quickly realized that this idea of coordinating rides could also be used not just for ride sharing to the airport, but simply transportation to anywhere!
## What it does
Students can access our website with their .edu accounts and add "trips" that they would like to be matched with other users for. Our site will create these pairings using a matching algorithm and automatically connect students with their matches through email and a live chatroom in the site.
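A simple version of the matching algorithm could pair trips by shared destination and departure time. This is a hedged sketch: the field names and the 30-minute window are assumptions, not the site's actual logic:

```python
# Pair up trips that share a destination and leave within a time window.
def match_trips(trips, window_min=30):
    """Return (user, user) pairs whose trips are compatible."""
    pairs = []
    for i, a in enumerate(trips):
        for b in trips[i + 1:]:
            same_dest = a["dest"] == b["dest"]
            close = abs(a["depart_min"] - b["depart_min"]) <= window_min
            if same_dest and close:
                pairs.append((a["user"], b["user"]))
    return pairs
```

Matched pairs would then be notified via SendGrid and dropped into a shared chatroom on the site.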
## How we built it
We utilized Wix Code to build the site and took advantage of many features including Wix Users, Members, Forms, Databases, etc. We also integrated SendGrid API for automatic email notifications for matches.
## Challenges we ran into
## Accomplishments that we're proud of
Most of us are new to Wix Code, JavaScript, and web development, and we are proud of ourselves for being able to build this project from scratch in a short amount of time.
## What we learned
## What's next for Runway
|
losing
|
## Inspiration
Greenhouses require increased disease control and need to closely monitor their plants to ensure they're healthy. In particular, the project aims to capitalize on the recent interest in cannabis.
## What it Does
It's a sensor system composed of cameras, temperature and humidity sensors layered with smart analytics that allows the user to tell when plants in his/her greenhouse are diseased.
## How We built it
We used the Telus IoT Dev Kit to build the sensor platform, along with Twilio to send emergency texts (pending installation of the IoT edge runtime as of 8 am today).
Then we used Azure to do transfer learning on VGGNet to identify diseased plants and flag them to the user. The model is deployed to be used with IoT Edge. Moreover, there is a web app that can be used to display the results.
## Challenges We Ran Into
The datasets for greenhouse plants are in fairly short supply, so we had to use an existing network to help with saliency detection. Moreover, the low-light conditions in the dataset were in direct contrast (pun intended) to the PlantVillage dataset used to train for diseased plants. As a result, we had to implement a few image preprocessing methods, including something that's been used for plant health detection in the past: Eulerian magnification.
## Accomplishments that We're Proud of
Training a pytorch model at a hackathon and sending sensor data from the STM Nucleo board to Azure IoT Hub and Twilio SMS.
## What We Learned
When your model doesn't do what you want it to, hyperparameter tuning shouldn't always be the go to option. There might be (in this case, was) some intrinsic aspect of the model that needed to be looked over.
## What's next for Intelligent Agriculture Analytics with IoT Edge
|

## 💡INSPIRATION💡
Our team is from Ontario and BC, two provinces that have been hit HARD by the opioid crisis in Canada. Over **4,500 Canadians under the age of 45** lost their lives through overdosing during 2021, almost all of them preventable, a **30% increase** from the year before. During an unprecedented time, when the world is dealing with the covid pandemic and the war in Ukraine, seeing the destruction and sadness that so many problems are bringing, knowing that there are still people fighting to make a better world inspired us. Our team wanted to try and make a difference in our country and our communities, so... we came up with **SafePulse, an app to combat the opioid crisis, where you're one call away from OK, not OD.**
**Please checkout what people are doing to combat the opioid crisis, how it's affecting Canadians and learn more about why it's so dangerous and what YOU can do.**
<https://globalnews.ca/tag/opioid-crisis/>
<https://globalnews.ca/news/8361071/record-toxic-illicit-drug-deaths-bc-coroner/>
<https://globalnews.ca/news/8405317/opioid-deaths-doubled-first-nations-people-ontario-amid-pandemic/>
<https://globalnews.ca/news/8831532/covid-excess-deaths-canada-heat-overdoses/>
<https://www.youtube.com/watch?v=q_quiTXfWr0>
<https://www2.gov.bc.ca/gov/content/overdose/what-you-need-to-know/responding-to-an-overdose>
## ⚙️WHAT IT DOES⚙️
**SafePulse** is a mobile app designed to combat the opioid crisis. SafePulse provides users with resources that they might not know about, such as *'how to respond to an overdose'* or *'where to get free naloxone kits'.* Phone numbers for 24/7 live support from nurses are also provided; this way, if users choose to administer drugs to themselves, they can try to do it safely under the instructions of a registered nurse. There is also an Emergency Response Alarm: it alerts emergency services and informs them of the type of drug administered, the user's location, and access instructions. The information provided to users through resources, and to emergency services through the alarm system, is vital in overdose prevention.
## 🛠️HOW WE BUILT IT🛠️
We wanted to get some user feedback to help us decide/figure out which features would be most important for users and ultimately prevent an overdose/saving someone's life.
Check out the [survey](https://forms.gle/LHPnQgPqjzDX9BuN9) and the [results](https://docs.google.com/spreadsheets/d/1JKTK3KleOdJR--Uj41nWmbbMbpof1v2viOfy5zaXMqs/edit?usp=sharing)!
As a result of the survey, we found that many people don't know what the symptoms of an overdose are or what they may look like, so we added another page before the user exits the timer to double-check whether or not they have symptoms. We also determined that having instructions available while the user is overdosing increases the chances of someone helping.
So, we landed on 'passerby information' and 'supportive resources' as our additions to the app.
Passerby information is information that anyone can access while the user is in a state of emergency to try and save their life. This took the form of the 'SAVEME' page, a set of instructions for Good Samaritans that could ultimately save the life of someone who's overdosing.
Supportive resources are resources that the user might not know about or might need to access, such as live support from registered nurses, free naloxone kit locations, safe injection site locations, how to use a Narcan kit, and more!
Tech Stack: ReactJS, Firebase, Python/Flask
SafePulse was built with ReactJS on the frontend; on the backend we used Flask, Python, and Firebase, with the Twilio API to make the emergency calls.
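As a sketch of the alarm's Twilio flow, the message body for the emergency call might be assembled like this (the field names and wording are assumptions; the actual Twilio dispatch is shown only as a comment):

```python
def build_emergency_alert(drug, location, access_instructions):
    """Assemble the alert text carrying the type of drug administered,
    the user's location, and access instructions (fields illustrative)."""
    return (
        "SafePulse ALERT: suspected overdose.\n"
        f"Substance: {drug}\n"
        f"Location: {location}\n"
        f"Access: {access_instructions}"
    )

# The actual dispatch would go through the Twilio REST client, e.g.:
# client.calls.create(to=EMERGENCY_NUMBER, from_=TWILIO_NUMBER, ...)
```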
## 😣 CHALLENGES WE RAN INTO😣
* It was Jacky's **FIRST** hackathon and Matthew's **THIRD** so there was a learning curve to a lot of stuff especially since we were building an entire app
* We originally wanted to make the app utilizing MERN; we tried setting up the database and connecting with Twilio, but it was too difficult with all of the debugging + learning Node.js and the Twilio documentation at the same time 🥺
* Twilio?? HUGEEEEE PAIN, we couldn't figure out how to get different Canadian phone numbers to work for outgoing calls and also have our own custom messages for a little while. After a couple hours of reading documentation, we got it working!
## 🎉ACCOMPLISHMENTS WE ARE PROUD OF🎉
* Learning git and firebase was HUGE! Super important technologies in a lot of projects
* With only 1 frontend developer, we managed to get a sexy looking app 🤩 (shoutouts to Mitchell!!)
* Getting Twilio to work properly (it's our first time)
* First time designing a supportive app that's ✨**functional AND pretty** ✨without a dedicated ui/ux designer
* USER AUTHENTICATION WORKS!! ( つ•̀ω•́)つ
* Using so many tools, languages and frameworks at once, and making them work together :D
* Submitting on time (I hope? 😬)
## ⏭️WHAT'S NEXT FOR SafePulse⏭️
SafePulse has a lot to do before it can be deployed as a genuine app.
* Partner with local governments and organizations to roll out the app and get better coverage
* Add addiction prevention resources
* Implement the Google Maps API + location tracking data and pass the info to emergency services so they get the most accurate location of the user
* Turn it into a web app too!
* Put it on the app store and spread the word! It can educate tons of people and save lives!
* We may want to change from firebase to MongoDB or another database if we're looking to scale the app
* Business-wise, a lot of companies sell user data or exploit their users - we don't want to do that - we'd be looking to completely sell the app to the government and get a contract to continue working on it/scale the project. Another option would be to sell our services to the government and other organizations on a subscription basis; this would give us more control over the direction of the app and its features while partnering with said organizations
## 🎁ABOUT THE TEAM🎁
*we got two Matthew's by the way (what are the chances?)*
Mitchell is a 1st year Computer Science student at Carleton University. He is most interested in programming language engineering. You can connect with him at his [LinkedIn](https://www.linkedin.com/in/mitchell-monireoluwa-mark-george-261678155/) or view his [Portfolio](https://github.com/MitchellMarkGeorge)
Jacky is a 2nd year Systems Design Engineering student at the University of Waterloo. He is most experienced with embedded programming and backend. He is looking to explore various fields in development. He is passionate about reading and cooking. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/chenyuxiangjacky/) or view his [Portfolio](https://github.com/yuxstar1444)
Matthew B is an incoming 3rd year computer science student at Wilfrid Laurier University. He is most experienced with backend development but looking to learn new technologies and frameworks. He is passionate about music and video games and always looking to connect with new people. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-borkowski-b8b8bb178/) or view his [GitHub](https://github.com/Sulima1)
Matthew W is a 3rd year computer science student at Simon Fraser University, currently looking for a summer 2022 internship. He has formal training in data science. He's interested in learning new frontend skills/technologies and honing his current ones. Moreover, he has a deep understanding of machine learning, AI and neural networks. He's always willing to have a chat about games, school, data science and more! You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-wong-240837124/), visit his [website](https://wongmatt.dev) or take a look at what he's [working on](https://github.com/WongMatthew)
### 🥳🎉THANK YOU WLU FOR HOSTING HAWKHACKS🥳🎉
|
## Inspiration
Inspired by [SIU Carbondale's Green Roof](https://greenroof.siu.edu/siu-green-roof/), we wanted to create an automated garden watering system that would help address issues ranging from food deserts to lack of agricultural space to storm water runoff.
## What it does
This hardware solution takes in moisture data from soil and determines whether or not the plant needs to be watered. If the soil's moisture is too low, the valve will dispense water and the web server will display that water has been dispensed.
## How we built it
First, we tested the sensor and determined the boundaries between dry, damp, and wet based on the sensor's output values. Then, we took the boundaries and divided them by percentage soil moisture. Specifically, the sensor measures the conductivity of the material around it: water, being the most conductive, produced the highest values, and air, being the least conductive, the lowest. Soil falls in the middle, and the moisture ranges were defined by the pure-air and pure-water boundaries.
From there, we visualized the hardware setup: the sensor connects to an Arduino UNO microcontroller, which in turn connects to a Raspberry Pi 4 controlling a solenoid valve that releases water when the soil moisture reading drops below 40%.
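A minimal sketch of the calibration and dispensing logic described above; the raw readings for the pure-air and pure-water boundaries are illustrative assumptions, not our measured values:

```python
AIR_RAW = 300    # assumed reading in dry air (least conductive, lowest value)
WATER_RAW = 800  # assumed reading in pure water (most conductive, highest value)

def moisture_percent(raw):
    """Linearly map a raw conductivity reading onto 0-100% moisture
    using the pure-air and pure-water calibration boundaries."""
    pct = (raw - AIR_RAW) / (WATER_RAW - AIR_RAW) * 100
    return max(0.0, min(100.0, pct))

def should_dispense(raw, threshold=40.0):
    """Open the solenoid valve when soil moisture drops below the threshold."""
    return moisture_percent(raw) < threshold
```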
## Challenges we ran into
At first, we aimed too high. We wanted to incorporate weather data into our water dispensing system, but the information flow and JSON parsing were not cooperating with the Arduino IDE. We consulted with a mentor, Andre Abtahi, who helped us get a better perspective of our project scope. It was difficult to focus on what it meant to truly create a minimum viable product when we had so many ideas.
## Accomplishments that we're proud of
Even though our team is spread across the country (California, Washington, and Illinois), we were still able to create a functioning hardware hack. In addition, as beginners we are very excited about this hackathon's outcome.
## What we learned
We learned about wrangling APIs, how to work in a virtual hackathon, and project management. Upon reflection, creating a balance between feasibility, ability, and optimism is important for guiding the focus and motivations of a team. Being mindful about energy levels is especially important for long sprints like hackathons.
## What's next for Water Smarter
Lots of things! What's next for Water Smarter is weather controlled water dispensing. Given humidity, precipitation, and other weather data, our water system will dispense more or less water. This adaptive water feature will save water and let nature pick up the slack. We would use the OpenWeatherMap API to gather the forecasted volume of rain, predict the potential soil moisture, and have the watering system dispense an adjusted amount of water to maintain correct soil moisture content.
In a future iteration of Water Smarter, we want to stretch the use of live geographic data even further by suggesting appropriate produce for which growing zone in the US, which will personalize the water conservation process. Not all plants are appropriate for all locations so we would want to provide the user with options for optimal planting. We can use software like ArcScene to look at sunlight exposure according to regional 3D topographical images and suggest planting locations and times.
We want our product to be user friendly, so we want to improve our aesthetics and show more information about soil moisture beyond just notifying water dispensing.
|
winning
|
## Inspiration
Navigating Vancouver's roads can be challenging for someone unfamiliar with the area, making it difficult to find a suitable place to enjoy playing your sport in a competitive yet casual setting. While there is a gym on campus, the basketball court is consistently occupied, and nobody wants to wait in line for 20 minutes just to play a single game.
## What it does
Our web application, **Pickup**, has the capability to display any ongoing pickup game hosted by enthusiasts seeking an additional player to join their match. Our app simplifies the process of connecting like-minded individuals, making it much easier for users to find and engage with others who share similar interests.
Users are able to host games and join others by scouting the map for available games. There is a variety of sports to pick from, and users can see the capacity of each event and when it will be held.
## How we built it
We have utilized the **Google Maps API** to explore and experiment with its functionalities.
The Maps serve as our foundation, and we have integrated various components to enhance both the design and functionalities. To achieve this, we used **React** for our front end. In addition, we incorporated other libraries, notably **Chakra UI**, which not only ensures consistency but also enhances the visual appeal of our web application.
## Challenges we ran into
* Difficulty understanding the usage of the Google Maps API due to its poor documentation
* Learning new tools in such a short notice and implementing them into a full project
* Keeping the spirit up even though we worked on the project all night
## Accomplishments that we're proud of
* We were able to build an eye-catching front end for the UI/UX
* We were able to implement the Google Maps API successfully
## What we learned
* How important git can be in saving our projects from crashes, internet disconnects, and our silly mistakes.
* How useful the Google Maps API is for making web applications that locate many various things
* How crucial time is in making a fully working project.
## What's next for Pickup
* We would like to add authentication and validation, so users can host and save games much more reliably.
* Expand to the whole world, so it would work anywhere!
* Creating a database for more customizable features in the future.
|
## Inspiration
After experiencing the futility of trying to use Facebook groups or messenger to set up pickup basketball games at the RSF, we thought that we should help connect students to other students that are available at given times to also play any given sport.
## What it does
StreetUp currently does not have much functionality. Users can register, and any user can create public games, but connecting users to games does not fully work yet.
## How I built it
We used an AngularJS front-end with a Flask-based Python back-end.
## Challenges I ran into
We had to change our entire back-end within the last 12 hours because the previous database and backend we were using only supported the creation of user accounts (which helped with registration and logins) but could not store data about the games itself.
## Accomplishments that I'm proud of
The three of us only came into this hackathon knowing some Python so having to learn almost everything we used from scratch was rewarding.
## What I learned
We learned the fundamentals of Node.js, AngularJS, and SQLAlchemy, along with efficient use of the terminal and text editor, thanks to one of our great mentors, Dmitri.
## What's next for StreetUp
We want to finish creating the app so it's fully functional with user bases and can effectively store the game data.
|
### Table Number **343**
## Inspiration
One of our team members, Zi, used to work for Tim Hortons. He was tired of waiting for his manager to post the weekly schedules on the corkboard in the staff room. He was tired of having to wait until he came to work to find out about his next week's schedule, he was tired of taking pictures of the schedule for his coworkers, and he was tired of having to pencil in the changes whenever he wanted to modify his schedule, needing to personally contact his coworkers to let them know about the new schedule update.
This gave rise to **Schedulus**, where all of Zi's scheduling nightmares would come to an end.
## What it does
Schedulus helps businesses stay organized by allowing employees to check, post, and update the schedules all from the palm of their hands, in Cisco Spark.
## How we built it
After exploring the basics of Cisco Spark, we decided to add a chatbot to the application. We used webhooks to communicate between the Cisco Spark API and our Express server. We stored all the scheduling data in Firebase as it was quick, simple, and straightforward to set up.
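As a sketch of how the chatbot might turn a Spark message into a schedule update, assuming a hypothetical `/schedule add` command syntax (the real bot's grammar may differ):

```python
import re

# Hypothetical command syntax: "/schedule add <day> <start>-<end>" (24h hours).
SHIFT_RE = re.compile(r"^/schedule\s+add\s+(\w+)\s+(\d{1,2})-(\d{1,2})$")

def parse_add_shift(text):
    """Return (day, start_hour, end_hour) for a valid command, else None."""
    m = SHIFT_RE.match(text.strip())
    if not m:
        return None
    day, start, end = m.group(1), int(m.group(2)), int(m.group(3))
    if not (0 <= start < end <= 24):
        return None  # reject shifts that end before they begin
    return day, start, end
```

The parsed tuple would then be written to Firebase and echoed back into the Spark room.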
## Challenges we ran into
Since it was our first time using Cisco Spark's API, we didn't really understand the API documentation at first glance, but with the help of the mentors (who were *extremely* patient and helpful), we were able to use the API to fulfill our purpose.
## Accomplishments that we're proud of
We were able to turn what we thought were just ideas into an actual minimum viable product!
## What we learned
How to use Cisco Spark's API and how to build a Cisco Spark chatbot, and how scheduling in businesses can be inefficient, from retail stores to fast-food chains to corporations.
## What's next for Schedulus
* We're hoping to scale this into a *monthly* schedule, and perhaps into a *yearly* schedule
* Alerts for adding/dropping shifts
* Converting the current schedule output into a fancier table, improving the UI
* Tracking the amount of hours worked for each week and adding limitations/rules (i.e. part-time workers shouldn't be able to add more shifts than they're allowed to work)
|
losing
|
## Inspiration
Productivity is hard to harness especially at hackathons with many distractions, but a trick we software developing students found to stay productive while studying was using the “Pomodoro Technique”. The laptop is our workstation and could be a source of distraction, so what better place to implement the Pomodoro Timer as a constant reminder? Since our primary audience is going to be aspiring young tech students, we chose to further incentivize them to focus until their “breaks” by rewarding them with a random custom-generated and minted NFT to their name every time they succeed. This unique inspiration provided an interesting way to solve a practical problem while learning about current blockchain technology and implementing it with modern web development tools.
## What it does
An innovative, modern Pomodoro Timer running in your browser lets users sign in and link their MetaMask crypto account addresses. Users are incentivized to stay focused until the timer's break time: upon reaching a break undisrupted, our website rewards the user with a random, custom-generated NFT minted to their name, every time they succeed. This Ethereum-based NFT can then be viewed either on OpenSea or on the website's dashboard, both of which display the user's NFT collection.
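The reward rule can be sketched as a small check, assuming the classic 25-minute work interval (the interval length and interruption tracking here are illustrative; the real timer lives in the browser):

```python
WORK_SECONDS = 25 * 60  # classic Pomodoro work interval -- assumed setting

def session_completed(start_ts, end_ts, interruptions):
    """A session earns an NFT reward only if it ran the full work
    interval with zero interruptions."""
    return interruptions == 0 and (end_ts - start_ts) >= WORK_SECONDS
```

Only sessions passing this check would trigger the mint call to the user's linked wallet.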
## How we built it
TimeToken's back-end is built with Django and SQLite, and for our front-end we created a beautiful, modern platform using React and Tailwind to give our users a dynamic webpage. A benefit of using React is that it works smoothly with our Django back-end, making it easy for our front-end and back-end teams to work together.
## Challenges we ran into
We had set up the website originally as a MERN stack (MongoDB/Express.js/REACT/Node.js) however while trying to import dependencies for the Verbwire API, to mint our images into NFTs to the user’s wallets we ran into problems. After solving dependency issues a “git merge” produced many conflicts, and on the way to resolving conflicts, we discovered some difficult compatibility issues with the API SDK and JS option for our server. At this point we had to pivot our plan, so we decided to implement the Verbwire Python-provided API solution, and it worked out very well. We intended here to just pass the python script and its functions straight to our front-end but learned that direct front-end to Python back-end communication is very challenging. It involved Ajax/XML file formatting and solutions heavily lacking in documentation, so we were forced to keep searching for a solution. We realized that we needed an effective way to make back-end Python communicate with front-end JS with SQLite and discovered that the Django framework was the perfect suite. So we were forced to learn Serialization and the Django framework quickly in order to meet our needs.
## Accomplishments that we're proud of
We have accomplished many things during the development of TimeToken that we are very proud of. One of our proudest moments was when we pulled an all-nighter to code and get everything just right. This experience helped us gain a deeper understanding of technologies such as Axios, Django, and React, which helped us to build a more efficient and user-friendly platform. We were able to implement the third-party Verbwire API, which was a great accomplishment, and we were able to understand it and use it effectively. We also had the opportunity to talk with Verbwire professionals to resolve bugs that we encountered, which allowed us to improve the overall user experience. Another proud accomplishment was being able to mint NFTs and understanding how crypto and blockchains work; this was a great opportunity to learn more about the technology. Finally, we were able to integrate crypto APIs, which allowed us to provide our users with a complete and seamless experience.
## What we learned
When we first started working on the back-end, we decided to give MongoDB, Express, and NodeJS a try. At first, it all seemed to be going smoothly, but we soon hit a roadblock with some dependencies and configurations between a third-party API and NodeJS. We talked to our mentor and decided it would be best to switch gears and give the Django framework a try. We learned that it's always good to have some knowledge of different frameworks and languages, so you can pick the best one for the job. Even though we had a little setback with the back-end, and we were new to Django, we learned that it's important to keep pushing forward.
## What's next for TimeToken
TimeToken has come a long way and we are excited about the future of our application. To ensure that our application continues to be successful, we are focusing on several key areas. Firstly, we recognize that storing NFT images locally is not scalable, so we are working to improve scalability. Secondly, we are making security a top priority and working to improve the security of wallets and crypto-related information to protect our users' data. To enhance user experience, we are also planning to implement a media hosting website, possibly using AWS, to host NFT images. To help users track the value of their NFTs, we are working on implementing an API earnings report with different time spans. Lastly, we are working on adding more unique images to our NFT collection to keep our users engaged and excited.
|
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
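As a rough sketch of how the "demons" might be derived from banking history, one could rank the categories that most exceed an assumed weekly budget (the categories, budgets, and top-3 cutoff here are all illustrative assumptions, not the app's actual AI-driven logic):

```python
from collections import defaultdict

def find_demons(transactions, weekly_budget_by_category, top_n=3):
    """Rank spending categories by how far weekly spend exceeds its budget,
    returning the worst `top_n` as candidate "demons" to work on."""
    spend = defaultdict(float)
    for category, amount in transactions:
        spend[category] += amount
    overruns = [
        (category, spend[category] - budget)
        for category, budget in weekly_budget_by_category.items()
        if spend[category] > budget
    ]
    overruns.sort(key=lambda pair: pair[1], reverse=True)
    return [category for category, _ in overruns[:top_n]]
```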
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
|
## Inspiration
In a world where finance is extremely important, everyone needs access to **banking services**. Citizens within **third world countries** are no exception, but they lack the banking technology infrastructure that many of us in first world countries take for granted. Mobile Applications and Web Portals don't work 100% for these people, so we decided to make software that requires nothing more than a **cellular connection to send SMS messages** in order to operate. This resulted in our hack, **UBank**.
## What it does
**UBank** allows users to operate their bank accounts entirely through **text messaging**. Users can deposit money, transfer funds between accounts, transfer accounts to other users, and even purchase shares of stock via SMS. In addition to this text messaging capability, UBank also provides a web portal so that when our users gain access to a steady internet connection or PC, they can view their financial information on a more comprehensive level.
## How I built it
We set up a backend HTTP server in **Node.js** to receive and fulfill requests. **Twilio** with ngrok was used to send and receive text messages through a webhook on the backend Node.js server, and user data was stored in Firebase. The frontend was primarily built with **HTML, CSS, and JavaScript**, and HTTP requests were sent to the Node.js backend to retrieve user information and display it in the browser. We utilized Mozilla's speech-to-text library to incorporate speech commands and Chart.js to display client data with intuitive graphs.
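As a sketch of how the backend might route an incoming SMS body to a banking action, shown here in Python for brevity (the command vocabulary is an assumption, not UBank's actual grammar):

```python
def parse_sms_command(body):
    """Map an incoming SMS body to a (command, args) tuple the backend
    can act on. Unrecognized input falls through to UNKNOWN."""
    parts = body.strip().upper().split()
    if not parts:
        return ("UNKNOWN", [])
    cmd, args = parts[0], parts[1:]
    if cmd == "DEPOSIT" and len(args) == 1 and args[0].replace(".", "", 1).isdigit():
        return ("DEPOSIT", [float(args[0])])
    if cmd == "TRANSFER" and len(args) == 2 and args[1].replace(".", "", 1).isdigit():
        return ("TRANSFER", [args[0], float(args[1])])  # recipient, amount
    if cmd == "BUY" and len(args) == 2 and args[1].isdigit():
        return ("BUY", [args[0], int(args[1])])  # ticker, share count
    return ("UNKNOWN", [])
```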
## Challenges I ran into
* Some team members were new to Node.js, and therefore working with some of the server coding was a little complicated. However, we were able to leverage the experience of other group members which allowed all of us to learn and figure out everything in the end.
* Using Twilio was a challenge because no team members had previous experience with the technology. We had difficulties making it communicate with our backend Node.js server, but after a few hours of hard work we eventually figured it out.
## Accomplishments that I'm proud of
We are proud of making a **functioning**, **dynamic**, finished product. It feels great to design software that is adaptable and begging for the next steps of development. We're also super proud that we made an attempt at tackling a problem that is having severe negative effects on people all around the world, and we hope that someday our product can make it to those people.
## What I learned
This was our first time using **Twilio**, so we learned a lot about utilizing that software. Front-end team members also got to practice their **HTML/CSS/JS** skills, which was a great experience.
## What's next for UBank
* The next step for UBank probably would be implementing an authentication/anti-fraud system. Being a banking service, it's imperative that our customers' transactions are secure at all times, and we would be unable to launch without such a feature.
* We hope to continue the development of UBank and gain some beta users so that we can test our product and incorporate customer feedback in order to improve our software before making an attempt at launching the service.
|
partial
|
## Inspiration
The inspiration behind the DataSherlock came from a desire to make data analysis and exploration more accessible and interactive for users. We wanted to create a virtual assistant that could not only answer data analysis questions but also generate visualizations and insights, simplifying the process of deriving valuable information from datasets. The name *DataSherlock Bot* embodies the idea of being a detective of data, uncovering hidden insights with precision.
## What it does
The DataSherlock Bot is your trusted companion in the world of data analysis. Here's what it does:
* Data Loading: Users can upload their datasets in CSV format using the bot's user-friendly interface.
* Interactive Data Exploration: Users can ask the bot questions about the data, such as correlations, trends, or specific insights. The bot uses AI-powered natural language processing to understand and respond to user queries.
* Data Visualization: The bot creates informative visualizations, including charts, graphs, and interactive dashboards, to help users better understand their data.
* Insight Generation: DataSherlock provides meaningful insights and explanations about the dataset, helping users uncover valuable information.
Whether you're a data enthusiast or a newcomer to the world of data analysis, DataSherlock Bot simplifies the process, making data exploration and insights *accessible* to all.
## How we built it
DataSherlock was built using a combination of Python libraries and services:
We used Streamlit to create the user interface, making it easy for users to upload datasets and interact with the bot.
Pandas was used for data manipulation, enabling us to load and preprocess datasets.
We integrated OpenAI's GPT-3 to handle natural language interactions and answer questions about the data.
For data visualization, we utilized libraries like Matplotlib, Seaborn, and Plotly to create informative charts and graphs.
We implemented error handling to ensure the bot provides meaningful responses even when faced with unexpected queries.
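As a sketch of the kind of correlation insight the bot reports when asked how two columns relate, with illustrative thresholds for wording the answer:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def describe_relationship(name_a, name_b, xs, ys):
    """Turn the statistic into the kind of plain-language insight
    the bot returns (thresholds and wording are assumptions)."""
    r = pearson(xs, ys)
    strength = "strong" if abs(r) >= 0.7 else "weak"
    direction = "positive" if r >= 0 else "negative"
    return f"{name_a} and {name_b} show a {strength} {direction} correlation (r={r:.2f})."
```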
## Challenges we ran into
Building the DataSherlock Bot presented a few challenges:
* Integration Complexity: Integrating multiple libraries and services, such as Streamlit, GPT-3, and data visualization tools, required careful coordination and troubleshooting.
* Handling Diverse Data: Different datasets have unique characteristics, which required adaptable preprocessing and visualization techniques.
* API Quotas: When using GPT-3, we had to manage API quotas and handle rate limits to ensure uninterrupted user interactions.
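Rate-limit handling of this kind is commonly done with exponential backoff; a minimal sketch, where `RateLimitError` stands in for whatever exception the API client actually raises:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the API client's actual rate-limit exception."""

def with_backoff(call, max_retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff when the API rate-limits us."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

The injectable `sleep` parameter makes the retry schedule easy to test without real delays.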
## Accomplishments that we're proud of
We take pride in several accomplishments of the DataSherlock Bot project:
* User-Friendly Interface: We designed an intuitive and user-friendly interface that allows users to easily upload datasets and interact with the bot.
* AI-Powered Insights: The integration of OpenAI's GPT-3 enables the bot to provide intelligent and informative responses to user queries.
* Data Visualization: The bot creates visually appealing and interactive data visualizations that enhance data understanding.
## What we learned
During the development of the DataSherlock Bot, we learned several valuable lessons:
* Integration of AI: We discovered how to seamlessly integrate AI models, like OpenAI's GPT-3, to enhance user interactions and answer questions about data analysis.
* Streamlit for Web Apps: We explored the power of Streamlit for creating web applications with user-friendly interfaces. Streamlit allowed us to build an intuitive front-end for our chatbot.
## What's next for DataSherlock Bot
**Enhanced Interactivity:** We plan to further enhance the bot's interactivity, allowing users to perform more advanced analyses and custom visualizations.
**Integration with More Data Sources:** We aim to expand the bot's capabilities by enabling it to fetch data from various sources, not just CSV files.
**Machine Learning Insights:** Incorporating machine learning models for predictive analytics and insights is a future goal.
**Community Engagement:** We look forward to building a community around the DataSherlock Bot, where data enthusiasts can collaborate and share their experiences.
|
## Inspiration
As a researcher at a Yale cancer bio lab, I repeatedly found my time spent browsing large data files to perform routine analyses. This process was tedious, but became even more time-consuming when sharing results with colleagues who would also need to download and interact with the data. At startups and elsewhere, we have all run into the issue of data shareability and accessibility. These experiences prompted us to develop Shareboard, a data platform that allows users to instantly create and share AI-enhanced data dashboards.
## What it does
Shareboard integrates with multiple data sources to streamline the data analysis and sharing workflow. After creating an account with Shareboard, users can upload datasets through SQL connection strings and CSV import (through Supabase). We then create a shareable link that allows all collaborators to view and interact with the data.
Going one step further, Shareboard eliminates the need for technical knowledge of the data pipeline. Using plain natural language, users can ask open-ended questions to Shareboard, which translates the request into SQL queries and executes them against the saved data source. Shareboard returns both raw data and interactive visualizations to enable rapid iteration and conclusions.
## How we built it
Shareboard is primarily a website built with NextJS, TypeScript, TailwindCSS and Mantine UI. We used Vercel for continuous deployment and NodeJS serverless functions for our backend infrastructure. We relied heavily on Supabase for User Authentication, Postgres DB with Realtime Updates, Row Level Security (RLS) to eliminate a conventional backend, and CSV upload. We integrated with the OpenAI GPT-3 Codex Completion endpoint and iterated over several prompt pipelines to create the conversational component. We also created a simple Python command line interface for debugging.
## Challenges we ran into
Data comes in many, many forms, and connecting Shareboard with all of them proved challenging. Using the Postgres connection string format, we were able to support a broad set of database configurations. We also supported CSV files through uploading to the Supabase dashboard. In the future, we would look into connecting NoSQL, Excel, and other data formats.
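Supporting the Postgres connection-string format boils down to splitting a URL into the fields a client needs; a minimal sketch (illustrative only — real drivers accept the URL directly):

```python
from urllib.parse import urlparse

def parse_postgres_url(conn_string):
    """Break a Postgres connection string into its component fields."""
    u = urlparse(conn_string)
    if u.scheme not in ("postgres", "postgresql"):
        raise ValueError(f"not a postgres URL: {u.scheme!r}")
    return {
        "user": u.username,
        "password": u.password,
        "host": u.hostname,
        "port": u.port or 5432,          # default Postgres port
        "database": u.path.lstrip("/"),  # path component holds the db name
    }

cfg = parse_postgres_url("postgresql://alice:secret@db.example.com:6543/sales")
```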
## Accomplishments that we're proud of
After quickly connecting to the OpenAI API, we were able to steadily optimize several aspects of our natural language features. We tested several OpenAI models and settled on the Codex davinci model trained heavily on code input. We used few-shot prompting to guide the model to generate simple, robust, and efficient SQL queries answering the user's questions. Finally, we include the generated query in the frontend to allow for more technical users to collaborate with the model and build upon its suggested queries. As a result of our effort here, the conversational component is very robust and scales well to unseen schemas.
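The few-shot setup can be sketched as simple string assembly — schema plus worked question/SQL pairs ahead of the user's question. The template below is hypothetical, not Shareboard's exact prompt:

```python
def build_sql_prompt(schema, examples, question):
    """Assemble a few-shot prompt: table schema, worked examples,
    then the user's question, ending where the model should continue."""
    parts = [f"Schema:\n{schema}", ""]
    for q, sql in examples:
        parts += [f"Question: {q}", f"SQL: {sql}", ""]
    parts += [f"Question: {question}", "SQL:"]
    return "\n".join(parts)

prompt = build_sql_prompt(
    "orders(id, customer, total, created_at)",
    [("How many orders are there?", "SELECT COUNT(*) FROM orders;")],
    "What is the average order total?",
)
```

Ending the prompt at `SQL:` nudges the model to complete with a query, which is then executed against the saved data source.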
## What we learned
Through building Shareboard, we gained a greater appreciation for the challenges of building a unified entrypoint to diverse data sources, and the applicability of large language models in simplifying the user experience. We became more proficient in LLM prompting and creating a complementary product to the model's strengths and weaknesses. We were very impressed with the multiple ways Supabase and Vercel sped up the development process, eliminating the need for a standalone backend, database, and OAuth authentication.
## What's next for Shareboard
Unlike most hackathon projects we have worked on in the past, we are very happy with how robust and generalizable Shareboard is. Instead of starting from scratch with best practices, we expect to be able to make just minor changes and launch as a fully-functional product. More concretely, we plan to enable additional data integrations, continue to refine the NLP methods, and expand on the visualizations/output formatting. We are very excited to go to market and validate the product in real-world use cases in academia and industry.
|
## Inspiration
False news. False news. False news everywhere. Before reading your news article in depth, let us help you give you a brief overview of what you'll be ingesting.
## What it does
Our Google Chrome extension will analyze the news article you're about to read and give you a heads up on the article's sentiment (what emotion the article is trying to convey), the top three keywords in the article, and the categories the article's topic belongs to.
Our extension also allows you to fact check any statement by simply highlighting the statement, right-clicking, and selecting Fact check this with TruthBeTold.
## How we built it
Our Chrome extension pulls the URL of the webpage you're browsing and sends it to our Python server hosted on Google App Engine (Google Cloud Platform). The server parses the page and extracts the article's content using the Newspaper3k library. The scraped article is then sent to Google's Natural Language API client, which assesses the article for sentiment, categories, and keywords. This data is returned to the extension and displayed in a friendly manner.
Fact checking follows a similar path in that our extension sends the highlighted text to our server, which checks against Google's Fact Check Explorer API. The consensus is then returned and alerted.
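The keyword feature relies on Google's Natural Language API, but the basic idea can be illustrated with a naive frequency count (a stand-in only — the real API does far more than this):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "this"}

def top_keywords(text, n=3):
    """Rank words by frequency, skipping stopwords and very short tokens."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]
```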
## Challenges we ran into
* Understanding how to interact with Google's APIs.
* Working with Python flask and creating new endpoints in flask.
* Understanding how Google Chrome extensions are built.
## Accomplishments that I'm proud of
* It works!
|
losing
|
## Inspiration
We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday.
## What it does
Our mobile app looks at company logos, identifies the company and grabs the financial information, recent company news and company financial statements of the company and displays the data in an Augmented Reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to better save and invest.
## How we built it
Built using wikitude SDK that can handle Augmented Reality for mobile applications and uses a mix of Financial data apis with Highcharts and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building for Android, something none of us had prior experience with, which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
Potential to build in more charts, financials and better speech/chatbot abilities into out application. There is also a direction to be more interactive with using hands to play around with our dashboard once we figure that part out.
|
## Inspiration
As university students, emergency funds may not be on the top of our priority list however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that the feeling of putting a set amount of money away every time income rolls through may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.
## What it does
Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, baseline amount and goal for the emergency fund, and the app creates a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of, and can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. Users earn milestones or achievements for reaching certain sub-goals, along with extra motivation if their emergency fund falls below the baseline amount they set up. Users can also change their employment status after creating an account, in the case of a new job or career change, and the app will adjust their deposit plan accordingly.
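The deposit plan can be sketched as an even split of the gap between baseline and goal across the user's expected deposits. This even-split schedule is a simplified assumption for illustration, not the app's exact plan:

```python
def deposit_plan(baseline, goal, deposits_per_month, months):
    """Amount to put away per deposit so the fund grows from its
    current baseline to the goal over the given number of months."""
    remaining = max(goal - baseline, 0)          # nothing to save if already at goal
    total_deposits = deposits_per_month * months
    return round(remaining / total_deposits, 2)
```

A change in employment status would simply re-run this with new `deposits_per_month` and `months` values.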
## How we built it
We used Flutter to build the interactive prototype of our Android Application.
## Challenges we ran into
None of us had prior experience using Flutter, let alone mobile app development. Learning to use Flutter in such a short period of time was easily the greatest challenge we faced.
We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise our initial goals and focus our efforts on what is achievable in this time period proved to be challenging.
## Accomplishments that we're proud of
This was the first mobile app we developed (as well as our first hackathon).
## What we learned
This being our first Hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one.
As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.
## What's next for Spend2Save
There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress.
We believe there is potential for such an application to appeal to its target audience and so we have planned projections for the future of Spend2Save. These projections include but are not limited to, plans such as integration with actual bank accounts at RBC.
|
## Inspiration
Did you know that minority groups, on average, have between 9 and 16 percent lower financial literacy scores compared to Caucasians? Additionally, a study conducted by Ohio State University found that an immigrant status lowers financial literacy exam scores by about 27 percent. Learning something new can be extremely daunting, especially for less privileged populations, however, it shouldn’t be. Being financially literate often entails the knowledge needed to make informed decisions pertaining to certain personal and/or business finance areas like real estate, insurance, investing, saving, tax planning and budgeting. Financial literacy shouldn’t be only for privileged populations - and EasyInvest is built to change that.
## What it does
Taking inspiration from award-winning educational apps like Duolingo, our EasyInvest web app guides people through the basics of financial literacy by taking them through introductory concepts in investing, before reinforcing their learning with a series of case studies to simulate the process of making effective financial decisions. Furthermore, unlike other learning tools which leave you hanging after a barrage of learning materials, our website offers a quiz after each course where you can test your knowledge. If you get stuck with any of the prompts, there are multiple hints and prompts along the way or you can ultimately go back to the course page to review the material.
## How we built it
After an extensive brainstorming session, we narrowed down the scope of our app from a comprehensive review of financial literacy information to a visually attractive app that focused on investing, saving, and budgeting. Next, we prototyped the user interface and flow of the app using Figma. After agreeing on the basic layout and features of the app, we began development in ReactJS to create a multi-page website to create components that matched the feel and flow of our Figma prototypes.
## Challenges we ran into
One major challenge we faced was adapting to the limitations of remote communication which made it more difficult to directly demonstrate to each other what we thought was viable given the time constraints of the hackathon. On the technical side, our lack of an effective way to handle version control due to our difficulties syncing Github with Visual Studio Code was a big roadblock because we had to run local copies of the code that may not coordinate well with each other after significant development. A majority of our team were also first time hackers who had not worked in many collaborative tech environments. There was a learning curve in understanding each team members’ roles and how they fit into the project.
## Accomplishments that we're proud of
We are proud that a majority of the visual prototypes created on Figma ended up being implemented onto the final product and the design of the website is extremely simple to navigate and visually pleasing. We’re also proud of being able to help each other understand our separate disciplines and being able to bridge the functionality and visual design components of our website together. Lastly, some of our members had to learn new coding methodologies and languages such as Javascript and ReactJS during the development of EasyInvest, which is an amazing feat.
## What we learned
We learned new coding languages such as React and Javascript and the use of Figma for visual design. We learned how time consuming it can be to implement more complex, reusable components such as sidebars, buttons, and animations and to focus on building the minimum viable product. We also gained insight into the indigenous community and immigrants and the apparent gaps in knowledge regarding budgeting, saving, and investing.
## What's next for EasyInvest
Next steps for EasyInvest include implementing more learning methodologies and best practices for helping less privileged populations. We hope to implement user testing to validate and iterate on our product based on collected data. We want to implement different language options for our learning modules to accommodate those who are new to English. In addition, we hope to build out the rest of our website with more robust interactions. Lastly, we hope to optimize our product for mobile platforms in the future.
|
winning
|
## Inspiration
Our hack was inspired by CANSOFCOM's "Help Build a Better Quantum Computer Using Orqestra-quantum library". Our team was interested in exploring quantum computing so it was a natural step to choose this challenge.
## What it does
We implemented a Zapata QuantumBackend called PauliSandwichBackend to perform Pauli sandwiching on a given gate in a circuit. This decreases noise in near-term quantum computers and returns a new circuit with this decreased noise. It will then run this new circuit with a given backend.
## How we built it
Using Python, we built on the Zapata QuantumBackend API, applying linear algebra and quantum theory to build an algorithm dedicated to lessening the noise and removing error from quantum computing. We implemented a new error-mitigation technique based on Pauli sandwiching.
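The core identity behind Pauli sandwiching can be illustrated with plain Python 2×2 matrices (a simplified, non-randomized sketch — the real technique samples random Paulis and averages, and this does not use the Orquestra library): conjugating a Clifford gate G with a Pauli P before it and the compensating Pauli P′ = G P G† after it leaves the circuit's overall action unchanged.

```python
# 2x2 complex matrices as nested lists; enough to check the identity Z·H·X = H.
X = [[0, 1], [1, 0]]            # Pauli X
Z = [[1, 0], [0, -1]]           # Pauli Z
H = [[2**-0.5, 2**-0.5],        # Hadamard (a Clifford gate)
     [2**-0.5, -(2**-0.5)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

# Sandwich: apply X before H and the compensating Pauli Z = H·X·H† after.
# Since H·X = Z·H, the net effect Z·H·X equals H again.
sandwiched = matmul(Z, matmul(H, X))
```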
## Challenges we ran into
We found it challenging to find clear documentation on the subject of quantum computing. Between a new API and, for most of us, a first experience with quantum theory, we had to dedicate a large chunk of our time to research and trial and error.
## Accomplishments that we're proud of
We are extremely proud that we were able to get so far into a very niche section of computer science. While we did not have much experience, we jumped into a field that only a small group of people actually get to work with.
## What we learned
We learned a lot about facing unfamiliar ground. While we have strong backgrounds in code and math, we ran into many challenges trying to understand quantum physics. Not only did this expose us to new software, it was a great experience to be pushed back into the unknown, working with logic we were unfamiliar with.
## What's next for Better Quantum Computer
We hope to push our new knowledge of quantum computers forward and develop not only this algorithm but many more to come, as quantum computing is still an unstable and largely untapped resource.
|
## Inspiration
We are a group of friends who are interested in cryptography and Bitcoin in general but did not have a great understanding. However, attending Sonr's panel gave us a lot of inspiration because they made the subject more digestible and easier to understand. We also wanted to do something similar but add a more personal touch by making an educational game on cryptography. Fun fact: the game is set in hell because our initial calculations yielded that we can buy approximately 6666 bananas ($3) with one bitcoin!
## What it does
*Devil's Advocate* explains cryptography and Bitcoin, both complicated topics, in a fun and approachable way. And what says fun like games? The player is hired as Satan's advocate at her company Dante’s Bitferno, trying to run errands for her using bitcoins. During their journey, they face multiple challenges that also apply to bitcoins in real life and learn all about how blockchains work!
## How we built it
We built by using JavaFX as the main groundwork for our application and had initially planned to embed it into a website using Velo by Wix but decided to focus our efforts on the game itself using JavaFX, HTML, and CSS. The main IDE we used was Intellij with git version control integration to make teamwork much easier and more efficient.
## Challenges we ran into
Having to catch a flight from Durham right after our classes, we missed the opening ceremony and started later than most other teams. However, we were quickly able to catch up by setting a time limit for most things, especially brainstorming. Only one of our members knew how to use JavaFX, despite it being the main groundwork for our project. Luckily other members were able to pick it up fairly quickly and were able to move on to a divide and conquer strategy.
## Accomplishments that we're proud of
We are most impressed by what we taught ourselves how to do in a day. For instance, some of our members learned how to use JavaFX others how to use various design software for UX/UI and graphic design. We are also proud of how the artwork turned out, considering that all of them were drawn by hand using Procreate.
## What we learned
While we learned a lot in such a short amount of time, it definitely took us the longest to learn how to use JavaFX to design fluent gameplay by integrating various elements such as text and images. We also had to research cryptography to make sure our knowledge of the subject was correct, given that we were making an educational game.
## What's next for Devil's Advocate
We plan to continue building more levels beyond the first level and offer explanations on other characteristics of blockchain, such as how it is decentralized, has smart contract, or utilizes a consensus algorithm. We also want to add more sprites to Satan to make her feel more expressive and provide a richer gameplay experience to users.
|
## Inspiration
As you know, data breaches are very common nowadays: a hacker can read and change your data while you are communicating with a receiver, and an intruder can manipulate the data in ways that can be very harmful. To stop an intruder from reading or modifying the data, I made this app, which lets people communicate privately so that no third person can read their messages.
## What it does
As mentioned, it uses the concept of steganography, i.e. hiding information in images. A sender chooses an image, encrypts text into it, and sends it to the receiver along with a private key; the receiver decrypts the same image using that key and recovers the text. It is very hard to tell the difference between the original image and the encrypted image, and you can encrypt as much data as you want.
## How we built it
I built it in Java using Android Studio. First, I selected a template to show the end user, then added buttons for encrypting and decrypting. The action each button performs depends on its activity listener: nothing happens until the button is pressed, and once clicked it directs you to the respective page. I then coded an algorithm for encryption and decryption called Novel Steganography, and added a field for entering the key, whose value is stored and later used for decryption.
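The app's own Novel Steganography algorithm is matrix-based, but the general idea of hiding text in pixels can be illustrated with classic least-significant-bit (LSB) embedding. This is a sketch only — it is not the algorithm the app uses, and it omits the key-based encryption layer:

```python
def embed(pixels, message):
    """Hide message bytes in the least significant bit of each pixel value.
    Changing only the lowest bit alters each pixel by at most 1, which is
    why the image looks unchanged to the eye."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for this image")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Read `length` bytes back out of the pixels' low bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)

pixels = list(range(64))        # stand-in for grayscale pixel values
stego = embed(pixels, b"hi")
```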
## Challenges we ran into
While making this project, the biggest problem I faced was choosing an algorithm that is not easy to crack. I looked into many algorithms, but some were too easy to break and others too complex to code. I finally settled on Novel Steganography, which uses a matrix to encode the text, making it hard to crack yet fairly straightforward to implement.
## Accomplishments that we're proud of
The biggest accomplishment is that you can enter as much text as you want without affecting the image's resolution. This was hard to achieve: usually, the longer the text, the more memory it takes and the more the resolution degrades, but in my implementation the image's resolution is preserved regardless of message length.
## What we learned
I learned many things while learning this project. I had an interest in cyber security and cryptography but couldn't work on it. Due to this project, I got hands-on experience making a project in the cryptography domain. Also, my android studio skills got improved due to this.
## What's next for Image Steganography App
As this app can currently only encrypt and decrypt images, I next want to improve it so that those images can be shared within the app itself, which will be much more convenient for users.
|
partial
|
## Inspiration
With over 1.25 million road traffic deaths occurring each year and drunk driving accounting for about 40% of that, impaired driving is a very prevalent issue today that endangers not only the driver but innocent bystanders as well. With Ontario reopening from the COVID lockdown, there is a high chance that these statistics may only get worse due to studies in countries like the US and France showing fatality rates increasing by as much as 30% due to increased recklessness and changed habits involving substance use.
## What it does
Through machine learning and data analysis, Carma is able to quickly and accurately determine if a driver will be safe on the road helping to create greater transparency and less risk for Intact, their auto insurance provider. By also introducing a gamification component Carma makes driving safe, rewarding and fun!
When Carma is active, drivers can complete a quick sobriety test in seconds on the app to help confirm that they are in a safe and sober condition to drive. If users do not complete the test prior to driving, the app will prompt them to complete a test after their trip.
The test consists of two parts: the first is a short test that judges your reaction speed based on how fast you tap the dots that appear on the screen, and the second prompts the driver to take a selfie so the app can look for visual indicators of sobriety.
Earning Carma points by successfully passing sobriety tests allows Intact to collect data on your driving behaviour and subsequently provide you with up to a 20% discount on your auto-insurance premium when it comes time to renew your policy.
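The points-to-discount mapping can be sketched as a simple capped linear function. The 500-point full-discount threshold below is a made-up number for illustration; the real scale would be set with the insurer:

```python
def premium_discount(carma_points, max_discount=0.20):
    """Map accumulated Carma points to a premium discount, capped at 20%.
    The 500-point threshold for the full discount is a hypothetical value."""
    return min(carma_points / 500 * max_discount, max_discount)
```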
## How we built it
GCP AutoML and TensorFlowJS were used for the image classifier model that looked for visual indicators of sobriety. ReactJS was used to create all user interface components, bringing Carma to life. Firebase and Firestore were used to connect with React and save user data. Finally, Figma was used to design the wireframes of the app and overall user flow
## Challenges we ran into
Non-technical: Our team was unfamiliar with insurance, but we overcame that by reaching out to Intact employees and sponsors at the hackathon to gain more insight on opportunities to solve problems in the insurance industry.
Technical: It was difficult to find images of intoxicated vs. sober people, and it was very time-consuming to train a neural network on this data and deploy it to run purely on the client side (i.e. in a React.js web app). This was especially difficult because our team had never used GCP AutoML or TensorFlow.js before.
## Accomplishments that we're proud of
We are proud of how complete and polished our project given our tight time constraints of a weekend. Features that we implemented include: user authentication, image classification, saving user scores, and users earning badges as milestones towards completion.
## What we learned
We learned that communication as a team was vital for ensuring that we completed all of our planned features on time.
## What's next for Carma
Moving forward, our team hopes to add a short multiple-choice quiz section and an audio recording section that detects intoxication via the user’s voice, to more accurately determine if a driver is in a safe condition to be on the road. We also want to add the ability to detect when a user is about to drive or has finished a drive, so that Carma can better prompt and remind users to complete a sobriety test. Finally, we would like to eventually expand beyond Intact and make Carma a tool that can be used with a number of large insurance companies to help create a larger impact on keeping the roads safe.
|
## Inspiration
Our inspiration came from the fact that we are all relatively new drivers and terrified of busy intersections. Although speed matters when travelling from one spot to another, safety should always come first on the road, as road traffic accidents are among the leading causes of death worldwide.
## What it does
When the website is first opened, the user sees a map with markers indicating where fatal collisions have happened. As noted in the legend at the top, the colours represent different collision frequencies. When the user specifies starting and ending addresses, our algorithm finds the safest route by avoiding potentially dangerous or busy intersections. However, if passing a dangerous intersection is unavoidable, the algorithm will still return that route.
## How we built it
For the backend, we used Javascript functions that took in the latitude and longitude of collisions in order to mark them on the Google Map API. We also had several functions to not only check if the user's path would come across a collision, but also check alternatives in which the user would avoid that intersection.
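The collision-avoidance check can be sketched in Python for brevity (the project itself implements this in JavaScript; the 100 m safety radius below is an assumed threshold):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6371 km

def route_is_safe(route, collisions, radius_m=100):
    """True if no point on the route comes within `radius_m` of a marker."""
    return all(haversine_m(rlat, rlon, clat, clon) > radius_m
               for rlat, rlon in route
               for clat, clon in collisions)
```

A candidate route that fails this check would be rejected and an alternative path tried.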
We found an Excel spreadsheet listing all of Toronto's fatal collisions in the past five years and imported it into a SQL database. That database was hosted publicly on Google Cloud SQL, and using Node.js, data was pulled from it to mark the specified collisions.
For the frontend, we also used a mix of HTML, CSS, Javascript and Node.js to serve the web app to the user. Once the request is made for the specific two locations, Express will read the .JSON file and send information back to other Javascript files in order to display the most optimal and safest path using the Google Map API.
To host the website, we registered a domain on Domain.com and launched the site on a simple Compute Engine virtual machine. After creating a Linux machine, we set up a basic Node.js server and connected the domain through Google Cloud DNS. After verifying domain ownership via a DNS record, a bucket containing all the files was created on Google Cloud Storage and made publicly accessible.
## Challenges we ran into
We have all never used Javascript and Google Cloud services before, so challenges that kept arising was our unfamiliarity with new functions (Eg. callback). In addition, it was difficult to set up and host Domain.com since we were new to web hosting. Lastly, Google Cloud was challenging since we were mainly using it to combine all aspects of the project together.
## Accomplishments that We're proud of
We're very proud of our final product. Although we were very new to Javascript, Google Cloud Services, and APIs, my team is extremely proud of utilizing all resources provided at the hackathon. We searched the web, as well as asked mentors for assistance. It was our determination and great time management that pushed us to ultimately finish the project.
## What we learned
We learned about Javascript, Google APIs, and Google Cloud services. We were also introduced to many helpful tutorials (through videos, and online written tutorials). We also learned how to deploy it to a domain in order for worldwide users to access it.
## What's next for SafeLane
Currently, our algorithm will return the most optimal path avoiding all dangerous intersections. However, there may be cases where the amount of travel time needed could be tremendously more than the quickest path. We hope to only show paths that have a maximum 20-30% more travel time than the fastest path. The user will be given multiple options for paths they may take. If the user chooses a path with a potentially dangerous intersection, we will issue out a warning stating all areas of danger.
We also believe that SafeLane can definitely be expanded to first all of Ontario, and then eventually on a national/international scale. SafeLane can also be used by government/police departments to observe all common collision areas and investigate how to make the roads safer.
|
## Inspiration
According to the Canadian Centre for Injury Prevention and Control, in Canada, an average of 4 people are killed every day as a result of impaired driving. In 2019, there were 1,346 deaths caused by impaired driving, which represents approximately 30% of all traffic-related deaths in Canada. Impaired driving is also a leading criminal cause of death in Canada and it is estimated that it costs the country around $20 billion per year. These statistics show that impaired driving is a significant problem in Canada and highlights the need for continued efforts to reduce the number of deaths and injuries caused by this preventable behavior.
## What it does
This program calculates the user's blood alcohol concentration from a few input parameters to determine whether it would be safe for the user to drive their vehicle (using Ontario's recommended blood alcohol concentration limit of 0.08). The program uses the user's weight, sex, and the alcohol consumed in grams (shots [1.5 oz], wine glasses [5 oz], beer cups [12 oz]), along with the alcoholic beverage, chosen from a local database of some of the most popular drinks.
With the above parameters, we used the Widmark formula described in the following paper (<https://www.yasa.org/upload/uploadedfiles/alcohol.pdf>). The Widmark formula is a rough estimate of blood alcohol concentration and shouldn't be taken as a definitive number:
BAC=(Alcohol consumed in grams / (Body weight in grams \* r)) \* 100
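As a concrete sketch (not the app's actual code), the formula can be computed directly. The distribution ratio `r` below uses the commonly cited Widmark constants, which is an assumption on our part:

```python
def widmark_bac(alcohol_grams: float, weight_kg: float, sex: str) -> float:
    """Estimate blood alcohol concentration (%) with the Widmark formula."""
    # Widmark distribution ratio: commonly cited as ~0.68 for men, ~0.55 for women.
    r = 0.68 if sex == "male" else 0.55
    body_weight_grams = weight_kg * 1000
    return (alcohol_grams / (body_weight_grams * r)) * 100

# Two 1.5 oz shots of 40% spirits: roughly 14 g of alcohol each.
bac = widmark_bac(alcohol_grams=28, weight_kg=70, sex="male")
print(round(bac, 3))  # 0.059, under the 0.08 threshold
```

A real implementation would also subtract alcohol metabolized over elapsed time, which is why the app stores drink timestamps in Firebase.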
## How we built it
We used ReactJS for front-end and Firebase for the backend. For the Google technology we decided to integrate the Firebase Realtime Database. We store all of the drinks on there so that whenever we reload the page or access the website on different devices, we can continue from where we left off. Your alcohol blood concentration also depends on how much time has passed since you drank the drink, so we are able to store the time and update it continuously to show more accurate results.
## Challenges we ran into
* Incorporating time into elapsed time calculations
* Use State hooks constantly delayed
* Reading data from database
## Accomplishments that we're proud of
* Our first hackathon!
## What we learned
* How to fetch data from a database.
## What's next for Alcohol Monitor
* Photo scan of drink
* More comprehensive alcohol database
* More secure database implementation
* User logins
* Mobile implementation
* BOOTSTRAP!
|
partial
|
# Signal
We tried to build a leap motion application that allowed deaf users to use sign language to communicate.
# What does the service do?
So far, we've gotten the letters A, B, C, and D working (with some success). We learned quite a lot over the last 16 hours: how to use the Leap Motion API, JavaScript, and how to implement a neural network that teaches our program the different signs for each letter. We also learned the taste of Soylent, and the various forms that Octocat comes in.
# Features
We used Brain.js to teach the program all the gestures we've implemented thus far; any time the program reads a gesture incorrectly, it learns to correct itself over time.
|
## Inspiration
With apps like DuoLingo, most people have no problem spending time to learn a new language, but few people know sign language. We are a team of Western students looking to break down communication barriers by making sign language everyone's next interactive challenge.
## What it does
We use scikit-learn and Leap Motion to create an interactive way of learning and testing your sign language skills. The web app has a progression of challenges that range from identifying the alphabet, to having your signing recognized and checked.
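To illustrate the recognition idea, each sign can be reduced to a feature vector from the Leap frame and matched to the nearest trained class. This is a simplified, pure-Python nearest-centroid stand-in, not our scikit-learn model, and the feature vectors are toy values:

```python
import math

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: {letter: [feature_vector, ...]} -> {letter: centroid}"""
    return {letter: centroid(vecs) for letter, vecs in samples.items()}

def classify(model, vec):
    """Return the letter whose centroid is nearest to vec (Euclidean)."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec, c)))
    return min(model, key=lambda letter: dist(model[letter]))

# Toy 2-D "fingertip" features for two signs.
model = train({"A": [[0.0, 1.0], [0.1, 0.9]], "B": [[1.0, 0.0], [0.9, 0.1]]})
print(classify(model, [0.05, 0.95]))  # A
```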
## How to get into Signtology
1. [Clone this repository on Github](https://github.com/ivanzvonkov/hackwestern)
\*Download the prerequisites in the Requirements.txt file.
2. Open command line
3. Navigate to working directory
4. Run python app.py
5. Copy provided link from command line to browser with '\api' appended
## Challenges we ran into
The API we planned on using didn't end up working, so we had to become familiar with manipulating the frame data from the Leap Motion to define gestures.
## Accomplishments that we're proud of
Getting the leap motion to recognize the letters we sign in front of it was a big step that made the project a lot more exciting to work on.
## What we learned
It's important to stay flexible and pivot your project when your plans don't turn out to be feasible. We had to rethink our original idea on how we would store the Leap data to compare gestures.
## What's next for Signtology
We want to develop curriculum bundles that will test more complex skills (e.g spelling your name, common phrases, etc). These bundles will be available for different age groups and will lead to a proficiency in sign language.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
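The "detects when you start and stop talking" step can be approximated with a simple energy threshold over audio frames. This sketch is illustrative only, with made-up thresholds, and is not the app's actual detector:

```python
def speech_segments(frames, threshold=0.02, min_silence=3):
    """Return (start, end) frame index pairs where RMS energy stays loud.

    frames: list of lists of samples; a segment closes after `min_silence`
    consecutive quiet frames, so brief pauses don't split an utterance.
    """
    def rms(frame):
        return (sum(s * s for s in frame) / len(frame)) ** 0.5

    segments, start, quiet = [], None, 0
    for i, frame in enumerate(frames):
        if rms(frame) >= threshold:
            if start is None:
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_silence:
                segments.append((start, i - quiet + 1))
                start, quiet = None, 0
    if start is not None:
        segments.append((start, len(frames)))
    return segments

loud, soft = [0.5] * 4, [0.0] * 4
print(speech_segments([soft, loud, loud, soft, soft, soft, loud]))
```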
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
partial
|
## Inspiration
Given the increase in mental health awareness, we wanted to focus on tools that enhance the effectiveness of therapy. Therapists rely on hand-written notes and personal memory to track clients' emotional progress, and there is no assistive digital tool for therapists to keep track of clients' sentiment throughout a session. Therefore, we want to equip therapists with the ability to better analyze raw data and track patient progress over time.
## Our Team
* Vanessa Seto, Systems Design Engineering at the University of Waterloo
* Daniel Wang, CS at the University of Toronto
* Quinnan Gill, Computer Engineering at the University of Pittsburgh
* Sanchit Batra, CS at the University of Buffalo
## What it does
Inkblot is a digital tool that gives therapists a second opinion by performing sentiment analysis on a patient throughout a therapy session. It keeps track of client progress as they attend more therapy sessions, and gives therapists useful data points that aren't usually captured in typical hand-written notes.
Some key features include the ability to scrub across the entire therapy session, allowing the therapist to read the transcript, and look at specific key words associated with certain emotions. Another key feature is the progress tab, that displays past therapy sessions with easy to interpret sentiment data visualizations, to allow therapists to see the overall ups and downs in a patient's visits.
## How we built it
We built the front end using Angular and hosted the web page locally. Given a complex data set, we wanted to present our application in a simple and user-friendly manner. We created a styling and branding template for the application and designed the UI from scratch.
For the back end, we hosted a REST API built with Flask on GCP in order to easily access the APIs GCP offers.
Most notably, we took advantage of Google Vision API to perform sentiment analysis and used their speech to text API to transcribe a patient's therapy session.
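A hypothetical sketch of the kind of aggregation behind the scrub and progress views: per-utterance sentiment scores (as a sentiment API might return them) averaged into time buckets. All names and numbers here are illustrative, not Inkblot's actual code:

```python
def session_summary(utterances, bucket_seconds=60):
    """Average sentiment per time bucket, for the scrub/progress views.

    utterances: (timestamp_seconds, sentiment_score) pairs, with scores
    in [-1, 1] as a sentiment API might report them.
    """
    buckets = {}
    for ts, score in utterances:
        buckets.setdefault(int(ts // bucket_seconds), []).append(score)
    return {b: round(sum(s) / len(s), 3) for b, s in sorted(buckets.items())}

# Two minutes of a session: negative early on, improving later.
scores = [(10, -0.4), (50, -0.2), (70, 0.3), (110, 0.5)]
print(session_summary(scores))  # {0: -0.3, 1: 0.4}
```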
## Challenges we ran into
* Integrated a chart library in Angular that met our project’s complex data needs
* Working with raw data
* Audio processing and conversions for session video clips
## Accomplishments that we're proud of
* Using GCP in its full effectiveness for our use case, including technologies like Google Cloud Storage, Google Compute VM, Google Cloud Firewall / LoadBalancer, as well as both Vision API and Speech-To-Text
* Implementing the entire front-end from scratch in Angular, with the integration of real-time data
* Great UI Design :)
## What's next for Inkblot
* Database integration: Keeping user data, keeping historical data, user profiles (login)
* Twilio Integration
* HIPAA Compliancy
* Investigate blockchain technology with the help of BlockStack
* Testing the product with professional therapists
|
## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at a greater risk. As the mental health epidemic surges and support at its capacity, we sought to build something to connect trained volunteer companions with people in distress in several ways for convenience.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frames into React, using Acovode for back-end development.
## Challenges I ran into
Setting up the firebase to connect to the front end react app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for mental health accessibility is essential but unmet still with all the recent efforts. Using Figma, firebase and trying out many open-source platforms to build apps.
## What's next for HearMeOut
We hope to increase chatbot’s support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
|
## Story
Mental health is a major issue especially on college campuses. The two main challenges are diagnosis and treatment.
### Diagnosis
Existing mental health apps require the user to proactively input their mood, thoughts, and concerns. With these apps, it's easy for users to hide their true feelings.
We wanted to find a better solution using machine learning. Mira uses visual emotion detection and sentiment analysis to determine how users are really feeling.
At the same time, we wanted to use an everyday household object to make it accessible to everyone.
### Treatment
Mira focuses on being engaging and keeping track of their emotional state. She allows them to see their emotional state and history, and then analyze why they're feeling that way using the journal.
## Technical Details
### Alexa
The user's speech is being heard by the Amazon Alexa, which parses the speech and passes it to a backend server. Alexa listens to the user's descriptions of their day, or if they have anything on their mind, and responds with encouraging responses matching the user's speech.
### IBM Watson/Bluemix
The speech from Alexa is passed to IBM Watson, which performs sentiment analysis to gauge how the user is actually feeling from their words.
### Google App Engine
The backend server is being hosted entirely on Google App Engine. This facilitates the connections with the Google Cloud Vision API and makes deployment easier. We also used Google Datastore to store all of the user's journal messages so they can see their past thoughts.
### Google Vision Machine Learning
We take photos using a camera built into the mirror. The photos are then sent to the Vision ML API, which finds the user's face and extracts the user's emotions from each photo. The results are then stored directly in Google Datastore, which integrates well with Google App Engine.
### Data Visualization
Each user can visualize their mental history through a series of graphs. The graphs are each color-coded to certain emotional states (Ex. Red - Anger, Yellow - Joy). They can then follow their emotional states through those time periods and reflect on their actions, or thoughts in the mood journal.
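The color-coding could be as simple as mapping each photo's strongest emotion to a color. The mapping and scores below are illustrative, not Mira's actual values:

```python
# Illustrative emotion-to-color mapping for the mood graphs.
EMOTION_COLORS = {"anger": "red", "joy": "yellow", "sorrow": "blue", "surprise": "green"}

def dominant_emotion(likelihoods: dict) -> str:
    """Pick the strongest emotion from per-photo likelihood scores.

    The Vision API reports per-face emotion likelihoods; these toy values
    stand in for them.
    """
    return max(likelihoods, key=likelihoods.get)

scores = {"anger": 0.1, "joy": 0.7, "sorrow": 0.1, "surprise": 0.1}
print(EMOTION_COLORS[dominant_emotion(scores)])  # yellow
```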
|
winning
|
## Inspiration
According to the Washington Post (June 2023), since Columbine in 1999, more than 356,000 students in the U.S. have experienced gun violence at school.
Students of all ages should be able to learn comfortably and safely within the walls of their classroom.
Quality education is a UN Sustainable Development goal and can only be achieved when the former becomes a reality. As college students, especially in the midst of the latest UNC-Chapel Hill school shooting, we understand threats lie even within the safety of our campus and have grown up knowing the tragedies of school shootings.
This problem is heavily influenced by politics and thus there is an unclear timeline for concrete and effective solutions to be implemented. The intention of our AI model is to contribute a proactive approach that requires only a few pieces of technology but is capable of an immediate response to severe events.
## What it does
Our machine learning model is trained to recognize active threats with displayed weapons. When the camera senses that a person has a knife, it automatically calls 911. We also created a machine learning model that uses CCTV camera footage of perpetrators with guns.
Specifically, this model was meant to be catered towards guns to address the rising safety issues in education. However, for the purpose of training our model and safety precautions, we could not take training data pictures with a gun and thus opted for knives. We used the online footage as a means to also train on real guns.
## How we built it
We obtained an SD card with the OS for the Raspberry Pi, then added the Viam server to the Raspberry Pi. Viam provides a platform to build a machine learning model on their server.
We searched the web and imported CCTV images of people with and without guns and tried to find a wide variety of these types of images. We also integrated a camera with the Raspberry Pi to take additional images of ourselves with a knife as training data. In our photos we held the knife in different positions, different lighting, and different people's hands. The more variety in the photos provided a stronger model. Using our data from both sources and the Viam platform we went through each image and identified the knife or gun in the picture by using a border bounding box functionality. Then we trained two separate ML models, one that would be trained off the images in CCTV footage, and one model using our own images as training data.
After testing for recognition, we used a program that connects the Visual Studio development environment to our hardware. We integrated Twilio into our project which allowed for an automated call feature. In our program, we ran the ML model using our camera and checked for the appearance of a knife. As a result, upon detection of a weapon, our program immediately alerts the police. In this case, a personal phone number was used instead of authorities to highlight our system’s effectiveness.
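A minimal sketch of the detection-to-call logic, assuming a per-frame confidence score from the detection model; the debounce keeps a single noisy frame from dialing out. The Twilio call is shown only as a comment, and all numbers and names are hypothetical, not our actual program:

```python
def should_alert(confidences, threshold=0.8, consecutive=3):
    """Trigger only after `consecutive` frames score above `threshold`,
    so one spurious detection doesn't place a call."""
    streak = 0
    for c in confidences:
        streak = streak + 1 if c >= threshold else 0
        if streak >= consecutive:
            return True
    return False

if should_alert([0.2, 0.9, 0.85, 0.95]):
    # In the real system this would place an automated Twilio voice call, e.g.:
    # twilio_client.calls.create(to=DISPATCH_NUMBER, from_=APP_NUMBER, url=ALERT_TWIML_URL)
    print("weapon confirmed - dialing")
```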
## Challenges we ran into
Challenges we ran into include connection issues, training and testing limitations, and setup issues.
Internet connectivity presented a consistent challenge throughout the building process. Due to the number of people on one network at the hackathon, we used a hotspot for internet connection, and the hotspot's connectivity was often variable. This led to our Raspberry Pi and Viam connections failing, and we had to restart many times, slowing our progress.
In terms of training, we were limited in the locations we could train our model in. Since the hotspot disconnected if we moved locations, we could only train the model in one room. Ideally, we would have liked to train in different locations with different lighting to improve our model accuracy.
Furthermore, we trained a machine learning model with guns, but this was difficult to test for both safety reasons and a lack of resources to do so. In order to verify the accuracy of our model, it would be optimal to test with a real gun in front of a CCTV camera. However, this was not feasible with the hackathon environment.
Finally, we had numerous setup issues, including connecting the Raspberry Pi to the SSH, making sure the camera was working after setup and configuration, importing CCTV images, and debugging. We discovered that the hotspot that we connected the Raspberry Pi and the laptop to had an apostrophe in its name, which was the root of the issue with connecting to the SSH. We solved the problem with the camera by adding a webcam camera in the Viam server rather than a transform camera. Importing the CCTV images was a process that included reading the images into the Raspberry Pi in order to access them in Viam. Debugging to facilitate the integration of software with hardware was achieved through iteration and testing.
We would like to thank Nick, Khari, Matt, and Hazal from Viam, as well as Lizzie from Twilio, for helping us work through these obstacles.
## Accomplishments that we're proud of
We're proud that we could create a functional and impactful model within this 36 hour hackathon period.
As a team of Computer Science, Mechanical Engineering, and Biomedical Engineering majors, we definitely do not look like the typical hackathon theme. However, we were able to use our various skill sets, from hardware analysis, code compilation, and design to achieve our goals.
Additionally, as it was our first hackathon, we developed a completely new set of skills: both soft and technical. Given the pressure, time crunch, and range of new technical equipment at our fingertips, it was an uplifting experience. We were able to create a prototype that directly addresses a topic that is dear to us, while also communicating effectively with working professionals.
## What we learned
We expanded our skills with a breadth of new technical skills in both hardware and software. We learned how to utilize a Raspberry Pi, and connect this hardware with the machine learning platform in Viam. We also learned how to build a machine learning model by labeling images, training a model for object detection, and deploying the model for results. During this process, we gained knowledge about what images were deemed good/useful data. On the software end, we learned how to integrate a Python program that connects with the Viam machine learning platform and how to write a program involving a Twilio number to automate calling.
## What's next for Project LearnSafe
We hope to improve our machine learning model in a multifaceted manner. First, we would incorporate a camera with better quality and composition for faster image processing. This would make detection in our model more efficient and effective. Moreover, adding more images to our model would amplify our database in order to make our model more accurate. Images in different locations with different lighting would improve pattern recognition and expand the scope of detection. Implementing a rotating camera would also enhance our system. Finally, we would test our machine learning model for guns with CCTV, and modify both models to include more weaponry.
Today’s Security. Tomorrow’s Education.
|
## Inspiration
snore or get poured on yo pores
Coming into grade 12, deciding to go to a hackathon at this time was super ambitious. We knew coming to this hackathon we needed to be fully focused 24/7. Problem being, we both procrastinate and push things to the last minute, so we created a project to help us with exactly that.
## What it does
The project has three escalating stages to get our attention. In the first stage, we use a voice command and a text message to get our own attention. If I'm still distracted, we move to stage two, which plays a more serious voice command and then places a phone call to my phone, since I'm probably on it. If I ignore the phone call, the project gets serious and commences the final stage, where we bring out the big guns: a command triggers the water gun and shoots the distracted victim, which is myself. If I try to resist and run away, the water gun automatically tracks me and shoots me wherever I go.
## How we built it
We built it using fully recyclable materials as the future innovators of tomorrow, our number one priority is the environment. We made our foundation fully off of trash cardboard, chopsticks, and hot glue. The turret was built using our hardware kit we brought from home and used 3 servos mounted on stilts to hold the water gun in the air. We have a software portion where we hacked a MindFlex to read off brainwaves to activate a water gun trigger. We used a string mechanism to activate the trigger and OpenCV to track the user's face.
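The tracking portion reduces to mapping the face position OpenCV reports to a servo angle. A sketch with an assumed frame width and servo range (not our actual calibration):

```python
def face_to_servo(face_x, frame_width, min_angle=0, max_angle=180):
    """Map the tracked face's horizontal center to a pan-servo angle,
    clamping so the servo never drives past its range."""
    ratio = max(0.0, min(1.0, face_x / frame_width))
    return min_angle + ratio * (max_angle - min_angle)

# Face centered in a 640 px frame -> servo points straight ahead.
print(face_to_servo(320, 640))  # 90.0
```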
## Challenges we ran into
One challenge was trying to multi-thread the Arduino and Python together. Connecting the MindFlex data with the Arduino was a pain in the ass; we came up with many different solutions but none of them were efficient. The data was delayed from reading and writing back and forth, and the camera display speed slowed down because of it, making the tracking worse. We eventually carried through and figured out a solution.
## Accomplishments that we're proud of
We're proud of our engineering capability to create a turret out of spare scraps. Combining the Arduino and the MindFlex was something we'd never done before, and making it work was such a great feeling. Using Twilio to send messages and calls was also new to us, but getting familiar with its capabilities opened a new door of opportunities for future projects.
## What we learned
We learned many things from using Twilio and hacking into the MindFlex; we learned a lot more about electronics and circuitry through this, and about procrastination. After creating this project, we never missed a deadline again.
## What's next for You snooze you lose. We dont lose
Coming into this hackathon, we had many ambitious ideas that we had to scrap due to a lack of materials, including a life-size human robot; we settled on an automatic water gun turret controlled through brain signals. Since this was our first hackathon experimenting with brain control, we want to keep expanding on this project.
|
## Inspiration
I like looking at things. I do not enjoy bad quality videos. I do not enjoy waiting. My CPU is a lazy fool. He just lays there like a drunkard on New Year's Eve. My poor router has a heart attack every other day so I can stream the latest Kylie Jenner video blog post, or as the kids these days call it, a 'vlog' post.
The CPU isn't being effectively leveraged to improve video quality. Deep learning methods are in their own world, concerned more with accuracy than applications. We decided to develop a machine learning application that enhances resolution, designing our models so they can effectively run without 10,000 GPUs.
## What it does
We reduce your streaming bill. We let you stream Kylie's vlog in high definition. We connect first-world medical resources to developing nations. We convert an unrecognizable figure in a cop's body cam into a recognizable human being. We improve video resolution.
## How I built it
Wow. So lots of stuff.
Web scraping YouTube videos for datasets at 144, 240, 360, and 480 pixels. Error catching, thread timeouts, yada, yada. Data is the most important part of machine learning, and no one cares in the slightest. So I'll move on.
## ML stuff now. Where the challenges begin
We tried research papers: the Super Resolution Generative Adversarial Network [link](https://arxiv.org/abs/1609.04802) and SRGAN with an attention layer [link](https://arxiv.org/pdf/1812.04821.pdf). These models were too large to hold on our laptop, much less run in real time; the weights alone consisted of over 16GB. And yeah, they get pretty good accuracy. That's the result of training a million residual layers (actually *only* 80 layers) for months on GPU clusters. We did not have the time or resources to build anything similar, so we did not continue down this path.
We instead looked to our own experience. Our team had previously analyzed the connection between image recognition and natural language processing and their shared relationship to high dimensional spaces [see here](https://arxiv.org/abs/1809.05286). We took these learnings and built a model that minimized the root mean squared error as it upscaled from 240 to 480 px.
However, we quickly hit a wall, as this pixel based loss consistently left the upscaled output with blurry edges. In order to address these edges, we used our model as the Generator in a Generative Adversarial Network. However, our generator was too powerful, and the discriminator was lost.
We decided then to leverage the work of the researchers before us in order to build this application for the people. We loaded a pretrained VGG network and leveraged its image embeddings as preprocessing for our discriminator. Leveraging this pretrained model, we were able to effectively iron out the blurry edges while still minimizing mean squared error.
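The training objective described above amounts to a weighted sum of pixel-space MSE and MSE in the frozen VGG's feature space. A framework-free sketch with toy numbers (the real loss runs over image tensors, and the weight is illustrative):

```python
def mse(a, b):
    """Mean squared error between two flat sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def perceptual_loss(pred_pixels, target_pixels, pred_feats, target_feats,
                    feat_weight=0.01):
    """Pixel MSE plus weighted MSE in (frozen) VGG feature space.

    Matching features irons out the blurry edges that pixel loss
    alone leaves behind.
    """
    return mse(pred_pixels, target_pixels) + feat_weight * mse(pred_feats, target_feats)

loss = perceptual_loss([0.1, 0.2], [0.0, 0.2], [1.0, 3.0], [1.0, 1.0])
print(round(loss, 4))  # 0.025
```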
With the model built, we then worked at 4 AM to build an application that converts videos into high resolution.
## Accomplishments that I'm proud of
Building it good.
## What I learned
Balanced approaches and leveraging past learning
## What's next for Crystallize
Real time stream-enhance app.
|
partial
|
## **opiCall**
## *the line between O.D. and O.K. is one opiCall away*
---
## What it does
Private AMBER alerts for either 911 or a naloxone-carrying network
## How we built it
We used Twilio & Dasha AI to send texts and calls, and Firebase & Swift for the iOS app's database and UI itself.
## Challenges we ran into
We had lots of difficulties finding research on the topic, and conducting our own research due to the taboos and Reddit post removals we faced.
## What's next for opiCall
In depth research on First Nations' and opioids to guide our product further.
|
## Inspiration
The inspiration came from working with integrated BMCs on server motherboards. These devices can control servers remotely without the need for an operating system on the server, accomplished by streaming the server's VGA output and attaching a keyboard via USB. The major pain points of BMCs are that they are normally embedded on the servers, the streamed keyboard input is prone to errors, and, since the screen is streamed, it is not interactable. Our design is a small portable device that has the same functionality as a BMC while solving these pain points and adding smart home functionality.
## Challenges we ran into
Originally, we planned to use the Raspberry Pi 3 because it had an onboard network module which eliminates one step in the integration process. Unfortunately, the Raspberry Pi 3 cannot emulate USB HID devices such as a keyboard, which made it infeasible to use. Once we switched to the Raspberry Pi Pico, which could emulate a keyboard but didn’t have WIFi, we had to integrate it with a network module ESP8266 or ESP32. We encountered difficulties powering the module through the RPI and later discovered that both require 5v while our RPI only provided 3.3v. Furthermore, the RPI Pico does not have USB ports to support the display capture feature we aimed to implement, instead we added additional features.
## What it does
Macromancer's main features are the following:
* Run Macros Remotely via USB Keyboard Emulation
* Programmable Smart Home Voice Activated Macros
* Robust Input Streaming via Web App
* Portable Plug and Play
These features solve two of the major BMC pain points: lack of portability and error-prone input. The system also allows system administrators to reliably send programmed macros to remote servers. Macromancer is not only for tech-savvy individuals; it also gives anyone smart accessibility to their PC through voice control on their phones. As long as you can accomplish a task with just keyboard inputs, you can use Macromancer to send a macro from anywhere.
## How we built it
Hardware: The RPI Pico waits for keycodes transmitted by the ESP32, while the ESP32 polls our cloud server for commands.
Software: Google Assistant sends predefined user commands to the cloud server webhook via IFTTT applets. Users can also use our Frontend Web App to manually queue command strings and macros for the ESP32 to poll via a GET API.
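Conceptually, the pipeline turns a macro string into the USB HID keycodes the Pico emits as a keyboard. This sketch covers only lowercase letters and Enter (these are the real HID usage IDs, `a` = 0x04 through `z` = 0x1d, Enter = 0x28), and is not the project's actual firmware:

```python
def to_keycodes(macro: str):
    """Translate a macro string into the HID keycodes the Pico would send."""
    codes = []
    for ch in macro:
        if ch == "\n":
            codes.append(0x28)                       # Enter
        elif "a" <= ch <= "z":
            codes.append(0x04 + ord(ch) - ord("a"))  # a=0x04 ... z=0x1d
        else:
            raise ValueError(f"unmapped character: {ch!r}")
    return codes

# The macro "ls" followed by Enter:
print([hex(c) for c in to_keycodes("ls\n")])  # ['0xf', '0x16', '0x28']
```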
## Accomplishments that we're proud of
* Ability for users to add/delete any new commands
* Integrating with the network modules
* Minimizing latency
## What we learned
* Flashing firmware on microprocessors
* Emulating keyboard through microprocessors (Adafruit)
* TCP and Checksum error detection
* Activating webhooks with Google Assistant
* How to communicate between microprocessors
## What's next for Macromancer
* Better security (encryption)
* RPI 4 (reduce hardware and allow display capture for text editing)
* Wake on LAN
|
## Inspiration
Guardian Angel was born from the need for reliable emergency assistance in an unpredictable world. Our experiences with the elderly, such as our grandparents, who may fall when we’re not around, and the challenges we may face in vulnerable situations motivated us to create a tool that automatically reaches out for help when it’s needed most. We aimed to empower individuals to feel safe and secure, knowing that assistance is just a call away, even in their most vulnerable moments.
## What it does
Core to Guardian Angel, our life-saving Emergency Reporter AI speech app, is an LLM and text-to-speech pipeline that provides real-time, situation-critical responses to 911 dispatchers. The app automatically detects distress signals—such as falls or other emergencies—and contacts dispatch services on behalf of the user, relaying essential information like patient biometric data, medical history, current state, and location. By integrating these features, Guardian Angel enhances efficiency and improves success in time-sensitive situations where rapid, accurate responses are crucial.
## How we built it
We developed Guardian Angel using React Native with Expo, leveraging Python and TypeScript for enhanced code quality. The backend is powered by FastAPI, allowing for efficient data handling. We integrated AI technologies, including Google Gemini for voice transcription and Deepgram for audio processing, which enhances our app’s ability to communicate effectively with dispatch services.
## Challenges we ran into
Our team faced several challenges during development, including difficulties with database integration and frontend design. Many team members were new to React Native, leading to styling and compatibility issues. Additionally, figuring out how to implement functions in the API for text-to-speech and speech-to-text during phone calls required significant troubleshooting.
## Accomplishments that we're proud of
We are proud of several milestones achieved during this project. First, we successfully integrated a unique aesthetic into our UI by incorporating hand-drawn elements, which sets our app apart and creates a friendly, approachable user experience. Additionally, we reached a significant milestone in audio processing by effectively transcribing audio input using the Gemini model, allowing us to capture user commands accurately, and converting the transcribed text back to voice with Deepgram for seamless communication with dispatch. We’re also excited to share that our members have only built websites, making the experience of crafting an app and witnessing the fruits of our labor even more rewarding. It’s been exciting to acquire and apply new tools throughout this project, diving into various aspects of transforming our idea into a scalable application—from designing and learning UI/UX to implementing the React Native framework, emulating iOS and Android devices for testing compatibility, and establishing communication between the frontend and backend/database.
## What we learned
Through this hackathon, our team learned the importance of effective collaboration, utilizing a “divide and conquer” approach while keeping each other updated on our progress. We gained hands-on experience in mobile app development, transitioning from our previous focus on web development, and explored new tools and technologies essential for creating a scalable application.
## What's next for Guardian Angel
Looking ahead, we plan to enhance Guardian Angel by integrating features such as smartwatch compatibility for monitoring vital signs like heart rate and improving fall detection accuracy. We aim to refine our GPS location services for better tracking and continue optimizing our AI speech models for enhanced performance. Additionally, we’re exploring the potential for spatial awareness and microphone access to record surroundings during emergencies, further improving our response capabilities.
|
winning
|
## 💡 Inspiration 💯
Have you ever faced a trashcan with a seemingly endless number of bins, each one marked with a different type of recycling? Have you ever held some trash in your hand, desperately wondering if it can be recycled? Have you ever been forced to sort your trash in your house, the different bins taking up space and being an eyesore? Inspired by this dilemma, we wanted to create a product that took all of the tedious decision-making out of your hands. Wouldn't it be nice to be able to mindlessly throw your trash in one place, and let AI handle the sorting for you?
## ♻️ What it does 🌱
IntelliBin is an AI trashcan that handles your trash sorting for you! Simply place your trash onto our machine, and watch it be sorted automatically by IntelliBin's servo arm! Furthermore, you can track your stats and learn more about recycling on our React.js website.
## 🛠️ How we built it 💬
Arduino/C++ Portion: We used C++ code on the Arduino to control a servo motor and an LED based on serial input commands. Importing the servo library allows us to access functions that control the motor and turn on the LED colours. We also used the Serial library in Python to take input from the main program and send it to the Arduino. The Arduino then sent binary data to the servo motor, correctly categorizing garbage items.
Website Portion: We used React.js to build the front end of the website, including a profile section with user stats, a leaderboard, a shop to customize the user's avatar, and an information section. MongoDB was used to build the user registration and login process, storing usernames, emails, and passwords.
Google Vision API: In tandem with computer vision, we were able to take the camera input and feed it through the Vision API to interpret what was in front of us. Using this output, we could tell the servo motor which direction to turn based on if it was recyclable or not, helping us sort which bin the object would be pushed into.
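The decision step described above — turning a Vision API label into a one-byte serial command for the Arduino — might look like the following sketch. The label set and command bytes are assumptions, not the actual protocol:

```python
# Minimal sketch of the sorting decision: map a Vision API label to a
# one-byte serial command for the Arduino. Labels and bytes are assumed.
RECYCLABLE = {"bottle", "can", "paper", "cardboard", "glass"}

def servo_command(label: str) -> bytes:
    """b'R' turns the arm toward the recycling bin, b'T' toward trash."""
    return b"R" if label.lower() in RECYCLABLE else b"T"
```

In the real system this byte would be written to the serial port (e.g. with pySerial) for the Arduino to act on.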
## 🚧 Challenges we ran into ⛔
* Connecting the Arduino to the arms
* Determining the optimal way to manipulate the Servo arm, as it could not rotate 360 degrees
* Using global variables on our website
* Configuring MongoDB to store user data
* Figuring out how and when to detect the type of trash on the screen
## 🎉 Accomplishments that we're proud of 🏆
In a short span of 24 hours, we are proud to:
* Successfully engineer and program a servo arm to sort trash into two separate bins
* Connect and program LED lights that change colors varying on recyclable or non-recyclable trash
* Utilize Google Cloud Vision API to identify and detect different types of trash and decide if it is recyclable or not
* Develop an intuitive website with React.js that includes login, user profile, and informative capabilities
* Drink a total of 9 cans of Monsters combined (the cans were recycled)
## 🧠 What we learned 🤓
* How to program in C++
* How to control servo arms at certain degrees with an Arduino
* How to parse and understand Google Cloud Vision API outputs
* How to connect a MongoDB database to create user authentication
* How to use global state variables in Node.js and React.js
* What types of items are recyclable
## 🌳 Importance of Recycling 🍀
* Conserves natural resources by reusing materials
* Requires less energy compared to using virgin materials, decreasing greenhouse gas emissions
* Reduces the amount of waste sent to landfills
* Decreases disruption to ecosystems and habitats
## 👍How Intellibin helps 👌
**Efficient Sorting:** Intellibin utilizes AI technology to efficiently sort recyclables from non-recyclables. This ensures that the right materials go to the appropriate recycling streams.
**Increased Recycling Rates:** With Intellibin making recycling more user-friendly and efficient, it has the potential to increase recycling rates.
**User Convenience:** By automating the sorting process, Intellibin eliminates the need for users to spend time sorting their waste manually. This convenience encourages more people to participate in recycling efforts.
**In summary:** Recycling is crucial for environmental sustainability, and Intellibin contributes by making the recycling process more accessible, convenient, and effective through AI-powered sorting technology.
## 🔮 What's next for Intellibin⏭️
The next steps for Intellibin include refining the current functionalities of our hack, along with exploring new features. First, we wish to expand the trash detection database, improving capabilities to accurately identify various items being tossed out. Next, we want to add more features such as detecting and warning the user of "unrecyclable" objects. For instance, Intellibin could notice whether the cap is still on a recyclable bottle and remind the user to remove the cap. In addition, the sensors could notice when there is still liquid or food in a recyclable item, and send a warning. Lastly, we would like to deploy our website so more users can use Intellibin and track their recycling statistics!
|
## Inspiration
According to an article, about 86 per cent of Canada's plastic waste ends up in landfills, in large part due to bad sorting. We thought it shouldn't be impossible to build a prototype for a smart bin.
## What it does
The smart bin is able, using object detection, to sort plastic, glass, metal, and paper.
All around Canada we see trash bins split into different types of trash. It sometimes becomes frustrating, and this inspired us to build a solution that doesn't require us to think about the kind of trash being thrown away.
The Waste Wizard takes any kind of trash you want to throw away, uses machine learning to detect what kind of bin it should be disposed in, and drops it in the proper disposal bin.
## How we built it
We built it using recyclable cardboard, used DC motors, and 3D-printed parts.
## Challenges we ran into
We had to train our model from the ground up, including gathering all the data ourselves.
## Accomplishments that we're proud of
We managed to get the whole infrastructure built and all the motors and sensors working.
## What we learned
How to create and train a model, 3D print gears, and use sensors.
## What's next for Waste Wizard
A Smart bin able to sort the 7 types of plastic
|
## Inspiration
Since the days of yore, we have heard how an army of drones is going to bridge the last-mile gap of accessibility in the supply chain of various products for many people in the world. A good example of this can already be seen with Zipline in Rwanda and Ghana, which is already on its way to success with this model. While the quest to bridge the last-mile gap is well on its way to being completed, there are significant challenges in the overall supply chain that have seen no evolution. As technology gets more and more advanced, people get more disillusioned by the possible use cases that are not being explored.
The inspiration for this hack very much stems from the fact that the Treehacks Hardware Lab provided us with a drone that did not come with an SDK or any developer support. Therefore, I decided to build a whole new approach to voice recognition and instant payments that not only tries to address this growing disillusionment with technology, but also provides a use case for drones, a technology that is no longer *"unimaginable"* or *"out of this world"*, that has not yet been explored: *refunds* and *accessibility*.
## What it does
With Dronations, you can ask a voice-activated drone to:
* `TAKEOFF`
* `TOUCHDOWN`
* `TAKE A PICTURE` (using gesture)
* `TRANSFER MONEY TO PERSON IN FRAME`
Now, you can receive instant refunds for a product or an item you have returned after buying. And people with limited mobility can send payments reliably to people they trust using just simple voice commands delivered to a drone.
## How I built it
Since the Snaptain drone provided to me did not come with an SDK, I was left with no choice but to find a way to hack into it. Fortunately, the flimsy drone did not have that much going on in terms of protection from adversarial attacks. After fiddling with the drone for a few hours, it was clear that the best way of trying to break into the drone in order to control it with a computer or connect it to a server would be through the unsecured WiFi it uses to connect to an app on my phone.
I spent another few hours reverse engineering the drone protocol by sniffing packets on the channel and frequency used by this drone to get commands from my phone. For this purpose, I used Airmon-NG on a computer running a distribution of Kali Linux (thankfully provided by my senior project lab at Stanford). Having figured out how to send basic takeoff, touchdown and image commands to the drone by bypassing its WiFi gave me an immense amount of control over the drone, and now it was time to integrate voice commands.
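Crafting one of these command packets might look something like the sketch below. Every byte value here — header, opcodes, checksum scheme, trailer — is invented for illustration, since the real Snaptain protocol was reverse engineered and is not documented in this write-up:

```python
# Hypothetical sketch of building a drone command packet like the ones
# recovered by packet sniffing. All byte values are assumptions.
OPCODES = {"TAKEOFF": 0x01, "TOUCHDOWN": 0x02, "PHOTO": 0x03}

def build_packet(command: str) -> bytes:
    op = OPCODES[command]
    payload = bytes([0x66, op, 0x00, 0x00])   # 0x66: assumed header byte
    checksum = sum(payload) & 0xFF            # simple additive checksum
    return payload + bytes([checksum, 0x99])  # 0x99: assumed trailer byte
```

In practice the packet would then be sent over the drone's unsecured WiFi link, e.g. via a UDP socket.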
Out of all the voice recognition software on offer, Houndify came through as my top choice. This was because of the intuitive design of adding domains for more functionality in their app, and the fact that the mentors were super helpful while answering questions to help us debug. I built three custom commands for takeoff, touchdown and sending money.
Finally, I integrated Checkbook's API for sending money. The API didn't quite work as expected and it took a while for me to finagle it into making it do what I wanted. However, eventually, everything came together and the result is an end-to-end solution for instant payments using voice-activated drones.
## Challenges I ran into
* Wifi reliability issues on the cheap hardware of a Snaptain drone
* Checkbook API's refusal to accept my POST requests
* Sleep Deprivation?
* The endless bad humor of the team sitting next to me
## Accomplishments that I'm proud of
* Literally hacking into a drone with little to no prior experience hacking an embedded system
* Sniffing packets over the air to reverse engineer drone protocol
* Going through tens of videos showcasing obscure details of packet sniffing
* Making a voice-activated drone
* Integrating with Checkbook's API
* Making new friends in the process of this hackathon
## What I learned
* How to sniff packets on an unsecure HTTP connection
* How to write server level Node.js commands that communicate directly with the terminal
* How to spoof my MAC address
* How to spoof my identity to gain unauthorized access into a device
* How to build an app with Houndify
* How to integrate voice commands to trigger Checkbook's API
* If it exists, there is an AR/VR project of it
## What's next for Dronations: Enabling Instant Refunds for Purchase Returns
* Integrate more features from Checkbook's API to provide an end-to-end payment solution
* Understand more edge use cases for drones - I have barely scratched the surface
* Do user research with people with low accessibility to understand if this can become a viable alternative to send payments reliably to people they trust
* Getting in touch with Snaptain and letting them know how easy it is to hack their drones
|
winning
|


## Inspiration
We wanted to truly create a well-rounded platform for learning investing where transparency and collaboration are of utmost importance. With the growing influence of social media on the stock market, we wanted to create a tool that will auto-generate a list of recommended stocks based on their popularity. This feature is called Stock-R (coz it 'Stalks' the social media....get it?)
## What it does
This is an all in one platform where a user can find all the necessary stock market related resources (websites, videos, articles, podcasts, simulators etc) under a single roof. New investors can also learn from other more experienced investors in the platform through the use of the chatrooms or public stories. The Stock-R feature uses Natural Language processing and sentiment analysis to generate a list of popular and most talked about stocks on twitter and reddit.
## How we built it
We built this project using the MERN stack. The frontend is created using React. NodeJs and Express was used for the server and the Database was hosted on the cloud using MongoDB Atlas. We used various Google cloud APIs such as Google authentication, Cloud Natural Language for the sentiment analysis, and the app engine for deployment.
For the stock sentiment analysis, we used the Reddit and Twitter API to parse their respective social media platforms for instances where a stock/company was mentioned that instance was given a sentiment value via the IBM Watson Tone Analyzer.
For Reddit, popular subreddits such as r/wallstreetbets and r/pennystocks were parsed for the top 100 submissions. Each submission's title was compared to a list of 3600 stock tickers for a mention, and if found, then the submission's comment section was passed through the Tone Analyzer. Each comment was assigned a sentiment rating, the goal being to garner an average sentiment for the parent stock on a given day.
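The ticker-mention check described above can be sketched as a simple set intersection; the ticker list here is a tiny stand-in for the ~3600 real ones:

```python
# Sketch of the ticker-mention scan: compare each submission title against
# a set of known tickers. A real run would load all ~3600 tickers.
TICKERS = {"GME", "AMC", "TSLA", "AAPL"}

def mentioned_tickers(title: str) -> set[str]:
    """Return every known ticker that appears as a standalone word."""
    words = {w.strip("$.,!?").upper() for w in title.split()}
    return TICKERS & words
```

If the set is non-empty, the submission's comment section would then be passed through the Tone Analyzer.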
## Challenges we ran into
In terms of the chat application interface, the integration between this application and the main dashboard hub was a major issue, as it was necessary to pull forward the user's credentials without having them re-login to their account. This issue was resolved by producing a new chat application which didn't require credentials, just a username for the chatroom. We deployed this chat application independently of the main platform with a microservices architecture.
On the back-end sentiment analysis, we ran into the issue of efficiently storing the comments parsed for each stock as the program iterated over hundreds of posts, commonly collecting further data on an already parsed stock. This issue was resolved by locally generating an average sentiment for each post and assigning that to a dictionary key-value pair. If a sentiment score was generated for multiple posts, the average were added to the existing value.
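One plausible reading of that fix — folding each post's average into a running per-ticker mean rather than storing every comment — can be sketched as:

```python
# Sketch of the dictionary-based aggregation described above: keep a
# running average sentiment per ticker as posts are processed.
def update_average(averages: dict, ticker: str, post_avg: float) -> None:
    """averages maps ticker -> (mean, count); fold in one post's average."""
    mean, count = averages.get(ticker, (0.0, 0))
    averages[ticker] = ((mean * count + post_avg) / (count + 1), count + 1)
```

The function name and tuple layout are assumptions; the write-up only says averages were combined into an existing dictionary value.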
## Accomplishments that we're proud of
## What we learned
A few of the components that we were able to learn and touch base one were:
* REST APIs
* Reddit API
* React
* NodeJs
* Google-Cloud
* IBM Watson Tone Analyzer
* Web Sockets using Socket.io
* Google App Engine
## What's next for Stockhub
## Registered Domains:
* stockhub.online
* stockitup.online
* REST-api-inpeace.tech
* letslearntogether.online
## Beginner Hackers
This was the first Hackathon for 3/4 Hackers in our team
## Demo
The app is fully functional and deployed using the custom domain. Please feel free to try it out and let us know if you have any questions.
<http://www.stockhub.online/>
|
## Inspiration
## What it does
You can point your phone's camera at a checkers board and it will show you all of the legal moves and mark the best one.
## How we built it
We used Android Studio to develop an Android app that streams camera captures to a Python server that handles the computer vision and move computation.
## Challenges we ran into
Detection of the orientation of the checkers board and the location of the pieces.
## Accomplishments that we're proud of
We used markers to provide us easy to detect reference points which we used to infer the orientation of the board.
## What we learned
* Android Camera API
* Computer vision never works as robustly as you think it will.
## What's next for Augmented Checkers
* Better graphics and UI
* Other games
|
## Inspiration
To any financial institution, the most valuable asset to increase revenue, remain competitive and drive innovation, is aggregated **market** and **client** **data**. However, a lot of data and information is left behind due to lack of *structure*.
So we asked ourselves: *what is a source of unstructured data in the financial industry that would provide novel client insight and color to market research?* We chose to focus on phone call audio between a salesperson and client at an investment banking level. This source of unstructured data is more often than not completely gone after a call is ended, leaving valuable information completely underutilized.
## What it does
**Structerall** is a web application that translates phone call recordings to structured data for client querying, portfolio switching/management and novel client insight. **Structerall** displays text dialogue transcription from a phone call and sentiment analysis specific to each trade idea proposed in the call.
Instead of losing valuable client information, **Structerall** will aggregate this data, allowing the institution to leverage this underutilized data.
## How we built it
We worked with RevSpeech to transcribe call audio to text dialogue. From here, we connected to Microsoft Azure to conduct sentiment analysis on the trade ideas discussed, and displayed this analysis on our web app, deployed on Azure.
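The per-trade-idea display might boil down to something like the sketch below: given (idea, score) pairs back from the sentiment service, bucket each idea into a label for the web app. The score range, thresholds, and labels are all assumptions:

```python
# Sketch of per-idea sentiment labeling for the web app. Scores are
# assumed to lie in [-1, 1]; thresholds are illustrative, not Azure's.
def label_ideas(scored: list[tuple[str, float]]) -> dict[str, str]:
    def label(s: float) -> str:
        return "positive" if s > 0.2 else "negative" if s < -0.2 else "neutral"
    return {idea: label(score) for idea, score in scored}
```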
## Challenges we ran into
We had some trouble deploying our application on Azure. This was definitely a slow point for getting a minimum viable product on the table. Another challenge we faced was learning the domain to fit our product to, and what format/structure of data may be useful to our proposed end users.
## Accomplishments that we're proud of
We created a proof of concept solution to an issue that occurs across a multitude of domains; structuring call audio for data aggregation.
## What we learned
We learnt a lot about deploying web apps, server configurations, natural language processing and how to effectively delegate tasks among a team with diverse skill sets.
## What's next for Structurall
We also developed some machine learning algorithms/predictive analytics to model credit ratings of financial instruments. We built out a neural network to predict credit ratings of financial instruments and clustering techniques to map credit ratings independent of S&P and Moody's. We unfortunately were not able to showcase this model, but look forward to investigating this idea in the future.
|
partial
|
## Inspiration
We wanted to reduce global carbon footprint and pollution by optimizing waste management. 2019 was an incredible year for all environmental activities. We were inspired by the acts of 17-year old Greta Thunberg and how those acts created huge ripple effects across the world. With this passion for a greener world, synchronized with our technical knowledge, we created Recycle.space.
## What it does
Using modern tech, we provide users with an easy way to identify where to sort and dispose of their waste items simply by holding them up to a camera. This application will be especially useful when permanent fixtures are erected in malls, markets, and large public locations.
## How we built it
Using a Flask-based backend connected to the Google Vision API, we captured images and categorized which waste category each item belongs to. This was visualized using Reactstrap.
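The categorization step — reducing the Vision API's labels for a photo to one disposal bin — might look like this sketch. The label-to-bin table is an assumption for illustration, not the deployed mapping:

```python
# Sketch of reducing Vision API labels to a disposal bin. The keyword
# table is illustrative; a real deployment would be far more complete.
BIN_FOR = {"plastic": "recycling", "paper": "recycling",
           "food": "compost", "banana": "compost"}

def pick_bin(labels: list[str]) -> str:
    """Return the bin for the first label we recognize, else 'garbage'."""
    for label in labels:
        for keyword, bin_name in BIN_FOR.items():
            if keyword in label.lower():
                return bin_name
    return "garbage"
```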
## Challenges I ran into
* Deployment
* Categorization of food items using Google API
* Setting up Dev. Environment for a brand new laptop
* Selecting appropriate backend framework
* Parsing image files using React
* UI designing using Reactstrap
## Accomplishments that I'm proud of
* WE MADE IT!
We are thrilled to create such an incredible app that would make people's lives easier while helping improve the global environment.
## What I learned
* UI is difficult
* Picking a good tech stack is important
* Good version control practices are crucial
## What's next for Recycle.space
Deploying a scalable and finalized version of the product to the cloud and working with local companies to deliver this product to public places such as malls.
|
## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually.
For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste.
We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates.
## What it does
greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire.
Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.
## How we built it
We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.
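The recipe-recommendation idea behind the custom API could be sketched as ranking recipes by ingredient overlap; the recipe table here is a made-up stand-in, and the real API's ranking logic isn't described:

```python
# Sketch of ingredient-based recipe recommendation: rank recipes by how
# many of the selected ingredients they use. Recipes are illustrative.
RECIPES = {"fried rice": {"rice", "egg", "carrot"},
           "omelette": {"egg", "milk"},
           "salad": {"lettuce", "tomato", "carrot"}}

def recommend(ingredients: set[str], top: int = 2) -> list[str]:
    ranked = sorted(RECIPES, key=lambda r: len(RECIPES[r] & ingredients),
                    reverse=True)
    return ranked[:top]
```

This also naturally supports the goal of prioritizing items closer to expiration, by weighting those ingredients more heavily.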
## Challenges we ran into
With all of us being completely new to cloud computing it took us around 4 hours to just get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through.
When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it.
To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio.
## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.
## What we learned
For most of us this is the first application that we built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.
## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience.
These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app.
|
## Inspiration
Ever since the creation of department stores, if a customer needed assistance, he or she would have to look for a sales representative. This is often difficult and frustrating as people are constantly on the move. Some companies try to solve this problem by implementing stationary machines such as kiosks. However, these machines can only answer specific questions and they still have to be located. So we wanted to find a better way to feasibly connect customers with in-store employees.
## What it does
Utilizing NFC technology, customers can request help or find more information about their product with just a simple tap. By tapping their phone on the price tag, one's default browser will open up and the customer will be given two options:
**Product link** - Directly goes to the company's website of that specific product
**Request help** - Sends a request to the in store employees notifying them of where you tapped (eg. computers aisle 5)
The customer service representative can let you know that he or she is on the way, with the representative's face displayed on the customer's phone so they know who to look for. Once helped, the customer can provide feedback on the in-store employee. Using Azure Cognitive Services, the customer's feedback comments are translated to a score between 0 and 100, and this information is stored. All of this can be performed without an app, just tapp.
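Since Azure's sentiment analysis returns a score in [0, 1], mapping it onto the 0-100 scale is a simple rescaling; the sketch below shows only that step (the service call itself is omitted):

```python
# Sketch of turning an Azure sentiment score (assumed in [0, 1]) into
# the 0-100 feedback score described above.
def feedback_score(azure_sentiment: float) -> int:
    """Clamp to [0, 1] and rescale to an integer 0-100 score."""
    return round(max(0.0, min(1.0, azure_sentiment)) * 100)
```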
## How We Built It
The basis of the underlying simplicity for the customer is the simple "tap" into our webapp. The NFC sticker stores a URL and the product details - allowing users to bypass the need to install a 3rd party app.
The webapp is powered by node.js, running on a Azure VM. The staff members have tablets that connect directly to our database service, powered by Firebase, to see real-time changes.
The analytics obtained by our app is stored in Azure's SQL server. We use Azure Cognitive Services to identify sentiment level of the customer's feedback and is stored into the SQL server for future analysis for business applications.
## Challenges We Ran Into
Finding a way to record and display the data tapp provides.
## Accomplishments that We're Proud of
* Learning how to use Azure
* Having the prototype fully functional after 36 hours
* Creating something that is easy to use and feasible (no download required)
## What I learned
* How to integrate Azure technology into our apps
* Better understanding of NFC technology
* Setting up a full-stack server - from the frontend to the backend
* Using an nginx reverse proxy to host our two Node apps over HTTPS
## What's next for Tapp
* We are going to improve on the design and customization options for Tapp, and pitch it to multiple businesses
* We will be bringing this idea forward to an entrepreneurship program at UBC
|
winning
|
# DeskWeb
## Inspiration
We want more cool stuff showing up on our desktops.
Like, some instagram photos on your feed.
## What it does
You write a website, we live-update it to your desktop background.
DeskWeb provides an infrastructure that lets you put any website components on your desktop background, agnostic of frameworks/languages.
## How I built it
We built the OS X part first, which live-updates your desktop background by watching an image file for changes.
Then we built the frontend, which is a website that sends hot updates to the OS X part, which then updates the desktop background.
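The file-watching loop on the OS X side can be sketched in Python (the original uses Swift and Node.js); `update_wallpaper` here stands in for the actual OS X API call:

```python
# Sketch of polling an image file's mtime and triggering a wallpaper
# update on change. update_wallpaper is a stand-in for the Swift call.
import os
import time

def watch(path, update_wallpaper, poll=0.5, iterations=None):
    last = None
    i = 0
    while iterations is None or i < iterations:
        mtime = os.path.getmtime(path) if os.path.exists(path) else None
        if mtime is not None and mtime != last:
            update_wallpaper(path)
            last = mtime
        time.sleep(poll)
        i += 1
```

A production version would use a filesystem-event API rather than polling, but polling keeps the sketch portable.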
## Challenges I ran into
1. How to interface between Swift and Node.js
2. How to wait for JS to load and populate the DOM so we could take pictures in a headless browser.
3. Managing multiple servers which communicate with each other.
## Accomplishments that I'm proud of
1. Live updating your desktop using HTML/CSS/JS website!
2. Coding swift for the first time and successfully glued it to Node.js so we could use OS X API to live-update desktop background.
3. Providing an infrastructure to build your own desktop with unmatched flexbility.
## What I learned
1. OS X & Node.js
2. headless-browser scripting.
3. Image processing.
4. Messaging between servers.
## What's next for DeskWeb
* Polish it so other people can have cool desktops, too!
|
## Inspiration
Motivated by the all too well-known feeling of being broke, we wanted to develop a web application that would help university students tackle crippling student debt.
## What it does
*Bill Splitting*
For when you want to split the grocery bill with your housemates or only want to order **half** the pizza, and to track when your friend has "forgotten" to pay you back afterwards.
*Budgeting*
Manage the millions of bills and subscriptions that you may have forgotten about, complete with category sorting and shared expense tracking.
*Goal Setting*
Always wanted to go on that exotic trip or save for that car you always wanted? Calculate how much and for how long you'll need before your dream comes true, or you could go buy another drink...
## How we built it
Using Django framework, we were able to develop a web application to put together the various features in mind and the addition of bootstrap for CSS flair.
## Challenges we ran into
From simple bugs and Django-specific errors to database design decisions, we spent a little more time than we wanted to, but in the end we managed to get something presentable and something that we are proud of.
## Accomplishments that we're proud of
Designing a database where both the payee and the payer can see which incoming or outgoing debts have been paid or still need to be paid, with the payee able to confirm whether or not they received the money.
## What we learned
More Django knowledge and ways to avoid syntax errors for any future projects.
## What's next for Broker Than Me
Utilizing graphing APIs to give users a more visual sense of what their budget entails.
|
## Inspiration
For most, the journey of adopting a dog begins with a breeder or a trainer, but our journey began at a rescue shelter, where we met Laverne. Unable to work with a professional trainer due to Laverne's health, we had to take the process into our own hands. That's when we realized just how difficult and time-consuming dog training is. Not only that but just how hard it is to leave Laverne alone while we do other things. That's why we developed Train-O, to be a trainer, and companion to Laverne.
## What it does
Train-O uses OpenCV motion-tracking frameworks to identify and interact with users. It does this by following our furry friends and giving them tasks via audio cues such as "Sit!". Upon successful completion of the task, the good boy is automatically rewarded with a tasty treat, conditioning them to listen to future commands. This can be used to train your dog, associating actions with verbal commands, but it can also be used to keep your dog company, following them around and giving them treats.
## How we built it
We combined all of our knowledge of both hardware and software hacking to create Train-O. On the physical side, we used an Arduino and several motors (and a lot of wiring) to bring the cardboard body we built to life. Included are the treat-delivery mechanism used when a behavior is reinforced, an NPN transistor as an audio-module switch, an H-bridge as a motor driver, and a webcam for image recognition, among other features. On the software side, we trained our own image processing models using computer vision and developed our own algorithms to track and classify poses. We simplified the images down to key points of the body to reduce latency and provide a quicker response, which is important in reinforcing good behavior.
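As a rough illustration of the keypoint approach, a pose can be reduced to a handful of (x, y) landmarks and classified by joint geometry alone. The coordinates and the 120° threshold below are illustrative assumptions, not Train-O's actual model:

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c, each an (x, y) tuple."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def is_sitting(hip, knee, ankle, threshold=120.0):
    """A sharply bent knee (angle well below a straight 180°) suggests a 'sit'."""
    return joint_angle(hip, knee, ankle) < threshold

# Standing: hip, knee, and ankle roughly collinear, so the angle is near 180°.
print(is_sitting((0.0, 0.0), (0.0, 1.0), (0.0, 2.0)))  # False
# Sitting: the knee is bent to about 45°.
print(is_sitting((0.0, 0.0), (1.0, 1.0), (0.0, 1.0)))  # True
```

Working from a few such angles instead of full image frames is what keeps per-frame classification cheap enough for a quick treat-dispensing response.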
## Challenges we ran into
From the start, our team was set on using Google Cloud to classify poses. However, we struggled with overfitting because of our limited data source. This struggle caused us to explore other solutions, which led us to develop our own system for classifying images. Additionally, we had to find a way to provide audio cues for each action, which was challenging due to Arduino's limited audio capabilities.
## Accomplishments that we're proud of
Even the simplest parts of Train-O make us very proud, but witnessing the convergence of our hardware and software into one single innovation was extremely rewarding. We were able to create our own classification system and build an entire robot out of tiny mismatched parts.
## What we learned
We certainly learned a lot: AutoML, OpenCV, Pose Recognition, Arduino Circuits, Serial Communication, and a ton of other things. Some of our members experienced their first hackathon, and they have discovered a new passion, and plan to attend more in the future. An important lesson that was learned early is that communication is key when creating a product with this many parts. We developed the hardware and software in tandem so as to prevent issues when we combined them, and that proved incredibly useful for us. Between lessons in hardware, software, frameworks, tools, and teamwork, we learned quite a lot.
## What's next for Train-O
The best part of Train-O is yet to come; our whole team cannot wait to present Laverne with their new friend and watch as the idea she gave us becomes a reality. Hopefully, we will continue to enhance the algorithms that make Train-O possible and eventually maybe even create another version, but first, we have to get feedback from the most important contributor to the project, Laverne.
|
losing
|
## Inspiration 💥
Let's be honest... Presentations can be super boring to watch—*and* to present.
But, what if you could bring your biggest ideas to life in a VR world that literally puts you *in* the PowerPoint? Step beyond slides and into the future with SuperStage!
## What it does 🌟
SuperStage works in 3 simple steps:
1. Export any slideshow from PowerPoint, Google Slides, etc. as a series of images and import them into SuperStage.
2. Join your work/team/school meeting from your everyday video conferencing software (Zoom, Google Meet, etc.).
3. Instead of screen-sharing your PowerPoint window, screen-share your SuperStage window!
And just like that, your audience can watch your presentation as if you were Tim Cook in an Apple Keynote. You see a VR environment that feels exactly like standing up and presenting in real life, and the audience sees a 2-dimensional, front-row seat video of you on stage. It’s simple and only requires the presenter to own a VR headset.
Intuition was our goal when designing SuperStage: instead of using a physical laser pointer and remote, we used full-hand tracking to allow you to be the wizard that you are, pointing out content and flicking through your slides like magic. You can even use your hands to trigger special events to spice up your presentation! Make a fist with one hand to switch between 3D and 2D presenting modes, and make two thumbs-up to summon an epic fireworks display. Welcome to the next dimension of presentations!
## How we built it 🛠️
SuperStage was built using Unity 2022.3 and the C# programming language. A Meta Quest 2 headset was the hardware portion of the hack—we used the 4 external cameras on the front to capture hand movements and poses. We built our UI/UX using ray interactables in Unity to be able to flick through slides from a distance.
## Challenges we ran into 🌀
* 2-camera system. SuperStage is unique since we have to present 2 different views—one for the presenter and one for the audience. Some objects and UI in our scene must be occluded from view depending on the camera.
* Dynamic, automatic camera movement, which locked onto the player when not standing in front of a slide and balanced both slide + player when they were in front of a slide.
To build these features, we used multiple rendering layers in Unity where we could hide objects from one camera and make them visible to the other. We also wrote scripts to smoothly interpolate the camera between points and track the Quest position at all times.
## Accomplishments that we're proud of 🎊
* We’re super proud of our hand pose detection and gestures: it really feels so cool to “pull” the camera in with your hands to fullscreen your slides.
* We’re also proud of how SuperStage uses the extra dimension of VR to let you do things that aren’t possible on a laptop: showing and manipulating 3D models with your hands, and immersing the audience in a different 3D environment depending on the slide. These things add so much to the watching experience and we hope you find them cool!
## What we learned 🧠
Justin: I found learning about hand pose detection so interesting. Reading documentation and even anatomy diagrams about terms like finger abduction, opposition, etc. was like doing a science fair project.
Lily: The camera system! Learning how to run two non-conflicting cameras at the same time was super cool. The moment that we first made the switch from 3D -> 2D using a hand gesture was insane to see actually working.
Carolyn: I had a fun time learning to make cool 3D visuals!! I learned so much from building the background environment and figuring out how to create an awesome firework animation—especially because this was my first time working with Unity and C#! I also grew an even deeper appreciation for the power of caffeine… but let’s not talk about that part :)
## What's next for SuperStage ➡️
Dynamically generating presentation boards to spawn as the presenter paces the room
Providing customizable avatars to add a more personal touch to SuperStage
Adding a lip-sync feature that takes volume metrics from the Oculus headset to generate mouth animations
|
## Inspiration
Public speaking is greatly feared by many, yet it is a part of life that most of us have to go through. Despite this, effective ways to prepare for presentations are *greatly limited*. Practicing with others is good, but that requires someone willing to listen to you for potentially hours. Talking in front of a mirror could work, but it does not live up to the real environment of a public speaker. As a result, public speaking is dreaded not only for the act itself, but also because it's *difficult to feel ready*. If there were an efficient way of ensuring you aced a presentation, the negative connotation associated with presentations would no longer exist. That is why we have created Speech Simulator, a VR web application used to practice public speaking. With it, we hope to alleviate the stress that comes with speaking in front of others.
## What it does
Speech Simulator is an easy-to-use VR web application. Simply log in with Discord, import your script into the site from any device, then put on your VR headset to enter a 3D classroom, a common setting for public speaking. From there, you are able to practice speaking. Behind you is a board containing your script, split into slides, emulating a real PowerPoint-style presentation. Once you have run through your script, you may exit VR, where you will find results based on the application's recording of your presentation. From your talking speed to how many filler words you said, Speech Simulator provides stats on your performance as well as a summary of what you did well and how you can improve. Presentations can be attempted again and are saved to our database. Additionally, any adjustments to the presentation templates can be made using our editing feature.
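The post-session stats could be computed along these lines; the filler-word list and metric names here are simplified assumptions, not the app's real analysis:

```python
import re

# Assumed single-word filler list; the real app's list may differ.
FILLERS = {"um", "uh", "like", "so"}

def presentation_stats(transcript: str, duration_seconds: float) -> dict:
    """Rough speaking stats of the kind Speech Simulator reports."""
    words = re.findall(r"[a-z']+", transcript.lower())
    filler_count = sum(1 for w in words if w in FILLERS)
    wpm = len(words) / (duration_seconds / 60.0)  # words per minute
    return {"words": len(words), "fillers": filler_count, "wpm": round(wpm, 1)}

stats = presentation_stats("Um so today I will, um, talk about frogs", 10.0)
print(stats)  # {'words': 9, 'fillers': 3, 'wpm': 54.0}
```

Running this over the speech-to-text transcript after each attempt is enough to show trends across repeated practice runs.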
## How we built it
Our project was created primarily using the T3 stack, which uses **Next.js** as our full-stack React framework. The frontend uses **React** and **Tailwind CSS** for component state and styling. The backend utilizes **NextAuth.js** for login and user authentication and **Prisma** as our ORM. Type safety across the whole application was ensured using **tRPC**, **Zod**, and **TypeScript**. For the VR aspect of our project, we used **React Three Fiber** for rendering **Three.js** objects, **React XR** for VR interactions, and **React Speech Recognition** for transcribing speech to text. The server is hosted on Vercel and the database on **CockroachDB**.
## Challenges we ran into
Despite completing the project, we ran into numerous challenges during the hackathon. The largest problem was the connection between the web app on the computer and the VR headset. As the two were separate web clients, it was very challenging to communicate our site's workflow between the devices. For example, if a user finished their presentation in VR and wanted to view the results on their computer, how would this be accomplished without the user manually refreshing the page? After weighing WebSockets against polling, we went with polling plus a queuing system, which allowed each client to know what to display. We decided to use polling because it enables a serverless deploy, and we concluded that we did not have enough time to set up WebSockets. Another challenge was the 3D configuration of the application. As none of us had real experience with 3D web applications, it was a very daunting task to work with meshes and various geometry. However, after a lot of trial and error, we were able to manage a VR solution for our application.
## What we learned
This hackathon provided us with a great amount of experience and lessons. Although each of us learned a lot on the technological aspect of this hackathon, there were many other takeaways during this weekend. As this was most of our group's first 24 hour hackathon, we were able to learn to manage our time effectively in a day's span. With a small time limit and semi large project, this hackathon also improved our communication skills and overall coherence of our team. However, we did not just learn from our own experiences, but also from others. Viewing everyone's creations gave us insight on what makes a project meaningful, and we gained a lot from looking at other hacker's projects and their presentations. Overall, this event provided us with an invaluable set of new skills and perspective.
## What's next for VR Speech Simulator
There are a ton of ways that we believe can improve Speech Simulator. The first and potentially most important change is the appearance of our VR setting. As this was our first project involving 3D rendering, we had difficulty adding colour to our classroom. This reduced the immersion that we originally hoped for, so improving our 3D environment would allow the user to more accurately practice. Furthermore, as public speaking infers speaking in front of others, large improvements can be made by adding human models into VR. On the other hand, we also believe that we can improve Speech Simulator by adding more functionality to the feedback it provides to the user. From hand gestures to tone of voice, there are so many ways of differentiating the quality of a presentation that could be added to our application. In the future, we hope to add these new features and further elevate Speech Simulator.
|
## Inspiration + Impact
This project was inspired by a team member’s sibling who underwent cancer treatment, which limited his ability to communicate. American Sign Language (ASL) enabled the family to connect despite extenuating circumstances. Sign language has the power of bringing together hearing, hard-of-hearing, and deaf individuals. The language barrier between ASL-speakers and non-ASL speakers can lead to misdiagnosis in a medical setting, improper incarceration, lack of attention during an emergency situation, and inability to make connections with others. With just the power of sticky notes and a device, learning possibilities are endless.
## Business Validation
According to the National Institute on Deafness and Other Communication Disorders, 13% of people aged 12 or older have hearing loss in both ears. But the Deaf and hard-of-hearing community are not the only ones who can learn ASL. In a survey conducted of college students, 90% desired to learn American Sign Language but felt intimidated about learning a new language. College students are the next generation of doctors, lawyers, technologists. They are our primary target demographic because they are most open to learning and are about to enter the workforce and independent life. In user interviews, people described learning a new language as “time-consuming,” “hard to retain without practice,” and “beautiful because it helps people communicate and form strong bonds.” This age group enjoys playing brain stimulating games like Wordle and Trivia.
## What StickySign Does (Differently)
StickySign makes language stick, one sign at a time. Through gamification, learning ASL becomes more accessible and fun. Signers learn key terms and become energized to practice ASL outside their community. StickySign differentiates itself from education-giants Duolingo and Quizlet. Duolingo, responsible for 60% of the language learning app shares, does not offer sign language. Quizlet, a leading study tool, does not offer unique, entertaining learning games; it is traditional with matching and flashcards. StickySign can be played anywhere. In a classroom -- yes. On your window -- for sure. On someone’s back -- if you want. Where and however you want to set up your sticky notes, you can. Levels and playing environments are endless.
## How We Built It
Hack Harvard’s Control, Alt, Create theme inspired us to extend StickyAR, a winning project from HackMIT’19, and develop it from being a color-based game to making American Sign Language more accessible to learn. We altered and added to their algorithm to make it respond to questions. Using OpenCV’s Contour Recognition software, StickySign fits images of signs on sticky notes and recognizes hand shapes.
## Challenges We Ran Into
We ran into a few challenges (learning opportunities) which helped us grow as developers and spark new interests in technology. Our first challenge was learning the StickyAR code and ideating how to remake it with our use cases in mind. The other roadblock was getting the code and external hardware (projector + video camera) to work in tandem.
## Accomplishments We Are Proud Of
We are proud of our teamwork and continuous encouragement, despite facing several challenges. Resiliency, hard work, camaraderie, and coffee carried us to the end. Being able to create an impactful web app that makes ASL more accessible is the biggest reward. It is heartwarming to know our project has the potential to break down communication barriers and empower others to feel connected to American Sign Language.
## What We Learned
Before two weeks ago, none of us attended an in person hackathon; by the end of this event we were amazed by how our minds expanded from all the learning. Researching the psychology of gamification and how this strategy acts as an intrinsic motivator was exciting. We learned about the possibilities and limitations of OpenCV through trial and error, YouTube videos, and mentorship. Jaiden was thrilled to teach the group American Sign Language from her experience interacting both inside her studies and within the Deaf community; now we can fingerspell to each other. Completing this project gave us an appreciation for being scrappy and not afraid to try new things.
## What’s Next for StickySign
StickySign was created with the intention of making American Sign Language more accessible. In future iterations, users can graduate from fingerspelling and learn signs for words. Additionally, with computer vision, the device can give the user feedback on their handshape via a machine-learning-trained algorithm. We would conduct a proof-of-concept at the State University of New York at Binghamton, which has a rich ASL community and need for ASL speakers. After incorporating this feedback and debugging, we foresee expanding StickySign to another State University of New York campus and having the university system adopt our application through annual subscriptions.
We have a vision of partnering with Post-It Note to donate supplies (sticky notes and devices) to communities that are under-resourced. This strategic partnership will not only help our application gain traction but also make it visible to those who may not have been able to learn American Sign Language. Further support and application visibility can come from American Sign Language speaking influencers like Nyle DiMarco and Marlee Matlin using their platform to raise awareness for StickySign.
* Tell us how your experience with MLH’s sponsor technology went.
Give us some feedback about any of the MLH sponsored technology you interacted with at this hackathon. Make sure you mention what tech you're reviewing (e.g. Twilio, GitHub, DeSo etc.).
GitHub allowed us to collaborate on our code with the comfort of our individual laptops. Its open-source capabilities enabled us to expand the StickyAR code. The version-control feature let our team go back to previous edits and debug when necessary.
|
winning
|
## 1. Problem Statement
Design of an AI-based webpage that allows users to use a machine-learning algorithm to determine whether their sitting posture is correct or not.
## 2. Motivation & Objectives
Many things moved online as a result of the Covid pandemic, including office work and student learning. Sitting in front of a computer or laptop incorrectly might cause a variety of health problems. To avoid this, we designed a system that can detect sitting posture and tell users whether their sitting position is good or bad.
*Objectives :*
To design and develop a posture detecting system.
To integrate this system with a web page so that it is accessible to everyone.
To assist users in avoiding health issues caused by improper posture.
## 3. Project Description
*Inspiration:* We are all spending more and more hours at work, sitting at our desks and using our laptops for up to 10 hours each day. Making sure that you sit correctly at your workstation is therefore very important. Setting up your laptop incorrectly negatively affects your posture. Poor posture is known to have a very negative effect on mood, and it can also lead to serious health issues like
1. Back, Neck, and Shoulder Pain
2. Poor Circulation
3. Impaired Lung Function
4. Misaligned Spine
5. Change in Curvature of the Spine
And many more
*How it works:* When you visit the webpage, you will be asked to grant camera access. This AI-based web app continuously captures video of the person seated in front of the laptop using the webcam. Our AI algorithm then detects body posture, and if the detected position is unsatisfactory, the web app alerts the user. The alert blurs the current window and plays an annoying sound, stopping the user from doing anything until they return to an ergonomically correct posture.
*Accomplishments that we're proud of:* Successfully completed the frontend and backend in a given period of time.
*What's next for Fix-Posture:* Many improvements can be done in Fix-Posture. Currently, we can use this only on a single browser tab. We are planning to convert the web app into a chrome extension so that it can be used on any tab/window.
## 4. How to use the project
1. To access the web app, go to: <https://pavankhots17.github.io/FixPosture.github.io/>
2. Allow the webpage to access your webcam.
3. Simply click the 'Instant Try!' button if you want to use the web app straight away.
4. Alternatively, scroll down the main page to discover how this system works in general.
5. You may quickly learn more about good and bad postures by going to the Read Me! section.
6. Simply click on the tabs About Us and Contact Us provided in the navigation bar and footer of the website to learn more about us or to contact us.
|
## Inspiration
FroggyFlow is inspired by the constant nuisance of bad posture and back and shoulder pain, which hampers productivity. We want to transform the way people sit in front of their laptops, desktops, or other devices while improving their productivity and health through a series of connected activities and interactions with our web application.
## What it does
Our technology serves users through a web app where they can sign in or create an account associated with their Gmail, powered by Auth0, for a smooth user experience. Our server records each user's log-in information and other activity on the website, including logged study-related activities during each session, and stores it in MongoDB Atlas. With security and an efficient run-time, our website lets users experience the features of one of the best productivity apps as well as a wellness app. Users are asked to use our hardware (a gyroscope and accelerometer on an Arduino) if they wish to later analyze their posture trend from the ongoing productive session. Users can enter a productivity session with an aesthetic froggy background and a non-intrusive countdown timer, and can either end a session voluntarily or complete it and enter a short rest break from their work.
• During the session, while the user wears the clipped-on 3D-printed case with the hardware inside, a notification system linked to their account reminds them IN REAL TIME to sit up well. How does it work? We created training data labeled as good or bad posture, then fed these feature values (xyz angle and acceleration values from the gyroscope) into a RandomForestClassifier model along with their respective labels for training. The model achieved an accuracy score of 98% against the test data.
• When a user ends their productive session, they can view their posture trend via a MATLAB graphing tool, as well as browse a time-stamped posture progress log of any past session associated with their account via MongoDB Atlas.
• When a user completes a productive session, they are automatically transferred to a rest period, where a webcam prompts them to do 5 jumping jacks with a system counter. The webcam feature is built on an OpenCV framework that analyzes the user's body motion in real time, tracking the completeness of each jumping jack to avoid accidents from improper form.
• After the rest period, users can start a new study/productive session or view their profile (with analyzed posture trends from previous sessions).
Overall, the web app is fully-responsive with many interactive features allowing users to enjoy the smoothest experience while improving their posture and productivity!
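The gyro-based posture classifier described above can be sketched with scikit-learn; the synthetic readings below stand in for the real labeled gyroscope data, and the feature layout is an illustrative assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the gyro/accelerometer features (e.g. pitch, roll,
# yaw, acceleration magnitude): "good" posture is upright, "bad" is slouched.
good = rng.normal([0.0, 0.0, 0.0, 1.0], 0.15, size=(200, 4))
bad = rng.normal([25.0, 10.0, 0.0, 1.0], 0.15, size=(200, 4))  # pitched forward
X = np.vstack([good, bad])
y = np.array([0] * 200 + [1] * 200)  # 0 = good posture, 1 = bad posture

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
score = clf.score(X_test, y_test)
print(score)
```

At inference time, each incoming gyroscope sample is passed to `clf.predict`, and a "bad" prediction triggers the real-time notification.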
## How we built it
• Hardware Integration: Utilized gyroscopes and accelerometers with Arduino to monitor and analyze user posture. In addition, we used 3D printing to create customized cases for the hardware.
• Data Analysis: Employed MATLAB to generate and visualize posture trend graphs from collected data.
• Data Management: Used MongoDB for logging session data securely.
• User Authentication: Implemented Auth0 for secure user account management.
• Real-Time Monitoring: Integrated AI and machine learning with OpenCV to track body motion and ensure accurate exercise counts. Frameworks used are OpenCV (mainly MediaPipe Pose), NumPy, and TensorFlow.
• Web Interface: Developed a real-time counter and feedback system displayed on the webcam feed for immediate exercise validation.
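The rest-break jumping-jack counter boils down to a small state machine over per-frame landmark heights. This is a simplified sketch, not the actual MediaPipe-based counter; the 0.05 margin is an illustrative guess:

```python
def count_jumping_jacks(wrist_ys, shoulder_ys, margin=0.05):
    """
    Count completed jumping jacks from per-frame landmark heights.

    MediaPipe-style normalized image coordinates are assumed: y grows
    downward, so hands raised overhead means wrist_y < shoulder_y.
    A rep completes on the transition arms-up -> arms-down, so a
    half-hearted wave that never clears the shoulders is not counted.
    """
    reps, arms_up = 0, False
    for wrist_y, shoulder_y in zip(wrist_ys, shoulder_ys):
        if wrist_y < shoulder_y - margin:                 # hands clearly overhead
            arms_up = True
        elif arms_up and wrist_y > shoulder_y + margin:   # back down: one rep
            reps += 1
            arms_up = False
    return reps

# Two full up-down cycles, with shoulders fixed at y = 0.5.
wrists = [0.8, 0.3, 0.8, 0.3, 0.8]
print(count_jumping_jacks(wrists, [0.5] * len(wrists)))  # 2
```

The hysteresis margin on both thresholds is what keeps landmark jitter near shoulder height from registering phantom reps.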
## Challenges we ran into
• Integrating the components (ML models) with the front end; e.g., the webcam feature in the rest session took very long to configure for the web app due to different API requirements.
• We had to make sure the gyro-data ML model was accurate enough to detect bad posture in real time and send a notification to the user.
• Making the hardware work well with our software components. 3D printing alone took many tries, from designing the hardware to prototyping several versions that fit our streamlined user-experience design.
• Endpoint Not Responding: The server endpoint might not respond due to incorrect URL paths or errors in routing.
• Our project has A LOT of components, and a big challenge was to finish them in the span of 36 hours. We had to downsize after ideation since we had so many ideas! Getting these components to work together was also a challenge since we have multiple backends, and they may require different versions of Python, for example.
## Accomplishments that we're proud of
• Getting the gyroscope data trained and implementing a machine learning model to determine whether posture is good or bad, notifying the user when their posture is bad
• Successfully implemented MediaPipe Pose, which uses a webcam to monitor and count the number of jumping jacks you do during your break (they have to be proper jumping jacks!). Staying active increases concentration!
• Logged all the posture data (how good your posture was during your study session) and graphed it in MATLAB after each session
• Logged data for each user from previous sessions and displayed it on the profile page
• Implementing FastAPI to connect all the components to the front end, making sure they work seamlessly
## What we learned
• Implementing an ML model in a full-stack project, from training to testing against real-time data.
• A lot about ML and OpenCV for data processing and decision making
• Learned how to log user data into MongoDB Atlas
• Manage users with Auth0, having profiles for each of them
• Graphing and data processing with Matlab
## What's next for FroggyFlow
• Make an online domain for the web application
• Play music during your study session
• More options for different activities and games for the resting time
• Ability to change backgrounds during study session
• Fix webcam with better interface design
|
## Inspiration
We built an AI-powered physical trainer/therapist that provides real-time feedback and companionship as you exercise.
With the rise of digitization, people are spending more time indoors, leading to increasing trends of obesity and inactivity. We wanted to make it easier for people to get into health and fitness, ultimately improving lives by combating the downsides of these trends. Our team built an AI-powered personal trainer that provides real-time feedback on exercise form using computer vision to analyze body movements. By leveraging state-of-the-art technologies, we aim to bring accessible and personalized fitness coaching to those who might feel isolated or have busy schedules, encouraging a more active lifestyle where getting started can otherwise be intimidating.
## What it does
Our AI personal trainer is a web application compatible with laptops equipped with webcams, designed to lower the barriers to fitness. When a user performs an exercise, the AI analyzes their movements in real-time using a pre-trained deep learning model. It provides immediate feedback in both textual and visual formats, correcting form and offering tips for improvement. The system tracks progress over time, offering personalized workout recommendations and gradually increasing difficulty based on performance. With voice guidance included, users receive tailored fitness coaching from anywhere, empowering them to stay consistent in their journey and helping to combat inactivity and lower the barriers of entry to the great world of fitness.
## How we built it
To create a solution that makes fitness more approachable, we focused on three main components:
Computer Vision Model: We utilized MediaPipe and its Pose Landmarks to detect and analyze users' body movements during exercises. MediaPipe's lightweight framework allowed us to efficiently assess posture and angles in real-time, which is crucial for providing immediate form correction and ensuring effective workouts.
Audio Interface: We initially planned to integrate OpenAI’s real-time API for seamless text-to-speech and speech-to-text capabilities, enhancing user interaction. However, due to time constraints with the newly released documentation, we implemented a hybrid solution using the Vosk API for speech recognition. While this approach introduced slightly higher latency, it enabled us to provide real-time auditory feedback, making the experience more engaging and accessible.
User Interface: The front end was built using React with JavaScript for a responsive and intuitive design. The backend, developed in Flask with Python, manages communication between the AI model, audio interface, and user data. This setup allows the machine learning models to run efficiently, providing smooth real-time feedback without the need for powerful hardware.
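A minimal sketch of how a Flask backend like this might relay form feedback to the React client; the route name and the knee-angle rule are hypothetical, not the project's actual API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical route: the client posts a knee angle measured from the pose
# landmarks and receives a short coaching cue back.
@app.route("/api/feedback", methods=["POST"])
def feedback():
    angle = float(request.get_json()["knee_angle"])
    cue = "good depth" if angle <= 100 else "bend your knees more"
    return jsonify({"cue": cue})

if __name__ == "__main__":
    # Exercise the route with Flask's built-in test client.
    client = app.test_client()
    resp = client.post("/api/feedback", json={"knee_angle": 130})
    print(resp.get_json())  # {'cue': 'bend your knees more'}
```

Keeping the pose analysis behind a small JSON endpoint like this is what lets the React frontend stay thin while the Python side runs the ML models.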
## Challenges we ran into
One of the major challenges was integrating the real-time audio interface. We initially planned to use OpenAI’s real-time API, but due to the recent release of the documentation, we didn’t have enough time to fully implement it. This led us to use the Vosk API in conjunction with our system, which introduced increased codebase complexity in handling real-time feedback.
## Accomplishments that we're proud of
We're proud to have developed a functional AI personal trainer that combines computer vision and audio feedback to lower the barriers to fitness. Despite technical hurdles, we created a platform that can help people improve their health by making professional fitness guidance more accessible. Our application runs smoothly on various devices, making it easier for people to incorporate exercise into their daily lives and address the challenges of obesity and inactivity.
## What we learned
Through this project, we learned that sometimes you need to take a "back door" approach when the original plan doesn’t go as expected. Our experience with OpenAI’s real-time API taught us that even with exciting new technologies, there can be limitations or time constraints that require alternative solutions. In this case, we had to pivot to using the Vosk API alongside our real-time system, which, while not ideal, allowed us to continue forward. This experience reinforced the importance of flexibility and problem-solving when working on complex, innovative projects.
## What's next for AI Personal Trainer
Looking ahead, we plan to push the limits of the OpenAI real-time API to enhance performance and reduce latency, further improving the user experience. We aim to expand our exercise library and refine our feedback mechanisms to cater to users of all fitness levels. Developing a mobile app is also on our roadmap, increasing accessibility and convenience. Ultimately, we hope to collaborate with fitness professionals to validate and enhance our AI personal trainer, making it a reliable tool that encourages more people to lead healthier, active lives.
|
losing
|
## Inspiration
We were inspired to work on this project to help address the miscommunication that's all too present in our world today. Leaders, professors, and authority figures of all shapes and sizes throughout the ages have had to gauge their audience's sentiment for one reason or another while they garnered support or searched for a solution. A single individual simply does not have the time or ability to interview each person in the audience to find out what they all thought, and is usually left to guess how they felt from cheers of support or boos of hate. Our goal was to give those overarching figures another tool to observe their audience and ultimately gain a better understanding of their ideas, feelings, and sentiment.
## What it does
Sentithink attempts to solve this problem by using the concept of the Internet of Things to track, categorize, and visualize the audible sentiment of a geographically large group of people. Using multiple microphones dispersed throughout an area, Sentithink records one-minute snippets of sound, transcribes them to text, parses out keywords, and tracks their frequencies over time. These frequencies, and their associated sentiments, are then visualized so that organizers can see what's trending and how people are interpreting the event.
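The keyword-frequency step described above can be sketched roughly as follows. This is a minimal, stdlib-only illustration; the stopword list and snippet format are assumptions for the example, not the project's actual code:

```python
from collections import Counter

# A tiny stopword list for illustration; a real deployment would use a fuller set.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "are", "it", "we"}

def keyword_frequencies(snippets):
    """Count keyword occurrences across transcribed one-minute snippets."""
    counts = Counter()
    for text in snippets:
        for word in text.lower().split():
            word = word.strip(".,!?")
            if word and word not in STOPWORDS:
                counts[word] += 1
    return counts

freqs = keyword_frequencies(["The crowd loves the new policy", "Policy changes sound great"])
# 'policy' appears once in each snippet
```

Per-snippet counts like these can then be timestamped and stored, so the frontend can plot frequency over time.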
## How we built it
Sentithink was built in three main parts: microphone client-side code, a Web API backend in Azure, and a front-end visualization built in JavaScript.
Client Side: Microphones on machines running Python script to send snippets to Azure API endpoint
Backend: Microsoft Azure Function API to record/produce results from speech to text
Frontend: Javascript utilization of d3.js to show relative frequencies
## Challenges we ran into
We weren't exactly sure how to use the microphones on the client side beforehand, but Google turned out to be a great resource. We did have some prior experience with Azure Cloud services, but most of our trouble came from trying to visualize our data in JavaScript at the very end.
## Accomplishments that we're proud of
We were able to set up all Azure aspects of our program:
* SQL Database
* Azure Function Web App
* Front end visualization
* Proof of concept demo
* Lightweight clientside app
## What we learned
We got a better understanding of Azure web services, JavaScript, and Python in a connected, API-driven environment.
## What's next for Sentithink
Get actual IoT devices to scale the product and test it over a large area.
|
## Inspiration
Going home for winter break and seeing friends was a great time, but throughout all the banter, I realized that our conversations often took a darker turn, and I worried for my friends' mental health. During the school year, it was a busy time and I wasn't able to stay in touch with my friends as well as I had wanted to. After this realization, I also began to question my own mental health - was I neglecting my health?
We were inspired to build a web app that would increase awareness about how friends were doing mentally and that could also provide analytics for ourselves. We thought there was good potential in text, given the massive volume of digital communication and how digital messages can often reveal feelings that may be hidden in everyday communication.
## What it does
It parses user text input into a sentiment score, using Microsoft Azure, where 0 is very negative and 1 is very positive. Over a day, it averages the input for a specific user and logs the text files. Friends of the user can view the weekly emoji graphs, and receive a text message if it seems like the user is going through a rough spot and needs someone to talk to.
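The daily averaging and alerting described above can be sketched in a few lines. The 0.3 alert threshold here is illustrative, not the app's actual cutoff:

```python
def daily_average(scores):
    """Average a day's sentiment scores (0 = very negative, 1 = very positive)."""
    if not scores:
        return None
    return sum(scores) / len(scores)

def needs_checkin(avg, threshold=0.3):
    """Flag a rough day so a friend can be texted; threshold is an assumption."""
    return avg is not None and avg < threshold

avg = daily_average([0.15, 0.2, 0.4])  # one user's scores for the day
alert = needs_checkin(avg)             # True: below the illustrative threshold
```

In the real pipeline the scores come from Azure's sentiment analysis, and a `True` flag would trigger a Twilio text to the user's friends.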
We also have an emoji map for displaying sentiments of senders around the world, allowing us to see events that invoke emotional responses in a particular area. We hope that this is useful data for increased global and cultural awareness.
## How we built it
We used React.js for the front-end and used Flask with Python for the backend. We used Azure for the sentiment analysis and Twillio to send text messages.
## Challenges we ran into
One of the biggest bottlenecks was connecting our front-end and back-end. Additionally, we had security concerns regarding cross origin resource sharing that made it much more difficult to interface with all the different databases. We had too many APIs that we wanted to connect that made things difficult too.
## Accomplishments that we're proud of
We were able to create a full-stack app web app on our own, despite the challenges. Some of the members of the team had never worked on front-end before and it was a great, fun experience learning how to use JS, Flask, and HTML.
## What we learned
We learned about full stack web app development and the different languages required. We also became more aware of the moving parts behind a web app, how they communicate with each other, and the challenges associated with that.
## What's next for ment.ally
Our original idea was actually a Chrome extension that could detect emotionally charged messages the user types in real-time and offer alternatives in an attempt to reduce miscommunication potentially hurtful to both sides. We would like to build off of our existing sentiment analysis capabilities to do this. Our next step would be to set up a way to parse what the user is typing and underline any overly strong phrases (similar to how Word underlines misspelt words in red). Then we could set up a database that maps some common emotionally charged phrases to milder ones and offers those as suggestions, possibly along with the reason (e.g. "words in this sentence can trigger feelings of anger!").
|
## Inspiration
With more people working at home due to the pandemic, we felt empowered to improve healthcare at an individual level. Existing solutions for posture detection are expensive, lack cross-platform support, and often require additional device purchases. We sought to remedy these issues by creating Upright.
## What it does
Upright uses your laptop's camera to analyze and help you improve your posture. Register and calibrate the system in less than two minutes, and simply keep Upright open in the background and continue working. Upright will notify you if you begin to slouch so you can correct it. Upright also has the Upright companion iOS app to view your daily metrics.
Some notable features include:
* Smart slouch detection with ML
* Little overhead - get started in < 2 min
* Native notifications on any platform
* Progress tracking with an iOS companion app
## How we built it
We created Upright’s desktop app using Electron.js, an npm package used to develop cross-platform apps. We created the individual pages for the app using HTML, CSS, and client-side JavaScript. For the onboarding screens, users fill out an HTML form which signs them in using Firebase Authentication and uploads information such as their name and preferences to Firestore. This data is also persisted locally using NeDB, a local JavaScript database. The menu bar addition incorporates a camera through a MediaDevices web API, which gives us frames of the user’s posture. Using Tensorflow’s PoseNet model, we analyzed these frames to determine if the user is slouching and if so, by how much. The app sends a desktop notification to alert the user about their posture and also uploads this data to Firestore. Lastly, our SwiftUI-based iOS app pulls this data to display metrics and graphs for the user about their posture over time.
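One common way to turn pose keypoints into a slouch signal is to compare the current nose-to-shoulder height against a calibrated upright baseline. The sketch below illustrates that idea; the keypoint format and the 0.8 ratio threshold are assumptions for the example, not Upright's actual model logic:

```python
def shoulder_midpoint(left, right):
    """Midpoint of the two shoulder keypoints, as (x, y) pixel coordinates."""
    return ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)

def slouch_ratio(nose, left_shoulder, right_shoulder, baseline_height):
    """Current nose height above the shoulder line, relative to calibration."""
    mid = shoulder_midpoint(left_shoulder, right_shoulder)
    current_height = mid[1] - nose[1]  # image y grows downward
    return current_height / baseline_height

def is_slouching(ratio, threshold=0.8):
    """Flag a frame as slouching when posture drops below 80% of upright."""
    return ratio < threshold

# Calibrated upright: nose 100 px above the shoulder line. Now only 70 px.
r = slouch_ratio((200, 130), (150, 200), (250, 200), baseline_height=100.0)
```

A per-frame flag like this would then be smoothed over several frames before firing a desktop notification, to avoid alerting on momentary movement.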
## Challenges we ran into
We faced difficulties when managing data throughout the platform, from the desktop app backend to the frontend pages to the iOS app. As this was our first time using Electron, our team spent a lot of time discovering ways to pass data safely and efficiently, discussing the pros and cons of different solutions. Another significant challenge was performing the machine learning on the video frames. The task of taking in a stream of camera frames and outputting them into slouching percentage values was quite demanding, but we were able to overcome several bugs and obstacles along the way to create the final product.
## Accomplishments that we're proud of
We’re proud that we’ve come up with a seamless and beautiful design that takes less than a minute to set up. The slouch detection model is also quite accurate, something we’re proud of. Overall, we’ve built a robust system that we believe outperforms other solutions using just the webcam of your computer, while also integrating features to track slouching data on your mobile device.
## What we learned
This project taught us how to combine multiple complicated moving pieces into one application. Specifically, we learned how to make a native desktop application with features like notifications built-in using Electron. We also learned how to connect our backend posture data with Firestore to relay information from our Electron application to our iOS app. Lastly, we learned how to integrate a machine learning model in Tensorflow within our Electron application.
## What's next for Upright
The next step is improving the posture detection model with more training data, tailored for each user. While the posture detection model we currently use is pretty accurate, by using more custom-tailored training data, it would take Upright to the next level. Another step for Upright would be adding Android integration for our mobile app, which currently only supports iOS as of now.
|
losing
|
## Inspiration
Every restaurant ever has the most concise and minimalistic menus. We always found ourselves having to manually research the food and drinks provided, as the only thing we see is text. We wanted a way to automate this process to make it more friendly and less bloated, and we believed we could provide a clean, user-friendly solution that gives you exactly what you need to decide what to eat.
## What It Does
What the F--- is That? takes a camera input source focused on text and on a simple tap, an image is captured. Optical Character Recognition is applied on the image and is then parsed into a readable format. What happens next is that it will analyze the text, image search of the product and provide nutritional information of the various food and beverages.
## How We Built It
It uses a camera on a device to take a picture of the text. We then apply an Optical Character Recognition API to parse it into a string, and use the Bing Image Search API to search for results in parallel with an API that retrieves nutritional info on the scanned item, which is then displayed in a sleek user interface.
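Before the parallel lookups can run, the raw OCR string has to be split into queryable item names. A minimal sketch of that parsing step, stripping trailing menu prices (the price format is an assumption for illustration):

```python
import re

def parse_menu_items(ocr_text):
    """Split OCR output into candidate item names, dropping prices and blanks."""
    items = []
    for line in ocr_text.splitlines():
        # Remove trailing prices like "$12.99" or "12.50" left in the OCR line.
        name = re.sub(r"\$?\d+(\.\d{2})?\s*$", "", line).strip()
        if name:
            items.append(name)
    return items

parse_menu_items("Poutine $8.99\nSmoked Meat Sandwich 12.50\n")
```

Each returned name can then be fed to the image search and nutrition APIs concurrently.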
## Challenges We Ran Into
Optical Character Recognition is insanely tough to implement, so finding an API that did it proficiently was a challenge. We tried multiple API's but ultimately we decided on the one that suited our needs.
## Accomplishments That We're Proud Of
We are proud not only of the motivation to create this product, but how quickly we were able to implement core functionality, within a single weekend.
## What We Learned
We realized that we have the potential to make and expand our product into the wider market, and take our innovation to the next level.
## The Future
Our goal for the future is to bring this to the wider market. We realized we made something extensible and dynamic enough that we could analyze things in pseudo-augmented reality, displaying live information beside an item in real time. At the hackathon we realized that we could extend our software to live Optical Character Recognition tracking within Augmented Reality environments such as the Microsoft HoloLens, which could be used not only for simple user functionality, but also as a new, innovative way to present ad space.
|
## Inspiration
During a brainstorming session, one of our team members mentioned how they were chased by a goose while walking on campus. *Of course, we decided to take this story to its logical conclusion.*
If we could track in real-time where geese were on campus, then perhaps people could avoid getting harassed by them.
## What it does
Flock allows users to upload images of geese on campus to our server, which detects the number of geese in the photo and displays it to the user. Afterwards, Flock adds an indicator on the built-in map system to display where nearby geese are.
## How we built it
We built Flock using Flutter for the front-end; Node.JS, and Express.JS as a server and file system; Custom Vision to develop and train our object recognition model; Tensorflow to implement the model; and Google Cloud to run the server.
## Challenges we ran into
Initially, there were issues on how to integrate the camera and map correctly within the app. Later, we had trouble uploading images to our object recognition model. After solving this problem, we realized there were problems with properly drawing boxes around detected geese (so that users could confirm the number of geese detected). We weren’t sure if the model was wrong, or if the box drawing was broken.
## Accomplishments that we're proud of
We’re proud of how accurate our geese detection is, given the time constraints. The addition of boxes surrounding geese really adds a nice touch to the final product.
We had fun designing the splash page for the app, and coming up with many unique project name ideas.
The app is designed to be very simple to use, only requiring the user to take a picture or upload an image. The burden of goose detection can be left to our server!
## What we learned
Taking a “good” picture of a goose is hard. We quickly realized that using online geese pictures was easier for the training set.
Mobile app development is not as easy as it seems. Having a physical phone to test apps is much more convenient (and quicker) than emulating on a laptop.
We shifted from our original goal, as we realized trying to apply face detection on geese is not easy, and unnecessary. So we reduced a lot of our more fantastical ideas into more realistic ones.
## What's next for Flock
There should be a heat map of geese frequency to show more digestible information to users. Also, appearances of geese should gradually disappear from the live map (perhaps ranging from a few hours to a full day). Geese frequency data should be cached such that historical data is available for viewing and analysis.
We’d love to add more customizable goose pin icons - color, border, shadow, and more. Furthermore, a “goose picture of the day” could pop up when the user opens the app for the first time that day.
|
## Inspiration
Ordering delivery and eating out is a major part of our social lives. But when healthy eating and dieting come into play, they interfere with our ability to eat out and hang out with friends. With a wave of fitness hitting our generation like a storm, we have to preserve our social relationships while allowing health-conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once and for all by letting health enthusiasts keep up with their diet plans while still making restaurant eating possible.
## What it does
The user has the option to take a picture or upload their own picture using the front end of our web application. With this input, the backend detects the foods in the photo and labels them through AI image processing using the Google Vision API. Finally, with the CalorieNinja API, these labels are matched against a remote database to generate the nutritional contents of the food, which we display to our users in an interactive manner.
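The label-to-nutrition matching step can be sketched as a simple lookup. The sample data below is hypothetical, standing in for the CalorieNinja response format:

```python
def nutrition_for_labels(labels, nutrition_db):
    """Match detected food labels against a nutrition lookup table."""
    results = {}
    for label in labels:
        info = nutrition_db.get(label.lower())
        if info is not None:
            results[label] = info
    return results

# Hypothetical sample entries standing in for the real nutrition API's data.
db = {"apple": {"calories": 95}, "pizza": {"calories": 285}}
matched = nutrition_for_labels(["Apple", "Pizza", "Fork"], db)
# non-food labels like "Fork" simply produce no match
```

Unmatched labels are dropped rather than guessed, so only foods with known nutrition data reach the interactive display.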
## How we built it
Frontend: Vue.js, tailwindCSS
Backend: Python Flask, Google Vision API, CalorieNinja API
## Challenges we ran into
As many of us are first-year students, learning while developing a product within 24 hours was a big challenge.
## Accomplishments that we're proud of
We are proud to implement AI in a capacity that assists people in their daily lives, and we hope this idea can improve people's relationships and social lives while still helping them maintain their goals.
## What we learned
As most of our team are first-year students with minimal experience, we've leveraged our strengths to collaborate together. As well, we learned to use the Google Vision API with cameras, and we are now able to do even more.
## What's next for McHacks
* Calculate sum of calories, etc.
* Use image processing to estimate serving sizes
* Implement technology into prevalent nutrition trackers, i.e Lifesum, MyPlate, etc.
* Collaborate with local restaurant businesses
|
losing
|
## Inspiration
Climate change is a pressing global issue, primarily driven by carbon emissions. While large corporations are major contributors to this crisis, individual actions cumulatively make a significant impact. Recognizing this, GreenSlip was born out of a desire to empower individuals to monitor and manage their carbon footprint through their daily financial decisions. The platform not only aims to raise awareness about carbon emissions but also serves as a versatile tool for those keen on optimizing their financial health. By bridging the gap between ecological responsibility and financial management, GreenSlip provides a dual-purpose solution that encourages a more sustainable and economically savvy lifestyle.
## What it does
GreenSlip revolutionizes how users interact with their finances and environmental impact. Upon registering, users can upload images or PDFs of their shopping receipts and utility bills. Our platform, powered by advanced OCR and AI technologies, extracts detailed information from these documents. Users can then access intuitive dashboards that display their spending patterns alongside associated carbon emissions. This functionality allows users to not only track and manage their expenses but also compare their environmental impact over time—monthly, annually, or across all-time records. By providing these insights, GreenSlip enables users to make informed decisions aimed at reducing their carbon footprint while managing their budget.
## How we built it
* Frontend: Developed with React, our frontend delivers a dynamic and interactive user interface.
* Backend: Flask serves as our backend framework, handling API requests, data processing, and user management.
* Data Parsing: We utilize the pdf2image library to convert PDF documents into images, which are then processed using a custom-engineered OpenAI prompt to extract financial data and carbon emission details.
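Once the extraction step has produced per-item spend and emission figures, the dashboard needs them rolled up by month. A minimal sketch of that aggregation, with illustrative data structures (the real values would come from the OpenAI extraction step):

```python
from collections import defaultdict

def monthly_totals(line_items):
    """Aggregate parsed receipt items into per-month spend and CO2 estimates.

    Each item is (month, amount_dollars, kg_co2); the tuple layout is an
    assumption for this sketch, not GreenSlip's actual schema.
    """
    totals = defaultdict(lambda: {"spend": 0.0, "co2_kg": 0.0})
    for month, amount, co2 in line_items:
        totals[month]["spend"] += amount
        totals[month]["co2_kg"] += co2
    return dict(totals)

monthly_totals([("2024-05", 42.0, 3.1), ("2024-05", 10.0, 0.4), ("2024-06", 5.0, 0.2)])
```

The same shape supports yearly or all-time views by grouping on a coarser key.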
## Challenges we ran into
As a solo developer on this project, time was a significant constraint. Balancing the development of a fully functional frontend and backend, along with integrating complex data parsing and AI components, was challenging. Implementing all the desired features within the limited time frame proved to be difficult, but the progress made was substantial.
## Accomplishments that we're proud of
Despite the challenges, several achievements stand out:
* Aesthetic and Functional UI: The user interface is not only visually appealing but also user-friendly.
* Data Visualization: Implementing clean and informative graphs that users can interact with to filter data based on different time frames—such as monthly, yearly, or all-time.
* Impact Awareness: Enabling users to visualize where their money goes and how it impacts the environment.
## What we learned
Throughout the development of GreenSlip, it was enlightening to learn about the average carbon emissions produced by individuals and the significant impact that reducing these emissions can have on our planet. This project also enhanced my skills in full-stack development, particularly in integrating React with Flask and deploying AI-powered data extraction tools.
## What's next for GreenSlip
* E-Receipt Integration: To further reduce paper waste and the carbon footprint associated with physical receipts, integrating with digital receipt formats is a primary goal.
* Enhanced Data Insights: We plan to incorporate more detailed analytics and personalized tips on reducing both spending and carbon emissions.
GreenSlip is more than just a financial tracker; it's a movement towards a sustainable future, empowering individuals to make a difference one transaction at a time. Join us in making every slip count towards a greener planet.
|
## Inspiration
I came across a quote that deeply moved me: "By 2050, a further 24 million children are projected to be undernourished as a result of the climate crisis." It made me reflect profoundly on the significant impact climate change has on our world.
Melting ice caps and glaciers contribute to rising sea levels and warmer temperatures, threatening the availability of land for future generations. This inspired us to create an application that educates individuals about their carbon footprint, suggests ways to reduce it, and leverages AI technology to assist in these efforts, thereby promising a sustainable future for everyone.
## What it does
The app provides users with an overview of their carbon footprint based on their daily activities. It provides an intuitive textual response from which users can gain actionable insights.
Key features include:
* **Home Page**: Displaying a summary of the user's overall carbon footprint, along with actionable advice on reducing it.
* **Interactive Chatbot**: Users can engage in voice or text conversations with an AI-driven chatbot to gain a deeper understanding of their carbon output and receive personalized suggestions.
* **Receipt Analysis**: Users can upload images of their receipts, and the app utilizes AI to extract data and calculate the associated carbon footprint.
* **Transport Tracker**: Users input their mode of transportation along with the travel duration and distance, and the app calculates the carbon emissions for these activities.
* Users can review detailed reports and insights to identify high-emission areas and explore ways to reduce their footprint without significantly altering their lifestyle.
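The transport tracker's emissions math reduces to a per-mode factor times distance. The factors below are illustrative placeholders (real per-passenger-km values vary by vehicle and region), not the app's actual table:

```python
# Illustrative emission factors in kg CO2 per passenger-km; placeholders only.
EMISSION_FACTORS = {"car": 0.192, "bus": 0.105, "train": 0.041, "bicycle": 0.0}

def trip_emissions(mode, distance_km):
    """Estimate kg CO2 for a single trip; raises KeyError for unknown modes."""
    return EMISSION_FACTORS[mode] * distance_km

car_trip = trip_emissions("car", 10)  # roughly 1.92 kg CO2
```

Summing `trip_emissions` across a user's logged trips yields the transport slice of the home-page footprint summary.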
## How we built it
We utilized a blend of modern web technologies and advanced AI services:
* **Frontend**: Developed using HTML, CSS, and JavaScript to ensure a responsive and user-friendly interface.
* **Backend**: Implemented with Flask, which facilitates communication between the frontend and various AI models hosted on Hugging Face and other platforms.
* **AI Integration**: We integrated Amazon Bedrock for inference with LLMs like Anthropic Claude 3, OpenAI's Whisper for speech-to-text capabilities, and Moondream2 for image-to-text conversions.
* **Data Storage**: Utilized AWS S3 buckets for robust and scalable storage of images, audio files, and textual data.
## Challenges we ran into
* One significant challenge was the integration of the Whisper model for transcription, primarily due to compatibility issues with different versions of OpenAI's API. This required extensive testing and modifications to ensure stable functionality.
* Building the front end was a little complex, since we had mainly focused on backends and machine learning models in past projects.
## Accomplishments that we're proud of
We are particularly proud of developing a tool that can significantly impact individuals' ecological footprints. By providing easy-to-understand data and actionable insights, we empower users to make informed decisions that contribute to global sustainability.
## What we learned
This project enhanced our skills in collaborative problem-solving and integrating multiple technologies into a seamless application. We gained deeper insights into the capabilities and practical applications of large language models and AI in environmental conservation.
## What's next for the project?
Looking ahead, we plan to:
* **Expand Device Integration**: Enable syncing with smartphones and wearable devices to track fitness and travel data automatically.
* **Financial Integration**: Connect with banking apps to analyze spending patterns and suggest more eco-friendly purchasing options.
* **Community Challenges**: Introduce features that allow users to participate in challenges with friends or colleagues, fostering a competitive spirit to achieve the lowest carbon footprint.
Our goal is to continually evolve the app to include more features that will not only help individuals reduce their carbon footprint but also engage communities in collective environmental responsibility.
|
## About the Project
### TLDR:
Caught a fish? Take a snap. Our AI-powered app identifies the catch, keeps track of stats, and puts that fish in your 3d, virtual, interactive aquarium! Simply click on any fish in your aquarium, and all its details — its rarity, location, and more — appear, bringing your fishing memories back to life. Also, depending on the fish you catch, reel in achievements, such as your first fish caught (ever!), or your first 20 incher. The cherry on top? All users’ catches are displayed on an interactive map (built with Leaflet), where you can discover new fishing spots, or plan to get your next big catch :)
### Inspiration
Our journey began with a simple observation: while fishing creates lasting memories, capturing those moments often falls short. We realized that a picture might be worth a thousand words, but a well-told fish tale is priceless. This spark ignited our mission to blend the age-old art of fishing with cutting-edge AI technology.
### What We Learned
Diving into this project was like casting into uncharted waters – exhilarating and full of surprises. We expanded our skills in:
* Integrating AI models (Google's Gemini LLM) for image recognition and creative text generation
* Crafting seamless user experiences in React
* Building robust backend systems with Node.js and Express
* Managing data with MongoDB Atlas
* Creating immersive 3D environments using Three.js
But beyond the technical skills, we learned the art of transforming a simple idea into a full-fledged application that brings joy and preserves memories.
### How We Built It
Our development process was as meticulously planned as a fishing expedition:
1. We started by mapping out the user journey, from snapping a photo to exploring their virtual aquarium.
2. The frontend was crafted in React, ensuring a responsive and intuitive interface.
3. We leveraged Three.js to create an engaging 3D aquarium, bringing caught fish to life in a virtual environment.
4. Our Node.js and Express backend became the sturdy boat, handling requests and managing data flow.
5. MongoDB Atlas served as our net, capturing and storing each precious catch securely.
6. The Gemini AI was our expert fishing guide, identifying species and spinning yarns about each catch.
### Challenges We Faced
Like any fishing trip, we encountered our fair share of challenges:
* **Integrating Gemini AI**: Ensuring accurate fish identification and generating coherent, engaging stories required fine-tuning and creative problem-solving.
* **3D Rendering**: Creating a performant and visually appealing aquarium in Three.js pushed our graphics programming skills to the limit.
* **Data Management**: Structuring our database to efficiently store and retrieve diverse catch data presented unique challenges.
* **User Experience**: Balancing feature-rich functionality with an intuitive, streamlined interface was a constant tug-of-war.
Despite these challenges, or perhaps because of them, our team grew stronger and more resourceful. Each obstacle overcome was like landing a prized catch, making the final product all the more rewarding.
As we cast our project out into the world, we're excited to see how it will evolve and grow, much like the tales of fishing adventures it's designed to capture.
|
losing
|
## Inspiration
We realized that some of our favourite movies were surprisingly panned by critics but had been very popular with general audiences. A large portion of movies that had a poor metascore but a positive user score were considered cult classics, so we decided to build our own search engine and database specifically focused on finding those films. What we built is a refreshing film database that feels more comfortable than other sites.
## What it does
Our program crawls through the metacritc website, and searches for films that were highly rated by users, and negatively rated by critics. Once a film is found, its metadata is uploaded to a database. Then, the ICMDb search engine queries the database, and returns results for the user.
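The crawler's selection rule boils down to a critic/user score split. A sketch of that filter; the cutoff values here are assumptions for illustration, not the project's actual thresholds:

```python
def is_cult_classic(metascore, user_score, critic_max=40, user_min=7.0):
    """Flag films panned by critics (metascore 0-100) but loved by users (0-10).

    The critic_max and user_min cutoffs are illustrative defaults.
    """
    return metascore <= critic_max and user_score >= user_min

films = [("Film A", 35, 8.2), ("Film B", 80, 8.5), ("Film C", 30, 4.0)]
cult = [name for name, m, u in films if is_cult_classic(m, u)]
# only Film A passes: low metascore AND high user score
```

Films passing this check are the ones whose metadata gets uploaded to the database for the search engine to query.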
## How we built it
The backend of our program was built using MySQL, phpMyAdmin, and Python. We have a Python script that uses a publicly available Metacritic API to crawl through the website and return results. Finally, we used Python and Flask to connect the backend with an HTML and JavaScript frontend.
## Challenges we ran into
* No official Metacritic API. We had to modify community-created ones to serve our needs.
* Almost no experience creating a web server and hosting a website.
* Little knowledge on frontend web development.
* Creating an efficient web scraping/DB entry algorithm
## Accomplishments that we're proud of
* Finishing on schedule, despite little experience before with web development
* The speed and intuitiveness of the ICMDb website.
* Learning new programming concepts in a very short time span.
## What we learned
* Website scraping and crawling.
* Setting up a web server using python.
* Querying a mySQL database.
* Creating an interface between the frontend and backend.
* Search engine algorithms
* Web design using HTML and CSS.
## What's next for The Internet Cult Movie Database
* Expanding the database to include even more movies.
* Implementation of a Rotten Tomatoes API.
* Front end tweaks to improve responsiveness, and layout.
* Expanding the database to include video games, music and TV shows.
|
## Inspiration
Looking for hours for a movie to watch with seemingly no end to the random trailers in sight.
## What it does
Muru curates a selection of movies based on your likes and dislikes based on an IMDb dataset that finds the cosine similarities between your inputs and other movies based on genres and keywords.
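Cosine similarity over genre/keyword features can be sketched with plain dictionaries. This is a minimal stdlib illustration of the idea, not Muru's actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two genre/keyword frequency dicts."""
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

liked = {"sci-fi": 1, "thriller": 1}       # features from the user's likes
candidate = {"sci-fi": 1, "drama": 1}      # features of a catalog movie
score = cosine_similarity(liked, candidate)  # ≈ 0.5
```

Ranking the catalog by this score against the user's liked/disliked profile yields the five recommended posters shown on the results page.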
## How we built it
Our website's front and backend were created through Reflex and our dataset was collected through IMDb.
## Challenges we ran into
We had several issues connecting the front and back ends of our project together. Switching the webpage to the recommended movies page and printing out the five posters caused us the most trouble. We solved this issue by returning the movie poster and printing it in a grid.
Figuring out how to create the recommendations was pretty difficult. None of us had any experience when it came to processing data. It required a lot of research to get started working on it.
We ended up switching to a different project late Friday.
## Accomplishments that we're proud of
We are proud that we were able to create a proper project.
We are proud that we created an algorithm that recommends movies well.
We are proud that we were able to work together to combine all our work.
## What we learned
We learned a lot about preparing ahead of time for projects.
Some of us had to learn Python to use Reflex for our frontend and backend.
We learned how to utilize data (extracting and refining it).
We learned how to find similarities in datasets.
## What's next for Muru | The Movie Guru
We want to implement a few features.
1. An evil mode
   * Our original plan included this as a fun way to find movies you wouldn't like
2. An exit/match feature
   * This would allow more similarities for the Tinder UI
3. Change how the frontend loads the movie recommendations/images
   * This would increase the speed of the website
   * Currently, it updates the whole website, but we would want it to redirect the user instead
4. Finally, we would want to have it hosted and running
|
## Inspiration
College students are busy, juggling classes, research, extracurriculars and more. On top of that, creating a todo list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user.
Several tools such as Motion and Reclaim help business executives to optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health.
## What it does
Clockwork consists of a scheduling algorithm and full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or previous task), and deadline.
The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available, and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity.
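A stripped-down version of the greedy pass might look like this (task fields and day indexing are simplified for illustration; the real algorithm also handles dependencies and preference-based rebalancing):

```python
def greedy_schedule(tasks, daily_hours):
    """tasks: (name, hours_needed, due_day) tuples; daily_hours:
    available hours per day index. Earliest due date is served first."""
    schedule = {day: [] for day in range(len(daily_hours))}
    free = list(daily_hours)
    for name, hours, due in sorted(tasks, key=lambda t: t[2]):
        day = 0
        while hours > 0 and day <= due:
            # fill the earliest day that still has availability
            chunk = min(hours, free[day])
            if chunk > 0:
                schedule[day].append((name, chunk))
                free[day] -= chunk
                hours -= chunk
            day += 1
        if hours > 0:
            raise ValueError(f"cannot fit {name!r} before its deadline")
    return schedule
```

After this initial pass, the leftover values in `free` tell us how much slack exists for redistributing work toward the user's preferences.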
The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress.
## How we built it
The scheduling algorithm uses a greedy algorithm and is implemented with Python, Object Oriented Programming, and MatPlotLib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python.
## Challenges we ran into
The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug various bugs that arose. We initially struggled setting up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple times before reaching the greedy algorithm solution presented here.
## Accomplishments that we're proud of
We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks. We also feel strongly that this product has real-world usability, and are proud of validating the idea during YHack.
## What we learned
We both learned more about Python and Object Oriented Programming while working on the scheduling algorithm. Using the react-digraph package also was a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space.
## What's next for Clockwork
Aside from polishing the several components worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and a more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy.
|
losing
|
## Inspiration
After seeing the breakout success that was Pokemon Go, my partner and I were motivated to create our own game that was heavily tied to physical locations in the real-world.
## What it does
Our game is supported on every device that has a modern web browser, absolutely no installation required. You walk around the real world, fighting your way through procedurally generated dungeons that are tied to physical locations. If you find that a dungeon is too hard, you can pair up with some friends and tackle it together.
Unlike Niantic, who monetized Pokemon Go using micro-transactions, we plan to monetize the game by allowing local businesses to to bid on enhancements to their location in the game-world. For example, a local coffee shop could offer an in-game bonus to players who purchase a coffee at their location.
By offloading the cost of the game onto businesses instead of players we hope to create a less "stressful" game, meaning players will spend more time having fun and less time worrying about when they'll need to cough up more money to keep playing.
## How We built it
The stack for our game is built entirely around the Node.js ecosystem: express, socket.io, gulp, webpack, and more. For easy horizontal scaling, we make use of Heroku to manage and run our servers. Computationally intensive one-off tasks (such as image resizing) are offloaded onto AWS Lambda to help keep server costs down.
To improve the speed at which our website and game assets load, all static files are routed through MaxCDN, a content delivery network with over 19 datacenters around the world. For security, all requests to any of our servers are routed through CloudFlare, a service which helps to keep websites safe using traffic filtering and other techniques.
Finally, our public facing website makes use of Mithril MVC, an incredibly fast and light one-page-app framework. Using Mithril allows us to keep our website incredibly responsive and performant.
|
## Inspiration
It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy, store, and collect dust inspired us to develop Lendit, a product that aims to curb the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with RaspberryPi3 (64bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses Facial Recognition powered by OpenCV and object detection with Google's Cloud Vision API.
For our App, we've used Flutter/ Dart and interfaced with Firebase. To ensure *trust* - which is core to borrowing and lending, we've experimented with Ripple's API to create an Escrow system.
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/ Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our App Developer previously worked with only Web Apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system. Therefore writing Queries for our Read Operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resolve to unconventional methods to utilize its capabilities with an internet that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like raspberry pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users, is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy.
|
## Inspiration
Arriving into new places always means starting all over again, including with friends and socializing. It was one lonely night when I had the idea to do something, but didn't ask anyone thinking they would be busy. Turns out they were thinking the same way too! We needed a way to communicate effectively and gather plans based on what we are up to doing, while reconnecting with some old friends in the process.
## What it does
You log in with Facebook and the app gets your friend connections who are also registered in the app. At any point you can set up a plan you want to do, maybe going for dinner at that new place, or hiking around the mountains near town. Maybe you will spend the night home and someone might want to hop in, or even you could schedule your gaming or streaming sessions for others to join you in your plan.
Maybe you don't know exactly what you want to do. Well, the inverse is also applied, you can hop in into the app and see the plans your friends have for a specific time. Just go into their post and tell them "I'm in"
## How we built it
In order to get the broadest access possible in a short amount of time, we implemented this as a web page using the MERN stack: MongoDB, Express, React, and Node. This helps us build and deliver fast while also retaining most of the control over our app. For this project in particular we tried an interesting approach to the file ordering system, emulating the PODS system used in some frameworks and languages like Ember. This helps us group our code by entities and also divide the workflow efficiently.
## Challenges we ran into
Because we are using the info from Facebook we frequently run into the problem and design decision of whether to cache the information or keep it flowing to maintain it updated. We want the user data to be always fresh, but this comes at a cost of multiple repeated fetches that we don't want to push into our clients. We ended up running with a mix of both, keeping the constant queries but optimizing our flow to do the least of them as possible.
## Accomplishments that we're proud of
The system in which the user friends are gathered for social communication depends heavily on the flow of the Facebook API, this was the most difficult thing to gather, especially ensuring a smooth onboarding experience in which the user would both login seamlessly with their social network, while at the same time we make all the preparations necessary for the user model to start using the app. It's kind of like a magic trick, and we learned how to juggle our cards on this one.
## What we learned
Returning to our fresh-data problem, we realized the importance of determining early on when to normalize our data or not, seeing the tradeoffs each approach brings and when to use which one. Many times we rearranged code because we saw a more efficient way to build it. Knowing this from the beginning will save a lot of time in the next hackathons.
## What's next for Jalo
Make it big!! The basic functionality is already there but we can always improve upon it. By selecting which friends are going to be the ones invited to the events, setting filters and different functionalities like a specific date for responding, etc. Improving the chat is also necessary. But after all of that, make our friends use it and continue scaling it and see what more it needs to grow!
|
winning
|
## What is Search and Protect
We created a hack that can search through public twitter timeline histories of many people and determine whether they are at a risk of self harm or depression using personality profiling and sentiment analysis.
## How would this be used?
Organizations such as the local police or mental health support groups would be able to keep a close eye on those who are not in a good state of mind or having a rough time in life. People often express their feelings on social media due to the feeling of semi-anonymity and the fact that they can hide behind a screen, so many people may be more transparent about heavy issues there.
## Technical Implementation
To connect our backend to our frontend, we took full advantage of the simplicity and utility of stdlib to create numerous functions used at various points to perform simple tasks such as scraping a Twitter timeline for texts, sending a direct message to a specific user, and interacting with the Watson sentiment/personality analysis API. In addition, we have a website set up where an administrator can view the information gathered.
## The future for Search and Protect
The next step would be setting up an automated bot farm that runs this project amongst relevant users. For example, a University mental support group would run it amongst the followers of their official Twitter account. It could also implement intelligent chat AI so that people can continue to talk and ask it for help even when there is nobody available in person.
|
## 💡Inspiration
* 2020 US Census survey showed that adults were 3x more likely to screen positive for depression or anxiety in 2020 vs 2019
* A 2019 review of 18 papers summarized that wearable data could help identify depression, and coupled with behavioral therapy can help improve mental health
* 1 in 5 Americans owns a wearable now, and this adoption is projected to grow 18% every year
* Pattrn aims to turn activity and mood data into actionable insights for better mental health.
## 🤔 What it does
* Digests activity-monitor data and produces a bullet-point, actionable summary of health status
* Allows users to set goals on health metrics, and provides daily, weekly, and monthly reviews against those goals
* Based on user mood ratings and memo entries, deduces activities that correlate with good and bad days
[](https://postimg.cc/bd9JvX3V)
[](https://postimg.cc/bDQQJ6B0)
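The activity/mood deduction boils down to correlating a daily metric (e.g. steps) with the user's mood ratings over the same days; a minimal Pearson-correlation sketch, with field names assumed for illustration:

```python
def correlate(activity, mood):
    """Pearson correlation between a daily activity metric and the
    user's mood rating over the same days (both plain number lists)."""
    n = len(activity)
    mean_a = sum(activity) / n
    mean_m = sum(mood) / n
    cov = sum((a - mean_a) * (m - mean_m) for a, m in zip(activity, mood))
    sd_a = sum((a - mean_a) ** 2 for a in activity) ** 0.5
    sd_m = sum((m - mean_m) ** 2 for m in mood) ** 0.5
    # guard against zero variance (e.g. a constant mood log)
    return cov / (sd_a * sd_m) if sd_a and sd_m else 0.0
```

Activities whose correlation is strongly positive or negative are then surfaced as candidate drivers of good and bad days.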
## 🦾 How we built it
* Frontend: ReactJS
* Backend: Flask, Google Cloud App Engine, Intersystems FHIR, Cockroach Labs DB, Cohere
## 👨🏻🤝👨🏽 Challenges / Accomplishments
* Ideating and validating took up a big chunk of this 24 hour hack
* Continuous integration and deployment, and Github collaboration for 4 developers in this short hack
* Each team member pushing ourselves to try something we have never tried before
## 🛠 Hack for Health
* Pattrn currently is able to summarize actionable steps for users to take towards a healthy lifestyle
* Apart from health goal setting and reviewing, pattrn also analyses what activities have historically correlated with "good" and "bad" days
## 🛠 Intersystems Tech Prize
* We paginated a GET and POST request
* Generated synthetic data and pushed it in 2 different time resolution (Date, Minutes)
* Endpoints used: Patient, Observation, Goals, Allergy Intolerance
* Optimized API calls in pushing payloads through bundle request
## 🛠 Cockroach Labs Tech Prize
* Spawned a serverless Cockroach Lab instance
* Saved user credentials
* Stored key mapping for FHIR user base
* Stored sentiment data from user daily text input
## 🛠 Most Creative Use of GitHub
* Implemented CICD, protected master branch, pull request checks
## 🛠 Cohere Prize
* Used sentiment analysis toolkit to parse user text input, model human languages and classify sentiments with timestamp related to user text input
* Framework designed to implement a continuous learning pipeline for the future
## 🛠 Google Cloud Prize
* App Engine to host the React app and Flask observer and linked to Compute Engine
* Hosted Cockroach Lab virtual machine
## What's next for Pattrn
* Continue working on improving sentiment analysis on user’s health journal entry
* Better understand pattern between user health metrics and daily activities and events
* Provide personalized recommendations on steps to improve mental health
* Provide real-time feedback, e.g. haptics, when stressful episodes are predicted
Temporary login credentials:
Username: [[email protected]](mailto:[email protected])
Password: norcal
|
## 💡 Inspiration 💡
Mental health is a growing concern in today's population, especially in 2023 as we're all adjusting back to civilization again as COVID-19 measures are largely lifted. With Cohere as one of our UofT Hacks X sponsors this weekend, we want to explore the growing application of natural language processing and artificial intelligence to help make mental health services more accessible. One of the main barriers for potential patients seeking mental health services is the negative stigma around therapy -- in particular, admitting our weaknesses, overcoming learned helplessness, and fearing judgement from others. Patients may also find it inconvenient to seek out therapy -- either because appointment waitlists can last several months long, therapy clinics can be quite far, or appointment times may not fit the patient's schedule. By providing an online AI consultant, we can allow users to briefly experience the process of therapy to overcome their aversion in the comfort of their own homes and under complete privacy. We are hoping that after becoming comfortable with the experience, users in need will be encouraged to actively seek mental health services!
## ❓ What it does ❓
This app is a therapy AI that generates reactive responses to the user and remembers previous information not just from the current conversation, but also past conversations with the user. Our AI allows for real-time conversation by using speech-to-text processing technology and then uses text-to-speech technology for a fluent human-like response. At the end of each conversation, the AI therapist generates an appropriate image summarizing the sentiment of the conversation to give users a method to better remember their discussion.
## 🏗️ How we built it 🏗️
We used Flask to make the API endpoints in the back-end to connect with the front-end and also save information for the current user's session, such as username and past conversations, which were stored in a SQL database. We first convert the user's speech to text and then send it to the back-end to process it using Cohere's API, which has been trained on our custom data and the user's past conversations, and the response is then sent back. We then use our text-to-speech algorithm for the AI to 'speak' to the user. Once the conversation is done, we use Cohere's API to summarize it into a suitable prompt for the DALL-E text-to-image API to generate an image summarizing the user's conversation for them to look back at when they want to.
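To illustrate how past conversations feed into the model, here is a sketch of the prompt assembly step; the function and transcript format are our own illustration, not Cohere's actual API:

```python
def build_prompt(history, new_message, max_turns=6):
    """Assemble context for the language model from the most recent
    turns of past sessions plus the user's new message. history is a
    list of (speaker, text) pairs pulled from the SQL database."""
    lines = [f"{speaker}: {text}" for speaker, text in history[-max_turns:]]
    lines.append(f"User: {new_message}")
    lines.append("Therapist:")
    return "\n".join(lines)
```

Capping the window at `max_turns` keeps the prompt small while still letting the AI "remember" earlier sessions.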
## 🚧 Challenges we ran into 🚧
We faced an issue with implementing a connection from the front-end to back-end since we were facing a CORS error while transmitting the data so we had to properly validate it. Additionally, incorporating the speech-to-text technology was challenging since we had little prior experience so we had to spend development time to learn how to implement it and also format the responses properly. Lastly, it was a challenge to train the cohere response AI properly since we wanted to verify our training data was free of bias or negativity, and that we were using the results of the Cohere AI model responsibly so that our users will feel safe using our AI therapist application.
## ✅ Accomplishments that we're proud of ✅
We were able to create an AI therapist by building a self-teaching model with the Cohere API that integrates seamlessly into our application. It delivers more personalized responses by adapting to each user's conversation history and making conversations accessible only to that user. We were able to effectively delegate team roles and seamlessly integrate the Cohere model into our application. It was lots of fun combining our existing web development experience with venturing into a new domain like machine learning to approach a mental health issue using the latest advances in AI technology.
## 🙋♂️ What we learned 🙋♂️
We learned how to be more resourceful when we encountered debugging issues, while balancing the need to make progress on our hackathon project. By exploring every possible solution and documenting our findings clearly and exhaustively, we either increased the chances of solving the issue ourselves, or obtained more targeted help from one of the UofT Hacks X mentors via Discord. Our goal is to learn how to become more independent problem solvers. Initially, our team had trouble deciding on an appropriately scoped, sufficiently original project idea. We learned that our project should be both challenging enough but also buildable within 36 hours, but we did not force ourselves to make our project fit into a particular prize category -- and instead letting our project idea guide which prize category to aim for. Delegating our tasks based on teammates' strengths and choosing teammates with complementary skills was essential for working efficiently.
## 💭 What's next? 💭
To improve our project, we could allow users to customize their AI therapist, such as its accent and pitch or the chat website's color theme to make the AI therapist feel more like a personalized consultant to users. Adding a login page, registration page, password reset page, and enabling user authentication would also enhance the chatbot's security. Next, we could improve our website's user interface and user experience by switching to Material UI to make our website look more modern and professional.
|
winning
|
## Inspiration
Our initial goal is to build something that can spread positivities among people, and something fun for people to use as well.
## What it does
It reads sound from a file and analyzes the emotions implicated by the sound of the users. The panda's facial expression is going to change according to the category of the most significant emotion, and the users can see the detailed analysis of their emotions. When users express positivity when they are speaking, the score is going to increase.
## How we built it
We use Android Studio along with IBM Watson Tone Analyzer and Google Speech APIs to implement our design.
## Challenges we ran into
We had trouble enabling users to input their voices through microphones, because the APIs work mostly with high-quality audio such as files in the FLAC format.
## Accomplishments that we're proud of
Our idea and UI design are very user-friendly and intuitive. We have designed something that bears positive social meanings and fun for people to use, which are our original goals.
## What we learned
We have learned to make more efficient use of APIs through our development, and we have also enjoyed iterating on our design constantly to make it better.
## What's next for Voice Panda
We are trying to fix the microphone problem, and want to add more deterministic features to better analyze input voices' emotions, such as the decibels and frequencies of the sound. Our main goal for the near future is to build a social interaction feature that enables users to interact with each other online and spread positivity through this App (specifically, if one user says something positive to another user, their scores will rise together by a larger amount than if they interacted with their Apps alone). The panda will also grow as its growth points increase.
|
## Inspiration
Our inspiration for this project was our personal experience of struggling to prepare for public speaking. Whether through previous hackathons, class projects, or poster presentations, we all have had trouble practicing our speeches. The anxiety of public speaking makes it difficult to want to practice with other people and has been a common obstacle preventing us from improving our speech skills. We learned that many people are too shy to seek to rehearse their speeches with others. This motivated us to create an app that provides non-judgmental feedback based on facial expressions and speech patterns, empowering individuals to practice and perfect their public speaking skills in a human-free, non-judgemental environment.
## What it does
Our project starts with a web app where the user can record their presentation in front of their camera. After their presentation concludes, a Q&A takes place with **hume's EVI** (Empathic Voice Interface) within the context of their speech and a custom prompt. Additionally, we used **Deepgram's voice platform** to transcribe the user's speech. **Groq** then parses this speech to rate the user's speaking ability and provide feedback on what they can do to improve.
## How we built it
The frontend of our web application was created using react.js as well as shadcn, a component library. The backend of our application uses hume ai as the main technology. We use the face model in their expression measurement tool to gather values for different emotions/expressions the user exhibits in their speech. These values are then evaluated by Groq which provides a rating for the user's presentation in a friendly, yet helpful manner. The other component of our project is the Q&A portion where the user is asked questions based on the topic they discussed.
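The rating step can be sketched as a weighted combination of the averaged expression values; the emotion names and weights below are illustrative assumptions, not the actual output format of Hume's model or Groq's evaluation:

```python
def rate_presentation(emotion_scores, weights=None):
    """emotion_scores: per-emotion averages in [0, 1] from an
    expression-measurement model. Emotion names and weights here
    are illustrative assumptions."""
    weights = weights or {"calmness": 1.0, "joy": 0.8,
                          "anxiety": -1.0, "confusion": -0.6}
    raw = sum(weights.get(emotion, 0) * value
              for emotion, value in emotion_scores.items())
    # squash the weighted sum onto a 0-10 rating scale
    return round(max(0.0, min(10.0, 5 + 5 * raw)), 1)
```

In the real pipeline this numeric signal would be one input alongside the transcript when the language model writes its friendly feedback.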
## Challenges we ran into
The biggest challenge we ran into was trying to seamlessly integrate three major technologies into the project; those being hume ai, Deepgram, and Google Gemini. Learning all of these technologies on the spot was challenge enough, but finding a way to connect them all was another level of difficult. It was also a challenge testing the code as the wait times for getting output was rather slow and quickly added up, which meant we had to sparingly re-run our code.
## Accomplishments that we're proud of
We're proud of successfully integrating Hume AI and Deepgram, offering actionable feedback on both emotional expressions and speech content. Bringing together facial expression analysis and a post-speech Q&A with an empathetic voice interface (EVI) is something we believe will make practicing speeches much more interactive and insightful for users.
We're also proud of our ability to transcribe speech in real time and use Google Gemini's NLP to analyze and score it for clarity, structure, and impact. These components, working together, form the basis of a comprehensive feedback system that helps users improve their presentation skills with each use.
## What we learned
One major thing we learned was using GitHub to collaborate with backend and frontend developers simultaneously. With one teammate new to hackathons, it was crucial to learn version control for easy management of our codebase. Despite not ultimately using it, we learned about natural language processing (NLP) with Google Gemini. One more thing we learned was how to integrate an AI model into our project with hume ai's face model. This experience emphasized the critical role that emotional awareness plays in effective communication and taught us to appreciate the nuances that come with empathic interactions.
## What's next for Oratix
Moving forward, we plan to enhance our application by refining the feedback mechanism. We want to introduce real-time analysis of their face expressions as well as prosody, so they can adjust accordingly during the presentation rather than after. Another feature we want to add is combining EVI with a live avatar. This will simulate interacting with a human who might nod along or acknowledge statements from the presenter throughout their talk.
|
## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend and used OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
## Challenges we ran into
We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.

Tracks movement of hands and face to provide real-time analysis on expressions and body-language.

|
losing
|
## Inspiration
As many students are forced to learn remotely, teachers are also trying their best to adapt. One thing they are very limited on, however, is the administration of exams. Each day, students devise new ways to cheat, and my intent is to help prevent that. One method teachers use for exams is having everyone turn on their camera in a Zoom call. However, in my 500+ student classes, it's very hard to track everyone, especially since everyone is on mute. If there were some way to detect cheating even through a muted video, the amount of cheating would significantly decrease.
## What it does
* Uses machine learning and artificial intelligence to analyze patterns in lip movement to detect speaking.
## How I built it
* Utilizes Python along with the face\_recognition, OpenCV, os, and math packages
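Speaking detection from lip landmarks can be reduced to tracking the mouth aspect ratio (vertical lip opening over mouth width) across frames. A rough sketch of the idea; the threshold and window size are illustrative, not tuned values:

```python
from math import dist

def mouth_aspect_ratio(top, bottom, left, right):
    """Ratio of vertical lip opening to mouth width.

    Each argument is an (x, y) landmark coordinate, e.g. taken from the
    face_recognition library's "top_lip"/"bottom_lip" landmark lists.
    """
    return dist(top, bottom) / dist(left, right)

def is_speaking(mar_history, threshold=0.05, window=3):
    """Flag speaking when the mouth opening fluctuates over recent frames.

    A static open mouth (e.g. yawning) has low variance; speech makes the
    ratio oscillate frame to frame.
    """
    if len(mar_history) < window:
        return False
    recent = mar_history[-window:]
    return max(recent) - min(recent) > threshold
```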
## Challenges I ran into
* Overestimating the amount of time allotted
* Figuring out which packages to use
* Thinking of a project
## Accomplishments that I'm proud of
* Completing an almost complete product
* Learning more about artificial intelligence
* Completing my first hackathon
## What I learned
Although the amount of time allowed seems like a lot, it goes by really fast. I didn't spend my time wisely and ended up having to rush in the end.
## What's next for SpeakerFind
I want to brush up on the speaker detection algorithm and produce more precise results. I also want to try and integrate the program into a website or app. Also, I think it would be really cool to try out this product in an actual video call platform.
|
## Inspiration
I've tutored students for many years, and very early on it became evident that these students were incredibly bright; they just never asked for clarification in class. Often in tutoring sessions, I am stuck wondering whether a student is confused, and oftentimes it hinders the progression of the class. I can only wonder how it must be when it is not just a one-on-one interaction.
## What it does
1. Automatically takes attendance through facial recognition software using only one image. For deployment purposes, the image could be just a yearbook photo which nearly everyone takes.
2. Scans the emotions and facial expressions of the students in the classroom and tells the teacher when the majority of the students are confused.
## How I built it
For the frontend, an MVVM structure was created with UIKit in Swift. The backend was built with Python and Flask. The facial recognition model came from OpenCV, and the emotion recognition model was VGG19 trained on the FER-2013 dataset.
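The per-student emotion predictions ultimately get reduced to one classroom-level signal for the teacher. A toy sketch of that aggregation step; the label names and majority threshold are illustrative, not the app's actual values:

```python
from collections import Counter

def classroom_is_confused(emotions, threshold=0.5):
    """Return True when at least `threshold` of detected faces look confused.

    `emotions` is a list of per-student labels as produced by an emotion
    classifier such as a VGG19 model trained on FER-2013.
    """
    if not emotions:
        return False
    counts = Counter(emotions)
    return counts["confused"] / len(emotions) >= threshold
```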
## Challenges we ran into
The first main challenge was using flask for the backend because I had never done it before but it quickly became evident that it wasn't very difficult and was quite similar to HTTP requests in JavaScript. The next main challenge was getting both AI algorithms to run in tandem with each other and produce a combined output that could eventually be extracted with the HTTP requests. The final hurdle was creating all the niche features like tracking all the total time the students are in the classroom and warning the teacher about each student that is skipping class.
## Accomplishments that we're proud of
Building a fully functioning, ready to deploy app that can be used in public schools today in under 24 hours.
## What's next for teach.ai
I plan on trying to take this company forward. This is not just some idea I had for only a hackathon; rather, my vision is much greater. This tool would be extremely useful across the country and could meaningfully change how our future generations grow and develop their minds. I plan on building a few fully working prototypes and seeing if I can deploy them in the schools near my house. From there I can build a reputation and scale the company further.
|
## Inspiration
When introduced to financial strategies, many people are skeptical simply because they can't imagine a significant reward for smarter spending.
## What it does
* Gives you financial advice based on your financial standing (how many credit cards you have, what the limits are, whether you're married or single etc.)
* Shows you a rundown of your spendings separated by category (Gas, cigarettes, lottery, food etc.)
* Identifies transactions as reasonable or unnecessary
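The reasonable/unnecessary split can be sketched as a simple rule layer before any AI is involved. This is a toy illustration, not the app's actual logic; the category names and budget shape are made up:

```python
def flag_transaction(category, amount, monthly_budget):
    """Toy rule: a transaction is unnecessary when its category is
    discretionary, or when it exceeds that category's monthly budget.

    `monthly_budget` maps category -> dollar limit; unlisted categories
    are treated as unlimited.
    """
    discretionary = {"lottery", "cigarettes", "entertainment"}
    if category in discretionary:
        return "unnecessary"
    if amount > monthly_budget.get(category, float("inf")):
        return "unnecessary"
    return "reasonable"
```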
## How I built it
I used React for the most part, in combination with Material UI. The charting library is Carbon Charts, which is also developed by me: <https://github.com/carbon-design-system/carbon-charts>
## Challenges I ran into
* AI
* Identification of reasonable or unnecessary transactions
* Automated advising
## Accomplishments that I'm proud of
* Vibrant UI
## What I learned
* Learned a lot about React router transitions
* Aggregating data
## What's next for SpendWise
To find a home right inside your banking application.
|
losing
|
## Inspiration
2020 had us indoors more than we'd like to admit and we turned to YouTube cooking videos for solace. From Adam Ragusea to Binging with Babish, these personalities inspired some of us to start learning to cook. The problem with following along with these videos is that you have to keep pausing the video while you cook. Or even worse, you have to watch the entire video and write down the steps if they're not provided in the video description. We wanted an easier way to summarize cooking videos into clear steps.
## What it does
Get In My Belly summarizes Youtube cooking videos into text recipes. You simply give it a YouTube link and the web app generates a list of ingredients and a series of steps for making the dish (with pictures), just like a recipe in a cook book. No more wondering where they made the lamb sauce. :eyes:
## How we built it
We used React for front-end and Flask for back-end. We used [Youtube-Transcript-API](https://pypi.org/project/youtube-transcript-api/) to convert Youtube videos to transcripts. The transcripts are filtered and parsed into the resulting recipe using Python with the help of the [Natural Language Toolkit](https://www.nltk.org/) library and various text-based, cooking-related datasets that we made by scraping our favourite cooking videos. Further data cleaning and processing was done to ensure the output included quantities and measurements alongside the ingredients. Finally, [OpenCV](https://opencv.org/) was used to extract screenshots based on time-stamps.
## Challenges we ran into
Determining the intent of a sentence is pretty difficult, especially when someone like Binging with Babish says things that range from very simple (`add one cup of water`) to semantically-complex (`to our sauce we add flour; throw in two heaping tablespoons, then add salt to taste`). We converted each line of the transcription into a Trie structure to separate out the ingredients, cooking verbs, and measurements.
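A word-level trie for spotting multi-word ingredients in a transcript line can be sketched in a few lines; the phrase list here is illustrative:

```python
def build_trie(phrases):
    """Build a nested-dict trie over word sequences (e.g. "olive oil")."""
    root = {}
    for phrase in phrases:
        node = root
        for word in phrase.split():
            node = node.setdefault(word, {})
        node["$"] = phrase  # "$" marks the end of a known phrase
    return root

def find_terms(sentence, trie):
    """Greedily match the longest known phrases in a sentence, in order."""
    words = sentence.lower().split()
    found, i = [], 0
    while i < len(words):
        node, j, last = trie, i, None
        while j < len(words) and words[j] in node:
            node = node[words[j]]
            j += 1
            if "$" in node:
                last = (j, node["$"])  # remember the longest match so far
        if last:
            found.append(last[1])
            i = last[0]
        else:
            i += 1
    return found
```

The same structure works for cooking verbs and measurements: one trie per token class, then each transcript line is scanned against all three.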
## Accomplishments that we're proud of
We really like the simplicity of our web app and how clean it looks. We wanted users to be able to use our system without any instruction and we're proud of achieving this.
## What we learned
This was the first hackathon for two of our three members. We had to quickly learn how to budget our time since it's a 24-hour event. Perhaps most importantly, we gained experience in deciding when a feature was too ambitious to achieve within time constraints. For other members, it was their first exposure to web-dev and learning about Flask and React was mind boggling.
## What's next for Get In My Belly
Future changes to GIMB include a more robust parsing system and refactoring the UI to make it cleaner. We would also like to support other languages and integrate the project with other APIs to get more information about what you're cooking.
|
## Inspiration
It can be tough coming up with a unique recipe each and every week. Sometimes there are good deals on specific items (especially for university students), and it can be hard to know what to cook with those ingredients. *Rad Kitchen says goodbye to last-minute trips to the grocery store and hello to delicious, home-cooked meals with the Ingredient Based Recipe Generator chrome extension.*
## What it does
Rad Kitchen is a Google Chrome extension, the ultimate tool for creating delicious recipes with the ingredients you already have on hand. The extension is easy to install and is connected to Radish's ingredient website. By browsing and saving ingredients of interest from the Radish website, users can store them in their personal ingredient library. The extension will then generate a recipe based on the saved ingredients and provide a list of dishes that can be made with what's already on hand. Users can also search for recipes based on specific dietary restrictions or cuisine type. It gives a final image that shows what the dish may look like.
## How we built it
* Google Chrome extension using the React framework. The extension required a unique manifest.json file specific to Google Chrome extensions.
* Cohere NLP to take user input of different ingredients and generate a recipe.
* OpenAI's API to generate an image from text parameters. This creates a unique image to the prompt.
* Material UI and React to create an interactive website.
* Integrated Twilio to send the generated recipe and image via text message to the user. The user will input their number and Twilio API will be fetched. The goal is to create a more permanent place after the recipe is generated for people to refer to
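The Twilio step boils down to formatting the generated recipe into an SMS body and making one API call. A sketch under assumptions: the helper names are ours, and the credentials and phone numbers below are placeholders, not real values.

```python
def format_recipe_sms(title, ingredients, image_url=None):
    """Compose the text-message body for a generated recipe."""
    lines = [f"Your Rad Kitchen recipe: {title}", "Ingredients:"]
    lines += [f"- {item}" for item in ingredients]
    if image_url:
        lines.append(f"Dish preview: {image_url}")
    return "\n".join(lines)

def send_recipe(to_number, body):
    """Send the recipe text via Twilio's Messages API."""
    # Imported lazily so the formatting helper works without Twilio installed.
    from twilio.rest import Client
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
    return client.messages.create(to=to_number, from_="+15550001111", body=body)
```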
## Challenges we ran into
* Parsing data - some issues with the parameters and confusion with objects, strings, and arrays
* Dealing with different APIs was a unique challenge (Dalle2 API was more limited)
* One of our group members could not make it to the event, so we were a smaller team
* Learning curve for creating a Chrome Extension
* Twilio API documentation
* Cohere API - Determining the best way to standardize message output while getting unique responses
## Accomplishments that we're proud of
* This was our first time building a Google Chrome extension. The file structure and the specificity of the manifest.json file made it difficult, and Manifest v3 is quite different from Manifest v2.
* For this hackathon, it was really great to tie our project well with the different events we applied for
## What we learned
* How to create a Google Chrome extension. It cannot be overstated how new of an experience it was, and it is fascinating how useful a chrome extension can be as a technology
* How to do API calls and the importance of clear function calls
## What's next for Rad Kitchen
* Pitching and sharing the technology with Radish's team at this hackathon
|
## Inspiration
In the work from home era, many are missing the social aspect of in-person work. And what time of the workday most provided that social interaction? The lunch break. culina aims to bring back the social aspect to work from home lunches. Furthermore, it helps users reduce their food waste by encouraging the use of food that could otherwise be discarded and diversifying their palette by exposing them to international cuisine (that uses food they already have on hand)!
## What it does
First, users input the groceries they have on hand. When another user is found with a similar pantry, the two are matched up and displayed a list of healthy, quick recipes that make use of their mutual ingredients. Then, they can use our built-in chat feature to choose a recipe and coordinate the means by which they want to remotely enjoy their meal together.
## How we built it
The frontend was built using React.js, with all CSS styling, icons, and animation made entirely by us. The backend is a Flask server. Both a RESTful API (for user creation) and WebSockets (for matching and chatting) are used to communicate between the client and server. Users are stored in MongoDB. The full app is hosted on a Google App Engine flex instance and our database is hosted on MongoDB Atlas also through Google Cloud. We created our own recipe dataset by filtering and cleaning an existing one using Pandas, as well as scraping the image URLs that correspond to each recipe.
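Matching users with "a similar pantry" can be sketched as Jaccard overlap between ingredient sets; the score threshold and field names here are illustrative, not our production logic:

```python
def pantry_similarity(a, b):
    """Jaccard overlap between two users' ingredient sets (0.0 to 1.0)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def best_match(user, waiting_pool, min_score=0.3):
    """Pick the waiting user with the most similar pantry, if any clear the bar."""
    scored = [(pantry_similarity(user["pantry"], w["pantry"]), w)
              for w in waiting_pool]
    scored = [s for s in scored if s[0] >= min_score]
    return max(scored, key=lambda s: s[0])[1] if scored else None
```

In the real app this check runs server-side when a WebSocket client joins the waiting pool, and a match event is pushed to both users.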
## Challenges we ran into
We found it challenging to implement the matching system, especially coordinating client state using WebSockets. It was also difficult to scrape a set of images for the dataset. Some of our team members also overcame technical roadblocks on their machines so they had to think outside the box for solutions.
## Accomplishments that we're proud of
We are proud to have a working demo of such a complex application with many moving parts – and one that has impacts across many areas. We are also particularly proud of the design and branding of our project (the landing page is gorgeous 😍 props to David!) Furthermore, we are proud of the novel dataset that we created for our application.
## What we learned
Each member of the team was exposed to new things throughout the development of culina. Yu Lu was very unfamiliar with anything web-dev related, so this hack allowed her to learn some basics of frontend, as well as explore image crawling techniques. For Camilla and David, React was a new skill for them to learn and this hackathon improved their styling techniques using CSS. David also learned more about how to make beautiful animations. Josh had never implemented a chat feature before, and gained experience teaching web development and managing full-stack application development with multiple collaborators.
## What's next for culina
Future plans for the website include adding a video chat component so users don't need to leave our platform. To revolutionize the dating world, we would also like to allow users to decide if they are interested in using culina as a virtual dating app to find love while cooking. We would also be interested in implementing organization-level management to make it easier for companies to provide this as a service to their employees only. Lastly, the ability to decline a match would be a nice quality-of-life addition.
|
partial
|
## Inspiration
We express emotions in our everyday lives when we communicate with our loved ones, our neighbors, our friends, our local Loblaw store customer service, our doctors or therapists. These emotions can be examined by cues such as gesture, text and facial expressions. The goal of Emotional.AI is to provide a tool for businesses (customer service, etc), or doctors/therapists to identify emotions and enhance their services.
## What it does
Uses natural language processing (from audio transcription via Assembly AI) and computer vision to determine emotion of people.
## How we built it
#### **Natural Language processing**
* First we took emotion classified data from public sources (Kaggle and research studies).
* We preprocessed, cleaned, transformed, created features, and performed light EDA on the dataset.
* Used TF-IDF tokenizer to deal with numbers, punctuation marks, non letter symbols, etc.
* Scaled the data using Robust Scaler and made 7 models (MNB, Linear Regression, KNN, SVM, Decision Tree, Random Forest, XGB).
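The cleaning step before TF-IDF vectorization can be sketched as a small normalizer; the exact rules here (lowercase, strip non-letters, collapse whitespace) are illustrative of the kind of preprocessing described above, not our precise pipeline:

```python
import re

def clean_utterance(text):
    """Normalize a transcript line before TF-IDF vectorization.

    Lowercases, replaces numbers, punctuation marks and non-letter
    symbols with spaces, then collapses runs of whitespace.
    """
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()
```

The cleaned strings would then feed a TF-IDF vectorizer (e.g. scikit-learn's `TfidfVectorizer`) before being scaled and passed to the seven classifiers.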
#### **Computer Vision**
We used Mediapipe to generate landmark points on the face, then used those points to build a training data set. We used Jupyter Notebook to run OpenCV and Mediapipe. Upon running our data through Mediapipe, we were able to get a skeleton map of the face with 468 points. These points can be mapped in 3 dimensions, as each contains X, Y, and Z coordinates. We processed these features (468 points x 3) by saving them into a spreadsheet. Then we divided the spreadsheet into training and testing data. Using the training set, we created six machine learning models and chose the best one.
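Turning a face mesh into one spreadsheet row is just flattening the 468 (x, y, z) points into 1,404 columns. A minimal sketch; the dict-style landmarks are an assumption for illustration (Mediapipe actually returns landmark objects with `.x`/`.y`/`.z` attributes):

```python
def landmarks_to_row(landmarks):
    """Flatten 468 (x, y, z) face-mesh points into a 1404-value feature row,
    the shape appended to the training spreadsheet for each frame."""
    row = []
    for point in landmarks:
        row.extend([point["x"], point["y"], point["z"]])
    return row
```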
#### **Assembly AI**
We converted video/audio from recordings (whether it's a therapy session or customer service audio from 1000s of Loblaws customers 😉) to text using the Assembly AI API.
#### **Amazon Web Services**
We used the S3 service to host the video files uploaded by the user. These video files were then sent to the Assembly AI API.
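The upload flow reduces to choosing an object key and handing the file to S3. A sketch with placeholder names; the key scheme and URL format are illustrative, not our exact configuration:

```python
def video_object_key(user_id, filename):
    """Deterministic S3 key for an uploaded session recording; this key
    identifies the object we later point the transcription request at."""
    safe = filename.lower().replace(" ", "-")
    return f"uploads/{user_id}/{safe}"

def upload_recording(path, bucket, key):
    """Upload the local file to S3 and return its public-style URL."""
    # boto3 imported lazily so the key helper stays dependency-free.
    import boto3
    boto3.client("s3").upload_file(path, bucket, key)
    return f"https://{bucket}.s3.amazonaws.com/{key}"
```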
#### **DCP**
For Computing (ML)
## Challenges we ran into
* Collaborating virtually is challenging
* Deep learning training takes a lot of computing power and time
* Connecting our front-end with back-end (and ML)
* Time management
* Working with react + flask server
* Configuring amazon buckets and users to make the app work with the s3 services
## Accomplishments that we're proud of
Apart from completing this hack, we persevered through each challenge as a team and succeeded in what we put ourselves up to.
## What we learned
* Working as a team
* Configuration management
* Working with Flask
## What's next for Emotional.AI
* We hope to have a more refined application with cleaner UI.
* We want to train our models further with more data and have more classifications.
* We want to make a platform for therapists to connect with their clients and use our tech.
* Make our solution work in real-time.
|
## AI, AI, AI...
The number of projects using LLMs has skyrocketed with the wave of artificial intelligence. But what if you *were* the AI, tasked with fulfilling countless orders and managing requests in real time? Welcome to chatgpME, a fast-paced, chaotic game where you step into the role of an AI who has to juggle multiple requests, analyzing input, and delivering perfect responses under pressure!
## Inspired by games like Overcooked...
chatgpME challenges you to process human queries as quickly and accurately as possible. Each round brings a flood of requests—ranging from simple math questions to complex emotional support queries—and it's your job to fulfill them quickly with high-quality responses!
## How to Play
Take Orders: Players receive a constant stream of requests, represented by different "orders" from human users. The orders vary in complexity—from basic facts and math solutions to creative writing and emotional advice.
Process Responses: Quickly scan each order, analyze the request, and deliver a response before the timer runs out.
Get analyzed - our built-in AI checks how similar your answer is to what a real AI would say :)
## Key Features
Fast-Paced Gameplay: Just like Overcooked, players need to juggle multiple tasks at once. Keep those responses flowing and maintain accuracy, or you’ll quickly find yourself overwhelmed.
Orders with a Twist: The more aware the AI becomes, the more unpredictable it gets. Some responses might start including strange, existential musings—or it might start asking you questions in the middle of a task!
## How We Built It
Concept & Design: We started by imagining a game where the player experiences life as ChatGPT, but with all the real-time pressure of a time management game like Overcooked. Designs were created in Procreate and our handy notebooks.
Tech Stack: Using Unity, we integrated a system where mock requests are sent to the player, each with specific requirements and difficulty levels. A template was generated using defang, and we also used it to sanitize user inputs. Answers are then evaluated using the fantastic Cohere API!
Playtesting: Through multiple playtests, we refined the speed and unpredictability of the game to keep players engaged and on their toes.
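Scoring "how similar your answer is to what a real AI would say" can be sketched as cosine similarity between two answer embeddings (e.g. vectors from an embedding endpoint such as Cohere's). The grade cutoffs below are invented for illustration, not the game's tuned values:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between the player's answer embedding and the
    reference answer's embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def grade(similarity):
    """Map similarity to an in-game order rating (illustrative cutoffs)."""
    if similarity >= 0.9:
        return "perfect"
    if similarity >= 0.7:
        return "good"
    return "rejected"
```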
|
## Inspiration
TV remotes are ubiquitous. Even more than smartphones, they live in all of our homes, but have largely been abandoned due to the advent of Wifi and Bluetooth.
We challenged ourselves to build a hardware hack that anyone would be able use at home using just off-the-shelf parts and without tinkering experience. Bringing TV remotes into the 21st century fit that goal perfectly.
## What it does
Control anything on your laptop by pointing a TV remote at it!
Our app translates gestures into common actions. These actions are context-aware and specific to whatever app you're using. We've added support for many common apps, but extending this to your favorite app takes just seconds using our configuration system.
## How we built it
We took advantage of the fact that laptop webcams can detect infrared (IR) light in addition to the visible light spectrum. We use OpenCV to identify the IR light coming from TV remotes and apply gesture recognition to this data to turn movements into human-understandable gestures.
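Finding the remote's IR spot in a webcam frame reduces to locating the centroid of the saturated pixels. A dependency-free sketch over a grayscale frame given as a list of rows (in practice this comes from an OpenCV capture; the threshold is illustrative):

```python
def brightest_point(frame, threshold=200):
    """Centroid of pixels at or above `threshold` in a grayscale frame.

    With a webcam that passes infrared, the remote's IR LED shows up as
    a small saturated blob; the centroid tracked across frames is the
    input to gesture recognition.
    """
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no IR light visible this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```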
|
winning
|
## Inspiration
Throughout the 21st century, millions upon millions of people, especially seniors, have lost their lives to medication misuse or missed medicine. According to a study conducted by Johns Hopkins University, in the United States alone, 250,000 people pass away annually due to medication misuse, missed medications, and medical errors. After looking at some of the shocking statistics collected around the world regarding medication misuse, we decided to take action and create something that will prevent this issue altogether. That is where the idea of Remedy originated.
## What it does
Remedy is a dispenser/medication manager that keeps track of your medication schedule and gives you your medication accordingly. Remedy uses artificial intelligence to recognize users by their faces. When the dispenser recognizes a user, and the user indeed has a medication planned at that time, the dispense system activates and dispenses the exact amount of medication needed. The dispenser features numerous different medication types that are swappable through the app, and an easy-to-use hardware and software system. A caretaker/helper can set the time, medication type and amount for another person. This person may be a young child, a senior, someone with Alzheimer's, a visually impaired person, or just someone who doesn’t want to risk missing medications. Once the user or caretaker has input the medication information, this then gets uploaded to the Remedy Dispenser Box. After spending 5 minutes adding medication types and uploading, your work is officially done, Remedy will handle the rest! At this point, the Remedy Box activates facial recognition and dispenses your medication according to your schedule. This entire process of recognition and dispensing happens in under 5 seconds. In fact, Remedy is so simple that the user doesn’t even have to touch the machine, all they need to do is stand in front of the camera! On top of this, Remedy also includes an AI voice system to accommodate people with visual impairment. Remedy also includes an LCD panel that displays prompts to accommodate people with hearing impairment.
## How we built it
For the desktop application, we created a responsive graphical user interface using PyQt. This application is used when the user is inputting their medicine schedules. The application then saves the schedules to a database. Serial port communication is utilized to transfer the user schedules to the Raspberry Pi. We implemented face recognition on the Raspberry Pi using OpenCV to make sure that the Remedy Dispenser can identify users properly. The Raspberry Pi also has a voice system, for which we used the Google Text-to-Speech API along with the Google Translate API for multiple-language support.
On the hardware side, we utilized two Arduinos, one to drive the bright LCD and the other to drive the dispense and rotation servos. To control those Arduinos, we also decided to use a Raspberry Pi 4 inside the box. On the Raspberry Pi, we run a facial recognition AI system that determines the user, which allows the hardware to give them their medicine accordingly.
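The serial link between the desktop app and the Raspberry Pi can be as simple as newline-framed JSON. A sketch of that framing; the field names are illustrative, and in the real system the bytes would be written with pyserial's `Serial.write`:

```python
import json

def encode_schedule(schedules):
    """Frame a medication schedule as one newline-terminated JSON line,
    ready to be written over the serial port to the Raspberry Pi."""
    payload = json.dumps({"schedules": schedules}, separators=(",", ":"))
    return (payload + "\n").encode("utf-8")

def decode_schedule(line):
    """Parse one received line back into the schedule list."""
    return json.loads(line.decode("utf-8"))["schedules"]
```

Newline framing lets the Pi read with `readline()` and treat each complete line as one schedule update, which keeps the protocol trivial to debug.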
## Challenges we ran into
We ran into many challenges for both the hardware and the software. For the hardware, the biggest issue we ran into was with the dispensing and spinning mechanism. During testing, a lot of the glue that we used got weakened, so the entire system lost its strength over time. We had to rebuild a lot of the parts again, including using multiple 3d printers to complete everything in time. Other than that, the LCD had a broken resistor, and it was in SMD format, so we had to get a new one and carefully solder it on again. This is our first time working with SMD components as well, so it took multiple tries but everything worked out in the end.
On the software side, one of the biggest challenges was making the program compatible with both raspberry pi and windows. We had to make modifications to many parts of the code to get everything working. This was especially challenging since none of us had ever worked with a raspberry pi before. Other than that, implementing a proper face recognition tool and establishing/implementing a serial communication system protocol was also a challenge.
## Accomplishments that we're proud of
We are very proud to have a fully functioning minimum viable product. We ended up learning many new technologies and overcoming challenges, but in the end we are proud to have accomplished our goal.
## What we learned
Many of the libraries and frameworks we used for this project are tools we were using for the first time. As a result, we ended up learning quite a bit during this Hackathon. Some examples are how to utilize OpenCV for face recognition, how to create a viable dispensing system, how to use PyQt to create fully responsive desktop applications, how to create a database for custom data types/images, and how to create python programs for the raspberry pi ecosystem.
## What's next for Remedy
We are very proud of Remedy, and after Hack the 6ix, we hope to expand this idea further until we end up with a polished product that can potentially help millions across the world. Although we had very limited time to complete this project, we had a whole list of ideas that we wanted to implement. One day, we hope to make all of these ideas a reality. Some of these ideas that came to mind were:
* An advanced notification system through wearable technology, mobile applications and desktop applications (a vibration watch to notify when someone has medication, mobile and desktop notifications, etc.)
* Voice recognition system that allows the user to communicate with the machine
* Android and IOS applications
* Alarm feature to notify tampering of the machine
* A feature that gives notifications (and potentially contacts an emergency line) when the person has not been taking their medications for an extended period of time
|
## Inspiration
Taking medication or supplements is a necessary and vital aspect of life, but it can be tedious to count and sort out each type of medication on a weekly or even daily basis. The idea behind pill-e originally stemmed from the fact that many of our peers needed to make frequent visits to their elderly relatives' places to sort out their medication for the upcoming week. Residents at senior homes and hospitals may also require additional support with taking medication, especially for those who experience Alzheimer's or other types of dementia, decreasing the time that nurses and caretakers have to manage other issues.
We wanted to design a user-friendly solution that allows the medication-taker more independence and freedom to manage their weekly dosages, leaving more time for both them and any potential caretakers to focus on other aspects of life!
## What it does
Paired with an app interface, pill-e can be set up in just a few button clicks, where the user can indicate which pills should be taken on which days and times. The output of the pills is controlled using customized pill filtering containers powered by continuous servo motors and an LED-photoresistor setup.
A camera with a view of the tray of dispensed pills is integrated into pill-e for the purpose of connecting with the app, providing an option for caretakers to keep an eye on the status of the medication ingestion if they so choose. The screen at the top of pill-e features an animated smiley face, providing a friendly element to the user experience.
## How we built it
All mechanical components were designed and iterated upon using Solidworks, and produced via rapid prototyping. These were integrated with a Raspberry Pi, an Arduino, a breadboard, photoresistor, LED, display screen, servo motors, and a camera to provide the functionalities of pill-e mentioned above. The app was developed and integrated with the hardware components of pill-e via Python and HTML/CSS, specifically Flask and Bootstrap. We stored the user's requirements for the pill dispenser in the app to use along side the mechanical components.
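The user's saved form data ultimately answers one question per time slot: which compartments should dispense now? A toy sketch of that lookup; the schedule shape is an assumption for illustration, not the app's actual schema:

```python
def pills_due(schedule, day, time):
    """Return compartment -> pill count for the given day/time slot.

    `schedule` mirrors what the app form saves: each compartment maps to
    a list of (day, "HH:MM", count) entries.
    """
    due = {}
    for compartment, slots in schedule.items():
        for slot_day, slot_time, count in slots:
            if slot_day == day and slot_time == time:
                due[compartment] = due.get(compartment, 0) + count
    return due
```

The servo controller then only needs the resulting dict: rotate to each listed compartment and dispense `count` pills.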
## Challenges we ran into
Getting the camera to work, getting the display to work, developing a reliable method of outputting only one pill at a time, getting the app to connect to the hardware, saving data from the form... getting everything to work basically :)
## Accomplishments that we're proud of
Getting the pill filtering, camera, display, app aesthetic, and all the other hardware to work!
Storing the user's form response through refreshes!
## What we learned
How to use Flask and Python to integrate an app with hardware, and how to program various hardware components.
## What's next for pill-e
Developing more slots for pill-sorting containers, and introducing swappable containers for different pill shapes and sizes. Reducing the product size and refining the physical design. Fully integrating the app with the hardware. We also need to find an amplifier or op-amps so we have enough output power to produce audio reminders. Furthermore, making the app more dynamic to accommodate more pills.
|
# Pose-Bot
### Inspiration ⚡
**In these difficult times, where everyone is forced to work remotely and schools and colleges have gone digital, students are spending more time in front of a screen than ever before. This affects not only students but also employees who have to sit for hours in front of a screen. Prolonged exposure to a computer screen and sitting in a bad posture can cause severe health problems like postural dysfunction and can affect one's eyes. Therefore, we present to you Pose-Bot.**
### What it does 🤖
We created this application to help users maintain a good posture, catch early signs of postural imbalance, and protect their vision. It uses an image classifier from Teachable Machine, a **Google API**, to detect the user's posture and notify them to correct it or move away from the screen when they may not notice it themselves.
We first trained the model on the Google API to detect good posture, bad posture, and whether the user is too close to the screen, then integrated the model into our application.
We created a notification service so that the user can browse any other site and still get notified if their posture is bad. We have also included **EchoAR models to educate** children about the harms of sitting in a bad position and the importance of healthy eyes 👀.
### How We built it 💡
1. The website UI/UX was designed using Figma and then developed using HTML, CSS and JavaScript. TensorFlow.js was used to detect pose, and a JavaScript API to send notifications.
2. We used the Google Tensorflow.js API to train our model to classify user's pose, proximity to screen and if the user is holding a phone.
3. For training our model we used our own image as the train data and tested it in different settings.
4. This model is then used to classify the user's video feed to assess their pose and detect if they are slouching, too close to the screen, or sitting in a generally bad pose.
5. If the user sits in a bad posture for a few seconds, the bot sends a notification to the user to correct their posture or move away from the screen.
### Challenges we ran into 🧠
* Creating a model with good accuracy in a general setting.
* Reverse engineering the Teachable Machine's Web Plugin snippet to aggregate data and then display notifications at certain time intervals.
* Integrating the model into our website.
* Embedding EchoAR models to educate children about the harms of sitting in a bad position and the importance of healthy eyes.
* Deploying the application.
### Accomplishments that we are proud of 😌
We created a completely functional application which can make a small difference in our everyday health. We successfully made the application display system notifications which can be viewed across the system, even in other apps. We are proud that we could shape our idea into a functioning application which can be used by any user!
### What we learned 🤩
We learned how to integrate Tensorflow.js models into an application. The most exciting part was learning how to train a model on our own data using the Google API.
We also learned how to create a notification service for an application. And best of all, **playing with EchoAR models** to create functionality which could actually benefit students and help them understand the severity of the cause.
### What's next for Pose-Bot 📈
#### ➡ Creating a chrome extension
So that the user can use the functionality on their web browser.
#### ➡ Improve the pose detection model.
The accuracy of the pose detection model can be increased in the future.
#### ➡ Create more classes to help students concentrate
Include more functionality like screen time tracking and detecting if the user is holding their phone, so we can help users concentrate.
### Help File 💻
* Clone the repository to your local directory:
* `git clone https://github.com/cryptus-neoxys/posture.git`
* Install live-server to run it locally:
* `npm i -g live-server`
* Go to the project directory and launch the website using live-server:
* `live-server .`
* Voilà, the site is up and running on your PC.
* Press Ctrl + C to stop the live-server.
### Built With ⚙
* HTML
* CSS
* Javascript
  + Tensorflow.js
  + Web Browser API
* Google API
* EchoAR
* Google Poly
* Deployed on Vercel
### Try it out 👇🏽
* 🤖 [Tensorflow.js Model](https://teachablemachine.withgoogle.com/models/f4JB966HD/)
* 🕸 [The Website](https://pose-bot.vercel.app/)
* 🖥 [The Figma Prototype](https://www.figma.com/file/utEHzshb9zHSB0v3Kp7Rby/Untitled?node-id=0%3A1)
### 3️⃣ Cheers to the team 🥂
* [Apurva Sharma](https://github.com/Apurva-tech)
* [Aniket Singh Rawat](https://github.com/dikwickley)
* [Dev Sharma](https://github.com/cryptus-neoxys)
|
losing
|
## Inspiration
With caffeine being a staple in almost every student’s lifestyle, many are unaware of the amount of caffeine in their drinks. Although a small dose of caffeine increases one’s ability to concentrate, higher doses may be detrimental to physical and mental health. This inspired us to create The Perfect Blend, a platform that allows users to manage their daily caffeine intake, with the aim of preventing students from spiralling into a coffee addiction.
## What it does
The Perfect Blend tracks caffeine intake and calculates how long it takes to leave the body, ensuring that users do not consume more than the daily recommended amount of caffeine. Users can add drinks from the given options and it will update on the tracker. Moreover, The Perfect Blend educates users on how the quantity of caffeine affects their bodies with verified data and informative tier lists.
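The "how long it takes to leave the body" calculation can be sketched with caffeine's roughly five-hour average elimination half-life. This is a hedged sketch under that population-average assumption, not The Perfect Blend's actual formula:

```python
import math

CAFFEINE_HALF_LIFE_H = 5.0  # population average; individual half-lives vary widely

def caffeine_remaining(dose_mg, hours_elapsed, half_life_h=CAFFEINE_HALF_LIFE_H):
    """Milligrams of caffeine still in the body after `hours_elapsed` hours."""
    return dose_mg * 0.5 ** (hours_elapsed / half_life_h)

def hours_until_below(dose_mg, threshold_mg, half_life_h=CAFFEINE_HALF_LIFE_H):
    """Hours until a single dose decays below `threshold_mg`."""
    if dose_mg <= threshold_mg:
        return 0.0
    return half_life_h * math.log2(dose_mg / threshold_mg)

# A ~95 mg cup of coffee: after one half-life, half of it remains.
print(caffeine_remaining(95, 5.0))        # 47.5
print(hours_until_below(100, 25))         # 10.0 (two half-lives)
```

Summing `caffeine_remaining` over every logged drink gives the tracker's current-intake number at any moment.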
## How we built it
We used Figma to lay out the design of our website, then implemented it into Velo by Wix. The back-end of the website is coded using JavaScript. Our domain name was registered with domain.com.
## Challenges we ran into
This was our team’s first hackathon, so we decided to use Velo by Wix as a way to speed up the website building process; however, Wix only allows one person to edit at a time. This significantly decreased the efficiency of developing a website. In addition, Wix has building blocks and set templates making it more difficult for customization. Our team had no previous experience with JavaScript, which made the process more challenging.
## Accomplishments that we're proud of
This hackathon allowed us to ameliorate our web design abilities and further improve our coding skills. As first time hackers, we are extremely proud of our final product. We developed a functioning website from scratch in 36 hours!
## What we learned
We learned how to lay out and use colours and shapes on Figma. This helped us a lot while designing our website. We discovered several convenient functionalities that Velo by Wix provides, which strengthened the final product. We learned how to customize the back-end development with a new coding language, JavaScript.
## What's next for The Perfect Blend
Our team plans to add many more coffee types and caffeinated drinks, ranging from teas to energy drinks. We would also like to implement more features, such as saving the tracker progress to compare days and producing weekly charts.
|
## Inspiration
As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare.
Lots of people we know don’t take the time to look for sustainable items. People typically say if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But, consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
Designs in Figma, Bubble for backend, React for frontend.
## Challenges we ran into
Three beginner hackers! It was the first hackathon for three of us, and for two of those three it was our first time formally coding on a product. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and determining the specifics of the project (how to implement it, what audience/products to focus on, etc.)
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add on in the future:
Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches.
Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
|
## Inspiration
Inspired by mental health needs and the popular app BeReal, we thought it was important for users to have a space to look inwards and reflect on their feelings and support themselves.
## What it does
It prompts users to say how they're doing and complete one self care activity. Once that is completed, we have a large range of other activities available to browse.
## How we built it
We used the Android Firebase hackpack to get started, working in Android Studio with java and xml files. We did everything from mental health research to fullstack development.
## Challenges we ran into
Setting up the necessary tools was a large barrier coming from different platforms. Android Studio was also a learning curve since we are both complete app dev beginners and had never used any similar IDE.
## Accomplishments that we're proud of
Creating a finished product that's straight-forward yet effective and has the potential to help people much like ourselves.
## What we learned
We learned about the full process of brainstorming ideas, conceptualizing a product, and implementing those ideas into a completed interface.
## What's next for HAY (How Are You?)
We'd love to do more research and include accessible citations for those sources, and make the UI more engaging and easy to use. We'd like to add more tools for users such as goal tracking and achievements for continued self care.
|
winning
|
## Inspiration
Trump's statements include some of the most outrageous things said recently, so we wanted to see whether someone could distinguish between a fake statement and something Trump would say.
## What it does
We generated statements using Markov chains (<https://en.wikipedia.org/wiki/Markov_chain>) based on things Trump actually says. To show how difficult it is to distinguish between machine-generated text and the real thing, we made a web app to test whether someone could determine which was the machine-generated text (Drumpf) and which was real Trump speech.
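A word-level Markov chain like this can be sketched in a few lines of Python. The tiny corpus and `order=2` below are illustrative stand-ins, not the project's actual training data:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=20, seed=None):
    """Start from a random prefix and repeatedly sample a follower word."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain)))
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:          # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "we will build a great wall and we will make it great again"
chain = build_chain(corpus)
print(generate(chain, order=2, length=10, seed=42))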
## How we built it
* python + regex for parsing Trump's statements
* html/css/js frontend
* azure and aws for backend/hosting
## Challenges we ran into
Machine learning is hard. We tried to use recurrent neural networks at first for speech but realized we didn't have a large enough data set, so we switched to markov chains which don't need as much training data, but also have less variance in what is generated.
We actually spent a bit of time getting <https://github.com/jcjohnson/torch-rnn> up and running, but then figured out it was actually pretty bad as we had a pretty small data set (<100kB at that point)
Eventually we got to 200kB and were half-satisfied with the results, so we took the good ones and put them up on our web app.
## Accomplishments that we're proud of
First hackathon we've done where our front end looks good in addition to having a decent backend.
regex was interesting.
## What we learned
bootstrap/javascript/python to generate markov chains
## What's next for MakeTrumpTrumpAgain
* scrape Trump's twitter and add it
* get enough data to use a neural network
* dynamically generate Drumpf statements
If you want to read all the machine generated text that we deemed acceptable to release to the public, open up your javascript console, open up main.js. Line 4599 is where the hilarity starts.
|
## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that accepts many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user asks. It can also recursively summarize the file at different levels of compression.
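The page-ranking step amounts to embedding the question and every page, then sorting pages by cosine similarity. A minimal sketch, with toy 3-dimensional vectors standing in for real embedding-API output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_pages(question_vec, page_vecs, k=2):
    """Indices of the k pages whose embeddings are closest to the question."""
    ranked = sorted(range(len(page_vecs)),
                    key=lambda i: cosine(question_vec, page_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 3-d "embeddings" standing in for OpenAI's much larger vectors.
pages = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
question = [1.0, 0.05, 0.0]
print(top_pages(question, pages))  # [0, 2] — pages 0 and 2 are most similar
```

The top-ranked pages are then handed to the language model to formulate the actual answer.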
## How we built it
With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our life. We used Javascript, HTML and CSS for the website, and used it to communicate to a Flask backend that can run our python scripts involving API calls and such. We have API calls to openAI text embeddings, to cohere's xlarge model, to GPT-3's API, OpenAI's Whisper Speech-to-Text model, and several modules for getting an mp4 from a youtube link, a text from a pdf, and so on.
## Challenges we ran into
We had problems getting the backend on Flask to run on a Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to Javascript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 on the extracted information to formulate the answer, we believe we likely match the state of the art in quickly analyzing text and answering complex questions about it. The ease of use across many different file formats makes us proud that this project and website can be useful for so many people, so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness).
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper.
|
## Inspiration & Instructions
We wanted to somehow guilt people into realizing the state of their bank accounts by showing them progressive picture reminders as their wallpaper. Hopefully, the people who use our app will want to save more and also maybe increase their earnings by investing in stocks, SPROUTING personal monetary growth.
To use our app, you can simply install it on your phone. The APK link is below, and it is fully functional. When you first open Sprout, we ask for your bank account information. We then take you to the next screen which will show your current balance and let you set your baseline and goal amounts for your balance. Below that is the current status of your representative plant’s health based on these amounts. Be sure to check the toggle to change the wallpaper of your phone to the plant so that you’re always aware! You can also navigate to a “How To Invest” page from the menu where you can get up-to-date analytical estimations of how you could earn more money through investing.
For a detailed demo, please see our video.
## What it does
Sprout is an Android app to help students and the general populace know how their bank account is doing. It basically sees your current balance, takes the minimum threshold you don’t want your balance to go under and what you’d love to see your balance to be above. Then, the app shows you a cute plant representing the state of your bank account, either living comfortably, living luxuriously, or dying miserably. It will update your phone background accordingly so that you would be able to know at all times. You can also get to a “How To Invest” page, which can briefly educate you on how you could earn more money through investing.
## How we built it
Two of us had experience with Android Development, so we decided we wanted to make an Android app. We used Android Studio as our IDE and Java as our language of choice. (For our plant designs, we used Adobe Illustrator.) To simulate information about a possible user’s account balance, we used the MINT API to fetch financial data. In order to incentivize our users to maybe invest their savings, we used the NASDAQ API to get stock information and used that to project earnings from the user’s balance had they invested some of it in the past. We offer some brief advice on how to start investing for beginners as well.
## Challenges we ran into
Random small bugs, but we squashed the majority of them. Our biggest problem was thinking of a good idea we would be able to implement well in the time that we had!
## Accomplishments that we're proud of
Our app has many features and a great design!
## What we learned
We can get a lot done in a short amount of time :^D
## What's next for Sprout?
Background app refresh to automatically check as transactions come in so that the most accurate plant can be shown.
## Built With
* Java
* Android Studio
* NASDAQ API
* Mint API
* Adobe Illustrator (for Designs)
## Try it out
Link to APK: <https://github.com/zoedt/yhack-2016/blob/master/app-debug.apk>
|
partial
|
## Inspiration
To spread the joy of swag collecting.
## What it does
A Hack the North Simulator, specifically of the Sponsor Bay on the second floor. The player will explore sponsor booths, collecting endless amounts of swag along the way. A lucky hacker may stumble upon the elusive, exclusive COCKROACH TROPHY, or a very special RBC GOLD PRIZE!
## How we built it
Unity, Aseprite, Cakewalk
## What we learned
We learned the basics of Unity and git, rigidbody physics, integrating audio into Unity, and the creation of game assets.
|
## Inspiration
We wanted to make the interactions with our computers more intuitive while giving people with special needs more options to navigate in the digital world. With the digital landscape around us evolving, we got inspired by scenes in movies featuring Tony Stark, where he interacts with computers within his high-tech office. Instead of using a mouse and computer, he uses hand gestures and his voice to control his work environment.
## What it does
Instead of a mouse, Input/Output Artificial Intelligence, or I/OAI, uses a user's webcam to move their cursor to where their face OR hand is pointing towards through machine learning.
Additionally, I/OAI allows users to map their preferred hand movements for commands such as "click", "minimize", "open applications", "navigate websites", and more!
I/OAI also allows users to input data using their voice, so they don't need to use a keyboard and mouse. This increases accessibility for those who don't readily have access to these peripherals.
## How we built it
* Face tracker -> Dlib
* Hand tracker -> Mediapipe
* Voice recognition -> Google Cloud
* Graphical user interface -> tkinter
* Mouse and keyboard simulation -> pyautogui
## Challenges we ran into
Running this many programs at the same time slows things down considerably, so we needed to selectively choose which ones to keep during implementation. We solved this by using multithreading and carefully investigating efficiency.
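The multithreading approach can be sketched like this: each tracker runs in its own thread and pushes results onto a shared queue, so a slow tracker never blocks the others. The tracker bodies here are placeholders, not our actual Dlib/Mediapipe loops:

```python
import queue
import threading
import time

def tracker(name, out_q, stop, period):
    """Stand-in for a sensor loop (face, hand, or voice) pushing results to a queue."""
    while not stop.is_set():
        out_q.put((name, time.time()))   # a real tracker would push landmarks here
        time.sleep(period)

results = queue.Queue()
stop = threading.Event()
threads = [threading.Thread(target=tracker, args=(n, results, stop, 0.01), daemon=True)
           for n in ("face", "hand")]
for t in threads:
    t.start()
time.sleep(0.1)                          # let both trackers report for a while
stop.set()
for t in threads:
    t.join()
print(results.qsize() > 0)  # True — both trackers reported without blocking each other
```

A consumer thread can then drain the queue and fuse the latest face and hand readings into one cursor update.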
We also had a hard time mapping the face because of the angles of rotation of the head, increasing the complexity of the matching algorithm.
## Accomplishments we're proud of
We were able to implement everything we set out to do in a short amount of time, as there were a lot of integrations between multiple frameworks and our own algorithms.
## What we learned
How to use multithreading for multiple trackers, using openCV for easy camera frames, tkinter GUI building and pyautogui for automation.
## What's next for I/OAI
We need to figure out a way to incorporate features more efficiently, or get a supercomputer like Tony Stark!
By improving the features, people will gain more accessibility at their computers by simply downloading a program instead of buying expensive products like an eye tracker.
|
## 💡 Our Mission
Create an intuitive but tough game that challenges its players' speed and accuracy. We wanted to incorporate an active element so that the game can be played guilt-free!
## 🧠 What it does
It shows a sequence of scenes before beginning the game, including the menu and instructions. After a player makes it past the initial screens, the game begins where a wall with a cutout starts moving towards the player. The player can see both the wall and themselves positioned on the environment, as the wall appears closer, the player must mimic the shape of the cutout to make it past the wall. The more walls you pass, the faster and tougher the walls get. The highest score with 3 lives wins!
## 🛠️ How we built it
We built the model to detect the person with their webcam using Movenet and built a custom model using Angle Heuristics to estimate similarity between users and expected pose. We built the game using React for the front end, designed the scenes and assets and built the backend using python flask.
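The Angle Heuristics comparison can be sketched as computing joint angles from detected keypoints and scoring how far the user's angles are from the cutout's. The tolerance and scoring function below are illustrative assumptions, not our exact model:

```python
import math

def angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. shoulder-elbow-wrist."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def pose_similarity(user_angles, target_angles, tolerance=180.0):
    """1.0 when every joint angle matches the cutout, falling toward 0 as they diverge."""
    diffs = [abs(u - t) for u, t in zip(user_angles, target_angles)]
    return max(0.0, 1.0 - sum(diffs) / (tolerance * len(diffs)))

# A straight arm (collinear keypoints) measured against a 180° target: perfect match.
straight = angle((0, 0), (1, 0), (2, 0))
print(round(straight))                        # 180
print(pose_similarity([straight], [180.0]))   # 1.0
```

Comparing angles rather than raw keypoint positions makes the score invariant to where the player stands in the frame.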
## 🚧 Challenges we ran into
We were excited about trying out Unity, so we spent around 10-12 hours trying to work with it. However, it was a lot more complex than we initially thought, and we decided to pivot to building the UI with React towards the end of the first day. Although we became a lot more familiar with Unity and the structure of 2D games, it proved more difficult than we anticipated, and we had to change our game plan to build a playable game.
## 🏆 Accomplishments that we're proud of
Considering that we completely changed our tech stack at around 1AM on the second day of hacking, we are proud that we built a working product in an extremely tight timeframe.
## 📚What we learned
This was the first time working with Unity for all of us. We got a surface-level understanding of working with Unity and how game developers structure their games. We also explored graphic design to custom-design the walls. Finally, working with an Angle Heuristics model was interesting too.
## ❓ What's next for Wall Guys
Next steps would be to improve the UI and add multiplayer!
|
winning
|
## Inspiration
Minecraft has an interesting map mechanic where your character holds a map which "draws itself" while exploring the world. I am also very interested in building a plotter, which is a printer that uses a pen and (XY) gantry to produce images. These ideas seemed to fit together quite well.
## What it does
Press a button, copy GPS coordinates and run the custom "gcode" compiler to generate machine/motor driving code for the arduino. Wait around 15 minutes for a 48 x 48 output.
## How we built it
Mechanical assembly - Tore apart 3 dvd drives and extracted a multitude of components, including sled motors (linear rails). Unfortunately, they used limit switch + DC motor rather than stepper, so I had to saw apart the enclosure and **glue** in my own steppers with a gear which (you guessed it) was also glued to the motor shaft.
Electronics - I designed a simple algorithm to walk through an image matrix and translate it into motor code, that looks a lot like a video game control. Indeed, the stepperboi/autostepperboi main source code has utilities to manually control all three axes like a tiny claw machine :)
* U - Pen Up
* D - Pen Down
* L - Pen Left
* R - Pen Right
* Y/T - Pen Forward (top)
* B - Pen Backwards (bottom)
* Z - Zero the calibration
* O - Return to previous zeroed position
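The image-matrix walk can be sketched with the same command letters: a serpentine scan that lowers the pen on dark pixels and lifts it on light ones. This is a simplified sketch, not the actual stepperboi/autostepperboi source:

```python
def matrix_to_commands(img):
    """Serpentine walk over a binary image matrix, emitting plotter commands.

    Uses the command letters above: U/D pen up/down, L/R pen left/right,
    T to advance one row. Assumes 1 = dark pixel (pen down), 0 = light.
    """
    cmds = []
    pen_down = False
    for y, row in enumerate(img):
        cols = row if y % 2 == 0 else row[::-1]   # reverse every other row
        step = "R" if y % 2 == 0 else "L"
        for x, pixel in enumerate(cols):
            want_down = pixel == 1
            if want_down != pen_down:             # only toggle pen when needed
                cmds.append("D" if want_down else "U")
                pen_down = want_down
            if x < len(cols) - 1:
                cmds.append(step)
        if y < len(img) - 1:
            if pen_down:                          # lift pen before the row step
                cmds.append("U")
                pen_down = False
            cmds.append("T")
    return "".join(cmds)

print(matrix_to_commands([[1, 1, 0],
                          [0, 1, 1]]))  # DRRUTDLLU
```

The serpentine order halves wasted travel compared with returning to the left edge on every row, which matters at fifteen minutes per 48 x 48 print.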
## Challenges we ran into
* I have no idea about basic mechanics / manufacturing so it's pretty slipshod, the fractional resolution I managed to extract is impressive in its own right
* Designing my own 'gcode' simplification was a little complicated, and produces strange, pointillist results. I like it though.
## Accomplishments that we're proud of
* 24 hours and a pretty small cost in parts to make a functioning plotter!
* Connected to mapbox api and did image processing quite successfully, including machine code generation / interpretation
## What we learned
* You don't need to take MIE243 to do low precision work, all you need is superglue, a glue gun and a dream
* GPS modules are finnicky and need to be somewhat near to a window with built in antenna
* Vectorizing an image is quite a complex problem
* Mechanical engineering is difficult
* Steppers are *extremely* precise, and I am quite surprised at the output quality given that it's barely held together.
* Iteration for mechanical structure is possible, but difficult
* How to use rotary tool and not amputate fingers
* How to remove superglue from skin (lol)
## What's next for Cartoboy
* Compacting the design it so it can fit in a smaller profile, and work more like a polaroid camera as intended. (Maybe I will learn solidworks one of these days)
* Improving the gcode algorithm / tapping into existing gcode standard
|
## Inspiration
Too many times have broke college students looked at their bank statements and lament on how much money they could've saved if they had known about alternative purchases or savings earlier.
## What it does
SharkFin helps people analyze and improve their personal spending habits. SharkFin uses bank statements and online banking information to determine areas in which the user could save money. We identified multiple different patterns in spending that we then provide feedback on to help the user save money and spend less.
## How we built it
We used Node.js to create the backend for SharkFin, and we used the Viacom DataPoint API to manage multiple other API's. The front end, in the form of a web app, is written in JavaScript.
## Challenges we ran into
The Viacom DataPoint API, although extremely useful, was something brand new to our team, and there were few online resources we could look at. We had to understand completely how the API simplified and managed all the APIs we were using.
## Accomplishments that we're proud of
Our data processing routine is highly streamlined and modular and our statistical model identifies and tags recurring events, or "habits," very accurately. By using the DataPoint API, our app can very easily accept new APIs without structurally modifying the back-end.
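One way such a recurring-event tagger can work is to group charges by merchant and flag near-constant intervals between them. A hedged sketch with made-up thresholds, not SharkFin's actual statistical model:

```python
from collections import defaultdict
from statistics import mean, pstdev

def tag_habits(transactions, max_jitter_days=2.0, min_occurrences=3):
    """Tag merchants whose charges recur at a near-regular interval as 'habits'.

    transactions: list of (merchant, day_number) pairs.
    Returns (merchant, average_interval_days) for each detected habit.
    """
    by_merchant = defaultdict(list)
    for merchant, day in transactions:
        by_merchant[merchant].append(day)
    habits = []
    for merchant, days in by_merchant.items():
        days.sort()
        if len(days) < min_occurrences:
            continue
        gaps = [b - a for a, b in zip(days, days[1:])]
        if pstdev(gaps) <= max_jitter_days:   # gaps are nearly constant
            habits.append((merchant, round(mean(gaps), 1)))
    return habits

charges = [("coffee shop", 1), ("coffee shop", 8), ("coffee shop", 15),
           ("hardware store", 3), ("gym", 1), ("gym", 31), ("gym", 61)]
print(tag_habits(charges))  # [('coffee shop', 7.0), ('gym', 30.0)]
```

Once a charge is tagged as a weekly or monthly habit, the feedback step can estimate the yearly cost and suggest cheaper alternatives.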
## What we learned
## What's next for SharkFin
|
## Inspiration
Sometimes in lecture you need to point to something tiny on the presentation but no one really knows what you're pointing to. So we decided to build something that can read where you are pointing using a camera, and points a laser in that direction, which makes engagement in lectures and presentations much more accessible. We also realized that this idea actually branches off into a lot of other potential accessibility applications; it allows robotic control with pure human actions as input. For example, it could help artists paint large canvases by painting at where they point, or even be used as a new type of remote control if we replaced the laser with an RF signal.
## What it does
It tracks where the user's forearm is using a fully custom-built computer vision object detection program. All dimensions of the forearm are approximated via only one camera and it is able to generate a set of projected XY coordinates on the supposed presentation for the laser to point to. This corresponds to where the user is pointing at.
## How we built it
We heavily used OpenCV in Python to built the entire computer vision framework, which was tied to a USB webcam. Generated projection points were sent to an ESP32 via wifi, which fed separate coordinates to a dual-servo motor system which then moves the laser pointer to the correct spot. This was done using arduino.
## Challenges we ran into
First of all, none of us had actually used OpenCV on a project this size, especially not for object tracking. This took a lot of on-the-spot learning, online tutorials and experimenting.
There were also plenty of challenges that all revolved around the robustness of the system. Sometimes the contour detection software would detect multiple contours, so a challenge was finding a way to join them so the system wouldn't break. The projection system was quite off at the start, so a lot of manual tuning had to be done to fix that. The wifi data transmission also took a long time to figure out, as none of us had ever touched that stuff before.
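Joining fragmented contours can be as simple as pooling all of their points and taking one combined bounding region (in OpenCV this is typically done with `cv2.boundingRect` over `np.vstack`-ed contour arrays). A dependency-free sketch of the idea, not necessarily what Laser Larry does:

```python
def bounding_box(points):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of a point list."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def merge_contours(contours):
    """Join fragmented detections into one region by pooling their points."""
    points = [p for contour in contours for p in contour]
    return bounding_box(points)

# Two partial forearm blobs that the detector split apart:
fragments = [[(10, 10), (20, 15)], [(18, 12), (30, 25)]]
print(merge_contours(fragments))  # (10, 10, 30, 25)
```

Treating the fragments as one region keeps the downstream projection math stable even when the contour detector splits the forearm into pieces.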
## Accomplishments that we're proud of
We're quite proud of the fact that we were able to build a fully functional object tracking system without any premade online code in such a short amount of time, and how robust it was in action. It was also quite cool to see the motors react in real time to user input.
## What we learned
We learned some pretty advanced image processing and video capture techniques in OpenCV, and how to use the ESP32 controller to do stuff.
## What's next for Laser Larry
The biggest step is to make the projection system more accurate, as this would take a lot more tuning. Another camera also wouldn't hurt to get more accurate readings, and it would be cool to expand the idea to more accessibility applications discussed above.
|
winning
|
## Inspiration
In high school, I babysat a 5 year old named Alejandro. Alejandro was one of the **76,000,000** people in 2018 that has been diagnosed on the Autism Spectrum. Autism affects each person differently, which is why it is a spectrum, and for Alejandro, this means he is **non-verbal**.
If he wanted food he would point and make sounds, but never would he make a full sentence. If we were lucky we'd hear "pizza." Alejandro would always spend the day tired and exhausted, even just after waking up. His mom was so concerned that she took him to see many specialists. It wasn't until the 4th or 5th "second opinion" that they discovered that Alejandro was suffering from epileptic seizures throughout the night.
So today, we've created **hoot**.
## What it does
Many Americans suffer from various kinds of sleeping disorders, and we wanted to lend a hand. Hoot is a sleep disorder detection application: it keeps track of your nightly vitals and surfaces any abnormalities in the app for caretakers to assist. Hoot runs on any Fitbit smartwatch, alongside its mobile platform where you can get a breakdown of your triggers and the abnormalities around you at the point of most discomfort.
Hoot receives data from various sensors such as heart rate, atmospheric pressure and accelerometer readings. These data points are then continuously monitored by our mobile application in order to quickly catch any abnormalities. If hoot sees a combination of data points that it determines is likely to be a seizure, sleep apnea or any other sleeping discomfort, hoot will notify the caretaker to administer the necessary medication.
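The abnormality check can be sketched as a rule over windows of sensor readings: flag moments where heart rate spikes above a resting baseline while motion is also high. The thresholds below are illustrative placeholders, not hoot's clinical logic:

```python
def abnormal_windows(heart_rates, accel_magnitudes, hr_spike=30, motion_threshold=2.5):
    """Return indices of readings where heart rate jumps AND motion is high.

    Baseline is taken from the first few readings; thresholds are illustrative.
    """
    baseline = sum(heart_rates[:3]) / 3           # resting baseline from early readings
    alerts = []
    for i, (hr, accel) in enumerate(zip(heart_rates, accel_magnitudes)):
        if hr - baseline >= hr_spike and accel >= motion_threshold:
            alerts.append(i)
    return alerts

hr = [62, 60, 61, 64, 98, 102, 63]
accel = [0.1, 0.1, 0.2, 0.3, 3.1, 3.4, 0.2]
print(abnormal_windows(hr, accel))  # [4, 5] — notify the caretaker for these windows
```

In the real pipeline the same check would run continuously over the live Fitbit stream, with the caretaker notification fired on the first flagged window.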
## How we built it
hoot is composed of three separate platforms, each serving as a standalone service.
Athene - The driver/API which connects the various platforms and services. Built on a Node JS backend, later converted to a StdLib serverless platform, Athene provides a consistent data stream through WebSockets and runs fully serverless.
Strix - The elegant mobile application giving you a breakdown of your sleeping patterns and abnormalities. With a React Native client, Strix is in charge of showing you real-time data when you most need it.
Ninox - The Fitbit service which enables open messaging between the watch and the other platforms; sharing data has never been such a breeze. (Note: Ninox only works with secure web protocols; StdLib provided secure serverless connectivity between all three platforms. We initially used ngrok, but latency and elegance demanded the switch.)
## Challenges we ran into
Web security protocols required certificates, but we were able to overcome this with the use of StdLib. We also struggled with the way the Fitbit SDK formatted the sensor data.
## Accomplishments that we're proud of
Dealing with the cold; we're from Miami /:
## What we learned
Bring more sweaters
## What's next for hoot
Muse headband integration!
|
## Inspiration
“Emergency” + “Need” = “EmergeNeed”
Imagine a pleasant warm Autumn evening, and you are all ready to have Thanksgiving dinner with your family. You are having a lovely time, but suddenly you notice a batch of red welts, swollen lips, and itchy throat. Worried and scared, you rush to the hospital just to realize that you will have to wait for another 3 hours to see a doctor due to the excess crowd.
Now imagine that you could quickly talk to a medical professional who could recommend going to urgent care instead to treat your allergic reaction. Or, if you were recommended to seek emergency hospital care, you could see the estimated wait times at different hospitals before you left. Such a system would allow you to get advice from a medical professional quickly, save time waiting for treatment, and decrease your risk of COVID exposure by allowing you to avoid large crowds.
## What it does
Our project aims to address three main areas of healthcare improvement. First, there is no easy way for an individual to know how crowded a hospital will be at a given time. Especially in the current pandemic environment, users would benefit from information such as **crowd level and estimated travel times to different hospitals** near them. Knowing this information would help them avoid unnecessary crowds and the risk of COVID19 exposure and receive faster medical attention and enhanced treatment experience. Additionally, such a system allows hospital staff to operate more effectively and begin triaging earlier since they will receive a heads-up about incoming (non-ambulance) patients before they arrive.
Second, online information is often unreliable, and specific demographics may not have access to a primary care provider to ask for advice during an emergency. Our interface allows users to access **on-call tele-network services specific to their symptoms** easily and therefore receive advice about options such as monitoring at home, urgent care, or an emergency hospital.
Third, not knowing what to expect contributes to the elevated stress levels surrounding an emergency. Having an app service that encourages users to **actively engage in health monitoring** and providing **tips about what to expect** and how to prepare in an emergency will make users better equipped to handle these situations when they occur. Our dashboard offers tools such as a check-in journal to log their mood gratitudes and vent about frustrations. The entries are sent for sentiment analysis to help monitor mental states and offer support. Additionally, the dashboard allows providers to assign goals to patients and monitor progress (for example, taking antibiotics every day for 1 week or not smoking). Furthermore, the user can track upcoming medical appointments and access key medical data quickly (COVID19 vaccination card, immunization forms, health insurance).
## How we built it
Our application consists of a main front end and a backend.
The front end was built using the Bubble.io interface. Within the Bubble service, we set up a database to store user profile information, create emergency events, and accumulate user inputs and goals. The Bubble Design tab and connection to various API’s allowed us to develop different pages to represent the functionalities and tools we needed. For example, we had a user login page, voice recording and symptom input page, emergency event trigger with dynamic map page, and dashboard with journaling and calendar schedule page. The Bubble Workflow tab allowed us to easily connect these pages and communicate information between the front and back end.
The back end was built using Python Flask. We also used Dialogflow to map the symptoms to the doctor's speciality the user should visit. We handled calls to the InterSystems API in the backend server and processed data from the front end. We created synthetic data to test on.
## Challenges we ran into
This project was a great learning experience, and we had a lot of fun (and frustration) working through many challenges. First, we needed to spend time coming up with a project idea and then refining its scope. To do this, we talked with various sponsors and mentors to get feedback on our proposal and learn about the industry and the actual needs of patients. Once we had a good roadmap for the features we wanted, we had to find data that we could use. Currently, hospitals are not required to provide any information about estimated wait times, so we had to find an alternative way to assess this. We decided to address it by developing our own heuristic that considers hospital distance, number of beds, and historic traffic estimation. This is a core functionality of our project, but also the most difficult, and we are still working on optimizing this metric. Another significant challenge was learning how to use the Bubble service, specifically setting up the Google Maps functionality we wanted and connecting the backend with the frontend through Bubble's API. We sought mentor help, and are still trying to debug this step. Another ongoing challenge is implementing the call-a-doc feature with the Twilio API. Finally, our team consists of members from drastically different time zones, so we needed to be proactive about scheduling meetings and communicating progress and tasks.
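The wait-time heuristic could take the shape of a weighted score over the three factors named above. The weights and field names in this sketch are our own illustrative assumptions, not the team's tuned metric:

```python
# Illustrative weighted-score heuristic for ranking hospitals when no
# official wait-time data exists. Weights are hypothetical placeholders.

def hospital_score(distance_km, num_beds, traffic_factor,
                   w_dist=1.0, w_beds=50.0, w_traffic=10.0):
    """Lower score = likely shorter door-to-treatment time.

    Fewer beds means a larger capacity penalty; traffic_factor is a
    0..1 estimate of historic congestion on the route.
    """
    capacity_penalty = w_beds / max(num_beds, 1)
    return w_dist * distance_km + capacity_penalty + w_traffic * traffic_factor

def rank_hospitals(hospitals):
    """Sort candidate hospitals from best to worst estimated option."""
    return sorted(hospitals, key=lambda h: hospital_score(
        h["distance_km"], h["beds"], h["traffic"]))
```

A nearby, high-capacity hospital on a clear route will then rank above a distant, small one, which matches the intuition behind the heuristic.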
## Accomplishments that we're proud of
We are proud of our idea - indeed the amount of passion put into developing this toolkit to solve a meaningful problem is something very special (Thank you TreeHacks!).
We are proud of the technical complexity we accomplished in this short time frame. Our project idea seemed very complex, with lots of features we wanted to add.
We also collaborated with teammates from different parts of the world and integrated different APIs (Bubble, Google Maps, InterSystems).
## What we learned
We learned a lot about the integration of multiple frameworks. As newbies in web development, making an impactful application is one of the things that we are proud of. Most importantly, the research and problem identification were the most exciting parts of the whole project. We got to know the possible shortcomings of our present-day healthcare systems and how we can improve them. On the technical side, we learned Bubble, web scraping, NLP, integrating with the InterSystems API, Dialogflow, and Flask.
## What's next for EmergeNeed
We could not fully integrate our backend with our frontend web application built on Bubble, as we faced some unexpected technical difficulties at the end. The calling feature needs to be implemented fully (currently it just records user audio). We aim to make EmergeNeed a full-fledged, customer-friendly application. We plan to implement our whole algorithm (ranging from finding hospitals with the proper machines and less commute time to integrating real-time speech-to-text recognition) for large datasets.
|
## Inspiration
According to the National Sleep Foundation, about half of U.S. adult drivers admit to consistently getting behind the wheel while feeling drowsy. About 20% admit to falling asleep behind the wheel at some point in the past year – with more than 40% admitting this has happened at least once in their driving careers. And a study by the AAA Foundation for Traffic Safety estimated that 328,000 drowsy driving crashes occur annually.
Being drowsy behind the wheel makes a person 3x more likely to be involved in a motor vehicle accident, and driving after going more than 20 hours without sleep is the equivalent of driving with a blood-alcohol concentration of 0.08% - the legal limit.
Seeing all of this, we wanted to create a solution that would save lives, and make roads safer.
## What it does
SleepStop is a system that alerts drivers when they are falling asleep, and contacts emergency services in case of accidents.
The first system we developed is an artificial intelligence procedure that detects when a driver is falling asleep. When it detects that the driver has closed their eyes for a prolonged period, it plays a sound that prompts the driver to wake up, and pull over. The warning sounds are inspired by airplane warning sounds, which are specially designed and chosen for emergency situations. We believe the sudden loud sounds are enough to wake a driver up for long enough to allow them to pull over safely.
The second system we developed is a crash detection system. When an accident is detected, authorities are immediately contacted with the location of the crash.
## How we built it
For the sleep detection system, we used OpenCV to create an artificial intelligence that watches the driver's eyes and times how long they have been closed. When the eyes have been closed long enough to trigger our system, a sound is played to wake the driver up.
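Independent of the OpenCV detection itself, the alarm-triggering logic can be sketched as a small per-frame state machine. The frame rate and the alarm threshold here are illustrative guesses, not the project's actual tuning:

```python
# Sketch of the alarm-timing logic layered on top of per-frame eye
# detection. OpenCV would supply the eyes_closed flag each frame; the
# fps and threshold values below are hypothetical.

class DrowsinessMonitor:
    """Counts consecutive closed-eye frames and decides when to alarm."""

    def __init__(self, fps=30, alarm_after_seconds=1.5):
        self.frames_needed = int(fps * alarm_after_seconds)
        self.closed_frames = 0

    def update(self, eyes_closed):
        """Feed one frame's detection result.

        Returns True if the wake-up sound should play on this frame.
        """
        if eyes_closed:
            self.closed_frames += 1
        else:
            self.closed_frames = 0  # a normal blink resets the counter
        return self.closed_frames >= self.frames_needed
```

Separating the timing from the vision code also makes the trigger easy to test without a camera.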
For the crash system, by recognizing when airbags are opened, and/or when the driver has closed their eyes for an elongated period of time, we can determine when a crash has occurred. When a crash is detected, emergency services are contacted via Twilio.
## Challenges we ran into
One of the major problems with this project was that the code ran on a raspberry pi, but almost none of our development environments had Linux installed. As a result, we had to be very careful when testing to make sure everything was cross compatible.
We also ran into a challenge when creating an executable, and were ultimately unable to resolve it.
We tried interacting with smart watches in order to send a vibration to the driver on top of the loud sound. Unfortunately, we had to scrap this idea as making custom interactions with a FitBit proved far too challenging.
## Accomplishments that we're proud of
We are proud that our prototype has such advanced functionality, and that we were able to produce it in a small time-frame.
We are also proud to have a working eye detection system, car crash system, sms alerting, and alarm sounding.
The thing we are most proud of is the potential of this hack to save lives.
## What we learned
While working on Sleep Stop, we learned a lot about working as a team. We had to communicate well to make sure that we weren’t writing conflicting code.
We also learned about how to structure large Python projects, using some of Python’s more advanced features.
Another thing that we had to learn to make this project was cross-platform compatibility in Python. Initially, our project would work on Linux but break on Windows.
We learned how to reliably detect facial features such as closed eyes in Python using OpenCV.
## What's next for Sleep Stop
Right now we have a prototype but in the future it would be beneficial to create a highly-integrated product.
We can also envision working with car manufacturers to make sleep-detection a built-in safety feature.
Finally, we believe stimuli other than loud sounds, such as vibrations, are desirable to ensure the driver wakes up.
|
partial
|
# Bugreport
*The first morning moving into a new apartment, I’d been hit with the realization that I was sharing my room with unwanted strangers. I’d forgotten to ask my landlord prior to moving in if there were bugs in the apartment, and I got my answer when I woke up in the morning having received countless unwanted welcoming gifts in the form of bite marks on myself. Realizing the only website that provided a means to report infestations was 20 years old and barely maintained, I decided to take charge and stop others from being in the same egregious situation.* - Nicholas
The project allows potential renters to access reports by other anonymous users on bug infestations in their apartment. Users may search by address, and a map view will be displayed of nearby locations with all apartments that have reports of bug infestations, the type, as well as the severity. It was built using a combination of Java, Spring Boot, and Postgres hosted on RDS for the backend, and React for the frontend. Early on during the development of the website, we ran into the question of how to stop fake user reviews. The obvious solution against bots would be to use email authentication during user sign-in, but since the website was meant to be community-driven, anonymous, and readily available, it didn't align with our vision to require mandatory registration.
Digging through the legality of renting out an apartment with a known infestation, we learned that upon being asked, landlords **must** respond truthfully to questions about bugs. This, then, was our answer to the question: there was no need to moderate the reviews ourselves. Instead, we could have the community decide on the legitimacy of reviews through a large number of reports. There's no way to completely filter out all disingenuous reviews, so if there is suspicion that there are bugs, the go-to answer would be asking the landlord directly. The website should simply serve as a guideline, and a reminder for potential renters to do so.
One big challenge that we faced was coordinating the use of many frameworks, APIs, and libraries together. There were incompatibilities left and right, and a great portion of our time was spent resolving unhelpful error codes. Next time, even though coding itself is more enjoyable, more focus should go into planning out all the details of the project, so the coding process is much smoother. In the future, we would like to add much more customization to Bugreport, including a peer review system and a system to filter by severity.
|
## Inspiration
As students at university, we know that after first year it is very difficult to find all the information you need for off-campus living, and we wanted to solve this problem to help students find the information that they need. No central website or collection of information exists that can aid students with this problem, and that's where we come in!
## What it does
Using a constantly updated database, we track what listings are available near the university for prospective housing options in a certain area. The results are then displayed in a meaningful and simple fashion, providing the user with all the information required to make an informed decision, such as: the relative location of housing to points of interest (e.g. the university, restaurants, gyms), comparisons of houses by price and size, and a price average to give the user a point of reference when looking at house prices.
## How we built it
Using various platforms and different languages, we built our website with many different moving parts. One part collects data about available housing from the major landlords of the area and stores it in the database. The second part takes the data from the database and interprets it in a meaningful manner. That information is then displayed on a sleek and elegant website which is accessible to the end user.
## Challenges we ran into
Collecting data from websites despite varying HTML source code and CSS, applying our data in the way that best serves the consumer, interacting with various APIs, and gluing it all together.
## Accomplishments that we're proud of
We are proud that we managed to collect 50 property listings, which would effectively provide many students with ample choices of where to live, easing the process for them. We are also proud of how well we worked together, especially since most of our team members come from different universities and had only met at the hackathon.
## What we learned
We learned integral software design skills which we incorporated into our project's design. We also learned about different types of APIs, specifically the Google API, and how to interact with them.
## What's next for FindLiving.Space
We are going to scale it to incorporate data about listings in more cities to assist students from other universities facing similar difficulties finding affordable housing. We would also like to offer features which would benefit landlords as well, giving them an estimate of their property's value based on the, hopefully, thousands of property listings on the site. We want to create the ideal solution to the problem we are trying to solve.
|
## Inspiration
When creating R4R, we took into account factors that might shape one's renting needs, such as accessibility, efficiency, and our personal experiences and frustrations as university students trying to rent off-campus housing. R4R is a platform that is meant to support those needs by providing a faster, more personalized approach to house-hunting.
## What it does
Rentals 4 Real is a platform which automates the rental search process for you. All you have to do is tell us what you are looking for and we will do the rest!
#### Key Features
* Live SMS Alerts - Real time house hunting
+ When new properties are added near you, you are *automatically* notified so you can grab good deals in time!
+ Aggregates from several different platforms; a universal solution for renting, subletting and accommodation.
+ Get SMS alerts for properties that you have saved, so you are always on top of the house search!
* Smart recommendations - Find your dream home
+ Our smart recommendation system will find you the best properties based on your preferences.
+ You can also filter by price, location, and more!
+ Save your favourite properties to view later.
* Easy to use UI
+ Our simple and intuitive UI makes it easy to find your dream home.
* Automatic property updates
+ Our database is updated daily, so you will always have the latest listings and will receive an alert when a new one is posted.
## How we built it
#### The Database
We store all the listings and user data in one MongoDB database hosted by MongoDB Atlas. We chose a MongoDB database because of its excellent Geospatial indexing that allows us to search for listings near a user's location.
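The geospatial search mentioned above boils down to a `$near` query against a 2dsphere index. A sketch of the filter document such a query uses (the `location` field name and distances are our assumptions about the schema):

```python
# Sketch of a MongoDB "$near" filter for finding listings close to a
# user. Assumes a GeoJSON `location` field with a 2dsphere index, e.g.:
#   db.listings.create_index([("location", "2dsphere")])

def near_query(lng, lat, max_meters=5000):
    """Build a filter matching listings within max_meters of a point."""
    return {
        "location": {
            "$near": {
                # GeoJSON order is [longitude, latitude]
                "$geometry": {"type": "Point", "coordinates": [lng, lat]},
                "$maxDistance": max_meters,
            }
        }
    }
```

With pymongo this would be used as something like `db.listings.find(near_query(-79.39, 43.66))`, returning listings sorted nearest-first.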
#### The REST API
We decided to use Python with FastAPI for the REST API backend. FastAPI has features like type-hint support and automatic documentation generation that were vital for this project. The backend uses Python's MongoDB driver to connect to and query the database, preparing data for the frontend to use.
#### The frontend
For the design, we transferred rough sketches from ProCreate onto fully-fleshed Figma designs. Our creative process centred mainly around the accessibility and ease of access to the platform and its features. Furthermore, we adopted a minimalist approach to both the logo and website to increase its appeal and simplicity.
The frontend uses vanilla JavaScript (that's right, no React, Vue, or Angular) with Bootstrap CSS and HTML for simplicity when designing the UI.
#### The web scraper
The web scraper is built in Node.js using the Puppeteer library and express, and is responsible for finding new listings online by scraping multiple sites in order to find the newest listings. Once a new listing is posted, the Node.js express server finds relevant users from the MongoDB database and alerts them of the new listing using Twilio's SMS API.
## Challenges we ran into
Deploying the site was not easy; there were a lot of errors and "but it works on my machine" moments. The root domain was not working properly for a while, and we were almost forced to use a subdomain. Luckily, we managed to fix it, and it works as planned. Another big challenge was integrating the frontend and backend: the frontend team had different ideas from the backend team, and when it was time to integrate them, we had to adjust. Finally, the biggest obstacle we ran into was time; we all strongly believe in the potential of the app, but we kept holding ourselves back by saying "oh, there is no time for this." We were always against the clock.
## Accomplishments that we're proud of
We are proud of our teamwork and how we managed to present a working product despite all the challenges. We are glad to have worked together as a team to create something we believe in. We are also happy to have the domain r4r.tech, which is a very rare domain; it's not only 3 letters but it's also a palindrome.
## What we learned
Participating in this hackathon and developing this project was incredibly rewarding. We used numerous software tools, some of which we had not heard of before. We were all able to learn from each other and from the process of creating functioning software from scratch. We developed our skills with APIs, learned how to debug servers, and honed our skills with databases. Most of all, we learned how to manage our time to effectively complete all necessary tasks in the given time.
## What's next for R4R - Rentals 4 Real
There is much that can be done with this project. For instance, we can expand our databases to accommodate international users by scraping data from global housing websites in addition to Canada. Furthermore, we could implement the ability to "save/favourite" certain houses so that users can receive specific notifications about homes they are interested in.
|
losing
|
## Inspiration
There's something about brief glints in the past that just stop you in your tracks: you dip down, pick up an old DVD of a movie while you're packing, and you're suddenly brought back to the innocent and carefree joy of when you were a kid. It's like comfort food.
So why not leverage this to make money? The ethos of nostalgic elements from everyone's favourite childhood relics turns heads. Nostalgic feelings have been repeatedly found in studies to increase consumer willingness to spend money, boosting brand exposure, conversion, and profit.
## What it does
Large Language Marketing (LLM) is a SaaS built for businesses looking to revamp their digital presence through "throwback"-themed product advertisements.
Tinder x Mean Girls? The Barbie Movie? Adobe x Bob Ross? Apple x Sesame Street? That could be your brand, too. Here's how:
1. You input a product description and target demographic to begin a profile
2. LLM uses the data with the Co:here API to generate a throwback theme and corresponding image descriptions of marketing posts
3. OpenAI prompt engineering generates a more detailed image generation prompt featuring motifs and composition elements
4. DALL-E 3 is fed the finalized image generation prompt and marketing campaign to generate a series of visual social media advertisements
5. The Co:here API generates captions for each advertisement
6. You're taken to a simplistic interface where you can directly view, edit, generate new components for, and publish each social media post, all in one!
7. You publish directly to your business's social media accounts to kick off a new campaign 🥳
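Steps 1 through 5 form a linear generation pipeline. A minimal sketch of how the orchestration might look, with each external API injected as a callable; every function name here is our own illustration, not the product's actual code:

```python
# Hypothetical orchestration of the throwback-ad pipeline. The four
# injected callables stand in for the Co:here, OpenAI, and DALL-E 3
# API wrappers described in the write-up.

def build_campaign(product, demographic, cohere_theme, openai_prompt,
                   dalle_image, cohere_caption, n_posts=3):
    """Run one campaign: theme -> detailed prompts -> images -> captions."""
    # Step 2: Co:here picks a throwback theme plus per-post descriptions.
    theme, descriptions = cohere_theme(product, demographic, n_posts)
    posts = []
    for desc in descriptions:
        # Step 3: OpenAI expands motifs and composition into a prompt.
        detailed = openai_prompt(theme, desc)
        # Step 4: DALL-E 3 renders the visual advertisement.
        image_url = dalle_image(detailed)
        # Step 5: Co:here writes the social media caption.
        caption = cohere_caption(theme, desc)
        posts.append({"image": image_url, "caption": caption})
    return {"theme": theme, "posts": posts}
```

Injecting the clients as callables keeps the pipeline testable with stubs, which matters given how rate-limited the real APIs are.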
## How we built it
* **Frontend**: React, TypeScript, Vite
* **Backend**: Python, Flask, PostgreSQL
* **APIs/services**: OpenAI, DALL-E 3, Co:here, Instagram Graph API
* **Design**: Figma
## Challenges we ran into
* **Prompt engineering**: tuning prompts to get our desired outputs was very, very difficult, where fixing one issue would open up another in a fine game of balance to maximize utility
* **CORS hell**: needing to serve externally-sourced images back and forth between frontend and backend meant fighting a battle with the browser -- we ended up writing a proxy
* **API integration**: with a lot of technologies being incorporated over our frontend, backend, database, data pipeline, and AI services, massive overhead was introduced into getting everything set up and running on everyone's devices -- npm versions, virtual environments, PostgreSQL, the Instagram Graph API (*especially*)...
* **Rate-limiting**: the number of calls we wanted to make versus the number of calls we were allowed was a small tragedy
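The proxy mentioned under "CORS hell" essentially fetches an externally hosted image server-side and re-serves it from our own origin. One safety check such a proxy needs is an allowlist so it cannot be abused as an open proxy; the host below is a hypothetical example, not necessarily the one the team used:

```python
# Sketch of the URL check a backend image proxy might perform before
# fetching and re-serving an externally hosted image. The allowlisted
# host is a hypothetical example.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example-cdn.net"}

def is_proxyable(url):
    """Only proxy https URLs whose host is on an explicit allowlist."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS
```

A Flask route would call this before fetching the image with the server's HTTP client and returning the bytes with permissive CORS headers.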
## Accomplishments that we're proud of
We're really, really proud of integrating a lot of different technologies together in a fully functioning, cohesive manner! This project involved a genuinely technology-rich stack that allowed each one of us to pick up entirely new skills in web app development.
## What we learned
Our team was uniquely well-balanced in that every one of us ended up being able to partake in everything, especially things we hadn't done before, including:
1. DALL-E
2. OpenAI API
3. Co:here API
4. Integrating AI data pipelines into a web app
5. Using PostgreSQL with Flask
6. For our non-frontend-enthusiasts, atomic design and state-heavy UI creation :)
7. Auth0
## What's next for Large Language Marketing
* Optimizing the runtime of image/prompt generation
* Text-to-video output
* Abstraction allowing any user log in to make Instagram Posts
* More social media integration (YouTube, LinkedIn, Twitter, and WeChat support)
* AI-generated timelines for long-lasting campaigns
* AI-based partnership/collaboration suggestions and contact-finding
* UX revamp for collaboration
* Option to add original content alongside AI-generated content in our interface
|
## Inspiration
We are all a fan of Carrot, the app that rewards you for walking around. This gamification and point system of Carrot really contributed to its success. Carrot targeted the sedentary and unhealthy lives we were leading and tried to fix that. So why can't we fix our habit of polluting and creating greenhouse footprints using the same method? That's where Karbon comes in!
## What it does
Karbon gamifies how much you can reduce your CO₂ emissions by in a day. The more you reduce your carbon footprint, the more points you can earn. Users can then redeem these points at Eco-Friendly partners to either get discounts or buy items completely for free.
## How we built it
The app was created using Swift and SwiftUI for the user interface. We also used HTML, CSS, and JavaScript to make a web app that displays the information as well.
## Challenges we ran into
Initially, when coming up with the idea and the economy of the app, we had difficulty modelling how points would be distributed by activity. Additionally, coming up with methods to track CO₂ emissions conveniently became an integral challenge to ensure a clean and effective user interface. As for technicalities, cleaning up the UI was a big issue, as a lot of our time went into creating the app itself since we did not have much experience with the language.
## Accomplishments that we're proud of
* Displaying the data using graphs
* Implementing animated graphs
## What we learned
* Using animation in Swift
* Making Swift apps
* Making dynamic lists
* Debugging unexpected bugs
## What's next for Karbon
A fully functional web app, along with proper back-and-forth integration with the mobile app.
|
## Inspiration
It all started on Galentine's Day when our team found ourselves in a frenzy trying to track down a cake recipe we had seen on Instagram. Despite our best efforts, we couldn't locate the post amidst the sea of saved content across platforms like TikTok, Instagram, and Twitter. This experience led us to a realization: the lack of a robust search mechanism for managing our social media bookmarks was a widespread problem.
Bookmarks.ai revolutionizes the way you organize and search through your saved bookmarks across social media platforms.
## What it does
Bookmarks.ai serves as a platform to aggregate your saved posts (bookmarks) throughout your social media platforms and then runs a semantic search on the captions and hashtags of the post to retrieve relevant posts to your query when you are searching for specific content. In addition, it leverages k-means clustering and sentence similarity to group similar posts together in clusters, which are available on the home page for easy access and browsing.
## How we built it
1. We began by customizing the Instagram API which collects post metadata such as the image link, the web link, the caption, and the post ID. From there, we stored this information in the format of a JSON file.
2. These captions were then split using a CharacterTextSplitter, embedded with Langchain, and the vector embeddings were stored using MongoDB Atlas.
3. After creating a vector search index in MongoDB Atlas, we were able to successfully run queries with semantic search and the post details of the relevant posts would be returned as the result.
4. Based on the array of post captions, we were able to run K-Means Clustering using sentence similarity based on a model we found on HuggingFace to divide the posts into clusters. We were able to leverage prompt engineering and an OpenAI LLM to automatically come up with names/descriptions of each cluster (such as "cooking" or "fashion").
5. Our front end was built using React.js, HTML, and CSS.
Technologies and Frameworks: React.js, HTML, CSS, Python, LangChain, HuggingFace, MongoDB, OpenAI
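The clustering in step 4 can be sketched end-to-end with toy two-dimensional vectors standing in for the HuggingFace sentence embeddings; the vectors, the value of k, and the deterministic initialization are all simplifications for illustration:

```python
# Plain k-means over caption "embeddings". Real sentence embeddings are
# high-dimensional; the 2-D vectors and first-k initialization here are
# toy simplifications (production k-means would use k-means++).

def kmeans(vectors, k, iters=10):
    """Return a cluster label (0..k-1) for each vector."""
    centroids = [list(v) for v in vectors[:k]]  # deterministic init
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k), key=lambda c: sum(
            (v[d] - centroids[c][d]) ** 2 for d in range(len(v))))
            for v in vectors]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

# Toy caption embeddings forming two obvious groups
# (e.g. "cooking" posts vs "fashion" posts).
embeddings = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1],
              [0.8, 0.9], [0.15, 0.15], [0.85, 0.85]]
labels = kmeans(embeddings, k=2)
```

Once posts are grouped this way, each cluster's captions can be handed to the LLM to name the cluster ("cooking", "fashion", and so on) for the home page.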
## Challenges we ran into
One of the biggest challenges we ran into was Instagram authentication and being able to scrape saved Instagram posts. Since most social media apps have strict authentication guidelines, accessing embed URLs for each Instagram post was often difficult, and there was also a rate limit on Instagram's API. Additionally, all of our teammates were new to semantic search and k-means clustering, so it took us a while to figure out how to create vector embeddings and the vector search index. It was also difficult to integrate the backend with the frontend, as our backend would often return JSON files consisting of web links to posts, and we faced difficulty converting them to embed codes because the Meta Developer Tools API used to do so was often unpredictable in generating access tokens.
## Accomplishments that we're proud of
This was our first time exploring semantic search and k-means clustering. We were excited that we were able to build a project with both of these features fully working by the end of the project. We were also excited about the problem we were solving—we had personally faced challenges managing our bookmarks and it was cool to use LLMs in a meaningful way to make that process significantly easier.
## What we learned
This was a great opportunity for all of us to learn about new frameworks and technologies we hadn't used before, become comfortable debugging them, and quickly come up with a backup plan when things weren't going smoothly in the end. Ultimately, we were able to create a product that we believe would be personally useful to us.
## What's next for Bookmarks.ai
We would love to include AI agents that allow you to automatically create action items from posts that you save and query (think creating a grocery list if you view a baking post). We would also love to be able to intelligently process the images in posts and search based on that as sometimes captions and hashtags may not be enough information.
|
winning
|
## Inspiration
We really are passionate about hardware, however many hackers in the community, especially those studying software-focused degrees, miss out on the experience of working on projects involving hardware and experience in vertical integration.
To remedy this, we came up with modware. Modware provides the toolkit for software-focused developers to branch out into hardware and/or to add some verticality to their current software stack with easy to integrate hardware interactions and displays.
## What it does
The modware toolkit is a baseboard that interfaces with different common hardware modules through magnetic power and data connection lines as they are placed onto the baseboard.
Once modules are placed on the board and are detected, the user then has three options with the modules: to create a "wired" connection between an input type module (LCD Screen) and an output type module (knob), to push a POST request to any user-provided URL, or to request a GET request to pull information from any user-provided URL.
These three functionalities together allow a software-focused developer to create their own hardware interactions without ever touching the tedious aspects of hardware (easy hardware prototyping), to use different modules to interact with software applications they have already built (easy hardware interface prototyping), and to use different modules to create a physical representation of events/data from software applications they have already built (easy hardware interface prototyping).
## How we built it
Modware is a very large project with a very big stack: ranging from a fullstack web application with a server and database, to a desktop application performing graph traversal optimization algorithms, all the way down to sending I2C signals and reading analog voltage.
We had to handle the communication protocols between all the levels of modware very carefully. One of the interesting points of communication is using neodymium magnets to conduct power and data from all of the modules to a central microcontroller. Module location is also tracked using a 9-stage voltage divider, a series circuit running through all 9 positions on the modware baseboard.
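The position sensing described above can be sketched as a simple lookup from ADC voltage to board slot. This is an illustrative approximation; the supply voltage and band edges are assumed values, not modware's actual calibration:

```python
from typing import Optional

# Map an ADC voltage reading to one of 9 module positions on the baseboard.
# The divider produces a distinct voltage band per position; these band
# edges are illustrative placeholders, not real calibration values.
NUM_POSITIONS = 9
V_SUPPLY = 5.0  # assumed supply voltage

def position_from_voltage(v_adc: float) -> Optional[int]:
    """Return the 1-indexed board position for a reading, or None if no module."""
    if v_adc < 0.1:  # below the lowest band: no module detected
        return None
    band_width = V_SUPPLY / NUM_POSITIONS
    pos = int(v_adc / band_width) + 1
    return min(pos, NUM_POSITIONS)
```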
All of the data gathered at the central microcontroller is then sent to a local database over wifi to be accessed by the desktop application. Here the desktop application uses case analysis to solve the NP-hard problem of creating optimal wire connections, with proper geometry and distance rendering, as new connections are created, destroyed, and modified by the user. The desktop application also handles all of the API communications logic.
The local database is also synced with a database up in the cloud on Heroku, which uses the gathered information to wrap up APIs in order for the modware hardware to be able to communicate with any software that a user may write both in providing data as well as receiving data.
## Challenges we ran into
The neodymium magnets that we used were plated in nickel, a highly conductive material. However, magnets lose their magnetism when exposed to high heat, and neodymium magnets are no different. So we had to be extremely careful to solder everything correctly on the first try so as not to degrade the magnets. It also proved very difficult to get solid power and data connections across the magnets due to minute differences in laser-cut holes, glue residue, and the like. We had to make both hardware and software changes to ensure the connections behaved reliably.
## Accomplishments that we're proud of
We are proud that we were able to build and integrate such a huge end-to-end project. We also ended up with a fairly robust magnetic interface system by the end of the project, allowing for single or double sized modules of both input and output types to easily interact with the central microcontroller.
## What's next for ModWare
More modules!
|
## Inspiration
We wanted to create a webapp that will help people learn American Sign Language.
## What it does
SignLingo starts by giving the user a phrase to sign. Using the user's webcam, it captures the input and decides whether the user signed the phrase correctly. If so, it moves on to the next phrase; if not, it displays the correct signing video of the word.
## How we built it
We started by downloading and preprocessing a word to ASL video dataset.
We used OpenCV to process the frames from the images and compare the user's input video's frames to the actual signing of the word. We used mediapipe to detect the hand movements and tkinter to build the front-end.
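A rough sketch of how such a frame-by-frame comparison might work. Landmarks here are plain (x, y) tuples rather than MediaPipe objects, and the match threshold is an assumed value:

```python
import math

def frame_distance(frame_a, frame_b):
    """Mean Euclidean distance between corresponding hand landmarks."""
    return sum(math.dist(p, q) for p, q in zip(frame_a, frame_b)) / len(frame_a)

def signs_match(user_frames, reference_frames, threshold=0.15):
    """Compare two equal-length landmark sequences; True if the average
    per-frame distance stays under the (assumed) threshold."""
    distances = [frame_distance(a, b) for a, b in zip(user_frames, reference_frames)]
    return sum(distances) / len(distances) < threshold
```

In practice the two videos would first need temporal alignment (e.g. resampling to the same frame count) before a comparison like this is meaningful.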
## Challenges we ran into
We definitely had a lot of challenges, from downloading compatible packages, incorporating models, and creating a working front-end to display our model.
## Accomplishments that we're proud of
We are so proud that we actually managed to build and submit something. We couldn't build what we had in mind when we started, but we have a working demo which can serve as the first step towards the goal of this project. We had times where we thought we weren't going to be able to submit anything at all, but we pushed through and now are proud that we didn't give up and have a working template.
## What we learned
While working on our project, we learned a lot of things, ranging from ASL grammar to how to incorporate different models to fit our needs.
## What's next for SignLingo
Right now, SignLingo is far away from what we imagined, so the next step would definitely be to take it to the level we first imagined. This will include making our model be able to detect more phrases to a greater accuracy, and improving the design.
|
## Inspiration
Our inspiration stemmed from the desire to empower the deaf and hard of hearing community by providing them with a more inclusive means of communication. We recognized the importance of American Sign Language (ASL) as a primary mode of communication for many individuals and sought to leverage technology to make ASL more accessible in virtual environments.
## What it does
ASL GestureSense Unity is a groundbreaking project that enables real-time recognition and interpretation of American Sign Language (ASL) gestures in virtual environments. It allows users to interact naturally and intuitively with digital applications using ASL, bridging communication gaps and fostering inclusivity.
## How we built it
ASL GestureSense Unity was developed using Unity, Meta Quest 2, and various XR interaction toolkits. Unity provided the foundational environment for our project, allowing us to leverage its powerful rendering capabilities and cross-platform compatibility.
The integration of Meta Quest 2 extended our project's capabilities with its advanced hardware features, including high-resolution displays, precise motion tracking, and hand gesture recognition. We harnessed these features to create immersive experiences that closely mimic real-world interactions.
In conjunction with Unity and Meta Quest 2, we employed XR interaction toolkits such as the XR Interaction Toolkit and VRTK (Virtual Reality Toolkit). These toolkits enabled us to implement complex interaction mechanics, including hand tracking, gesture recognition, and object manipulation. VRTK in particular complemented our development efforts by offering a wide range of tools and utilities designed to streamline the creation of interactive VR applications. With VRTK, we enhanced our project with advanced hand gesture recognition capabilities, enabling precise detection and interpretation of gestures performed by users' left and right hands.
We utilized Unity's native scripting API along with custom shaders and physics simulations to achieve lifelike interactions and visual effects.
Throughout the development cycle, we conducted rigorous testing and optimization to ensure optimal performance across different devices and platforms.
## Challenges we ran into
During our journey, we encountered a variety of challenges that tested our problem-solving skills and resilience. One significant hurdle was grappling with compatibility issues between Meta Quest 2 and both mobile phones and laptops. Ensuring seamless interaction across different devices proved to be a daunting task.
Furthermore, we faced numerous build issues, particularly in configuring project settings and managing packages. Unity editor version compatibility also emerged as a persistent issue, requiring careful navigation and troubleshooting to maintain project stability and functionality.
Moreover, addressing boundary loss in Meta Quest 2 controllers presented its own set of challenges, demanding innovative solutions to ensure reliable tracking and user experience in virtual environments.
Despite these obstacles, our team remained dedicated and resourceful, leveraging our collective expertise to overcome each challenge and propel the project forward.
## Accomplishments that we're proud of
One of our proudest achievements is venturing into virtual reality (VR) and Meta Quest 2 for the first time. Despite the novelty and complexity, we swiftly integrated the hardware with our software in a remarkably short period. Navigating VR hardware and establishing seamless communication demanded dedication and perseverance, expanding our technical prowess.
Witnessing our user interface (UI) on the Meta Quest platform was a significant milestone. For VR novices, this integration validated our adaptability and enthusiasm. It fueled further exploration in VR design and development, marking a transformative journey into uncharted technological terrain.
## What we learned
Our project journey underscored the value of embracing new challenges without hesitation. We learned that perseverance and a willingness to explore the unknown are vital for growth. By confronting difficulties head-on, we gained invaluable insights and experiences that transcend technical skills. Above all, we discovered that pushing beyond our comfort zones fosters resilience and adaptability, essential qualities in the dynamic world of technology.
## What's next for ASL GestureSense Unity
Our vision extends beyond gesture recognition. We aim to incorporate subtitle features for hand gestures, enhancing accessibility and learning opportunities. Additionally, we're committed to developing comprehensive tutorials to facilitate the learning of sign language. By combining innovation with education, we aspire to empower individuals and promote inclusivity on a broader scale.
|
winning
|
## Inspiration
Has your browser ever looked like this?

... or this?

Ours have, *all* the time.
Regardless of who you are, you'll often find yourself working in a browser on not just one task but a variety of tasks. Whether it's classes, projects, financials, research, or personal hobbies -- there are many different, yet predictable, ways in which we open an endless number of tabs for fear of forgetting a chunk of information that may someday be relevant.
Origin aims to revolutionize your personal browsing experience -- one workspace at a time.
## What it does
In a nutshell, Origin uses state-of-the-art **natural language processing** to identify personalized, smart **workspaces**. Each workspace is centered around a topic comprising related tabs from your browsing history. For each workspace, Origin provides your most recently visited tabs pertaining to it (and related future ones), a generated **textual summary** of those websites drawn from all their text, and a **fine-tuned ChatBot** trained on data about that topic, ready to answer specific user questions with citations while maintaining conversation history. The ChatBot not only answers general factual questions (given it is built on a foundation model), but also answers and recalls specific facts found in the URLs/files that the user visits (e.g. linking to a course syllabus).
Origin also provides **semantic search** over resources, and monitors which URLs other people in an organization visit, recommending pertinent ones to the user via a **recommendation system**.
For example, a college student taking a History class and performing ML research on the side would have sets of tabs that would be related to both topics individually. Through its clustering algorithms, Origin would identify the workspaces of "European History" and "Computer Vision", with a dynamic view of pertinent URLs and widgets like semantic search and a chatbot. Upon continuing to browse in either workspace, the workspace itself is dynamically updated to reflect the most recently visited sites and data.
**Target Audience**: Students to significantly improve the education experience and industry workers to improve productivity.
## How we built it

**Languages**: Python ∙ JavaScript ∙ HTML ∙ CSS
**Frameworks and Tools**: Firebase ∙ React.js ∙ Flask ∙ LangChain ∙ OpenAI ∙ HuggingFace
There are a couple of different key engineering modules that this project can be broken down into.
### 1(a). Ingesting Browser Information and Computing Embeddings
We begin by developing a Chrome Extension that automatically scrapes browsing data in a periodic manner (every 3 days) using the Chrome Developer API. From the information we glean, we extract titles of webpages. Then, the webpage titles are passed into a pre-trained Large Language Model (LLM) from Huggingface, from which latent embeddings are generated and persisted through a Firebase database.
### 1(b). Topical Clustering Algorithms and Automatic Cluster Name Inference
Given the URL embeddings, we run K-Means Clustering to identify key topical/activity-related clusters in browsing data and the associated URLs.
We automatically find a description for each cluster by prompt engineering an OpenAI LLM, specifically by providing it the titles of all webpages in the cluster and requesting it to output a simple title describing that cluster (e.g. "Algorithms Course" or "Machine Learning Research").
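The clustering step above can be sketched as follows. This is a minimal K-Means written from scratch for illustration; a real pipeline would typically use scikit-learn's `KMeans` on the HuggingFace embedding vectors:

```python
import math

def kmeans(points, k, iters=50):
    """Tiny K-Means over embedding vectors (lists of floats).
    Deterministic init from the first k points, for illustration only."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # move each centroid to the mean of its assigned points
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids
```

Each resulting label groups URLs whose title embeddings sit close together, and the titles in each group are what get fed to the LLM for cluster naming.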
### 2. Web/Knowledge Scraping
After pulling the user's URLs from the database, we asynchronously scrape through the text on each webpage via Beautiful Soup. This text provides richer context for each page beyond the title and is temporarily cached for use in later algorithms.
### 3. Text Summarization
We split the incoming text of all the web pages using a CharacterTextSplitter to create smaller documents, and then attempt a summarization in a map reduce fashion over these smaller documents using a LangChain summarization chain that increases the ability to maintain broader context while parallelizing workload.
### 4. Fine Tuning a GPT-3 Based ChatBot
The infrastructure for this was built on a recently-made popular open-source Python package called **LangChain** (see <https://github.com/hwchase17/langchain>), a package with the intention of making it easier to build more powerful Language Models by connecting them to external knowledge sources.
We first deal with data ingestion and chunking, before embedding the vectors using OpenAI Embeddings and storing them in a vector store.
To provide the best chatbot possible, we keep track of the history of a user's conversation and inject it into the chatbot during each interaction, while simultaneously looking up relevant information that can be quickly queried from the vector store. The generated prompt is then passed to an OpenAI LLM to interact with the user in a knowledge-aware context.
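A minimal sketch of the history-plus-retrieval prompt assembly. The template wording and input shapes are assumptions, not the project's actual LangChain chain:

```python
def build_prompt(question, history, retrieved_docs):
    """Assemble a knowledge-aware prompt from conversation history and
    documents pulled from the vector store. Template text is illustrative."""
    history_block = "\n".join(f"{role}: {text}" for role, text in history)
    context_block = "\n---\n".join(retrieved_docs)
    return (
        "Answer using the context below; cite sources where possible.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Conversation so far:\n{history_block}\n\n"
        f"User: {question}\nAssistant:"
    )
```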
### 5. Collaborative Filtering-Based Recommendation
Provided that a user does not turn privacy settings on, our collaborative filtering-based recommendation system recommends URLs that other users in the organization have seen that are related to the user's current workspace.
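One simple way to realize such a recommender is to weight unseen URLs by the overlap between users' browsing histories. This Jaccard-similarity sketch is an illustrative stand-in, not the project's actual algorithm:

```python
def recommend_urls(target_history, other_histories, top_n=3):
    """Recommend URLs seen by users with similar browsing histories.
    Similarity is Jaccard overlap of visited-URL sets -- a simplified
    stand-in for a full collaborative-filtering model."""
    target = set(target_history)
    scores = {}
    for history in other_histories:
        other = set(history)
        union = target | other
        sim = len(target & other) / len(union) if union else 0.0
        for url in other - target:  # only recommend URLs the user hasn't seen
            scores[url] = scores.get(url, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```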
### 6. Flask REST API
We expose all of our LLM capabilities, recommendation system, and other data queries for the frontend through a REST API served by Flask. This provides an easy interface between the external vendors (like LangChain, OpenAI, and HuggingFace), our Firebase database, the browser extension, and our React web app.
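A minimal sketch of what one such Flask endpoint might look like. The route name and payload shape are hypothetical, not the project's actual API surface:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory data; the real backend queries Firebase.
WORKSPACES = {"ml-research": {"urls": ["https://arxiv.org"], "summary": "..."}}

@app.route("/workspaces/<name>", methods=["GET"])
def get_workspace(name):
    """Return the stored tabs and summary for one workspace."""
    ws = WORKSPACES.get(name)
    if ws is None:
        return jsonify({"error": "unknown workspace"}), 404
    return jsonify(ws)
```

The React frontend would then fetch this via axios, and the extension could POST browsing data to sibling endpoints in the same app.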
### 7. A Fantastic Frontend
Our frontend is built using the React.js framework. We use axios to interact with our backend server and display the relevant information for each workspace.
## Challenges we ran into
1. We had to deal with our K-Means Clustering algorithm outputting changing cluster means over time as new data is ingested, since the URLs that a user visits changes over time. We had to anchor previous data to the new clusters in a smart way and come up with a clever updating algorithm.
2. We had to employ caching of responses from the external LLMs (like OpenAI/LangChain) to operate under the rate limit. This was challenging, as it required revamping our database infrastructure for caching.
3. Enabling the Chrome extension to speak with our backend server was a challenge, as we had to periodically poll the user's browser history and deal with CORS (Cross-Origin Resource Sharing) errors.
4. We worked modularly which was great for parallelization/efficiency, but it slowed us down when integrating things together for e2e testing.
## Accomplishments that we're proud of
The scope of ways in which we were able to utilize Large Language Models to redefine the antiquated browsing experience and provide knowledge centralization.
This idea was a byproduct of our own experiences in college and high school -- we found ourselves spending significant amounts of time attempting to organize tab clutter systematically.
## What we learned
This project was an incredible learning experience for our team as we took on multiple technically complex challenges to reach our ending solution -- something we all thought that we had a potential to use ourselves.
## What's next for Origin
We believe Origin will become even more powerful at scale, since many users/organizations using the product would improve the ChatBot's ability to answer commonly asked questions, and the recommender system would perform better in aiding users' education or productivity experiences.
|
## Inspiration
It takes over 9 months to train a guide dog. The issue is: dogs can't talk (sorry). What if your favorite furry friend could not only guide you, but also talk to you — and understand exactly where you need to go? This was the founding idea behind Rex. We were inspired to create a fantastically helpful pup, who could lead you anywhere you wanted to go. By combining "classic" robotics with cutting edge AI, we were able to bring this dream to reality. We hope, with this, to facilitate the next generation of accessibility support at malls, hospitals, and even hackathons!
## What it does
Rex is a wayfinding bot with agentic functionality. It listens to your request, whatever it may be, and interprets it as a location in the building you're in. For example, if you told it "I'm feeling hungry", it would lead you to the RBC Oasis tent for some snacks! Along the way, it looks out for obstacles in your path and avoids them if necessary. It is built with a strong focus on accessibility.
## How we built it
**On the software side**, we made extensive use of VoiceFlow and Mappedin. This provided a significant challenge since these services could not efficiently be run on a Raspberry Pi, so we created a mobile-friendly web application that takes care of human-computer interaction (text to speech, speech to text) as well as making API requests.
We used a custom knowledge base on VoiceFlow to tailor the agent to the different rooms in E7 and points of interest at Hack the North. We used VoiceFlow's Supabase integration to log a request in the database to begin a trip (which the Raspberry Pi picks up on). Through the app, we made calls to VoiceFlow's API to progress the user's conversation with the agent.
For Mappedin, we rendered the map on a React Typescript application and made use of the Wayfinding endpoint to retrieve thorough directions. Not only are these directions displayed on the app, but they are sent to the Supabase database which the Raspberry Pi listens to for entries to trigger certain movements.
Supabase was used to connect the Raspberry Pi, VoiceFlow Agent, and Mappedin wayfinding. We chose Supabase due to its realtime subscription abilities and also its compatibility with all three ends of Rex.
**On the firmware side**, our hardware system involved multiple sensors and actuators such as a Gyroscope, accelerometer, ultra-sonic sensor, and 4 DC motors. Challenged with not having access to a GPS or motor encoders, we used the accelerometer to determine our current location with respect to the origin when navigating through some area, and used the Gyroscope to feed data into our PID controller to ensure we can travel straight and make accurate turns. We also used an ultra-sonic sensor for obstacle avoidance. Once we have received the directions to the requested destination from the database, we use these sensors and actuators to ensure you are able to get there safely.
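The PID control loop described above can be sketched as follows. The gains and the toy plant response are illustrative, not Rex's tuned values:

```python
class PID:
    """Minimal PID controller of the kind used with gyroscope feedback
    to hold a straight heading. Gains here are illustrative, not Rex's."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: drive a heading error toward zero.
pid = PID(kp=0.8, ki=0.1, kd=0.05)
heading_error = 10.0  # degrees off course
for _ in range(300):
    correction = pid.update(heading_error, dt=0.1)
    heading_error -= correction * 0.1  # crude stand-in for the robot's response
```

On the real robot the error term would come from the gyroscope's yaw reading, and the correction would be applied as a differential speed across the left and right DC motors.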
**On the mechanical side**, the dog-shaped cover, axles, and ears for this project were 3D printed. The cover was designed to resemble a Dachshund, serving as a protective casing for the wheels, MCU, and circuit components. Two axles were fabricated to hold the assembly together, with the lower axle specifically supporting the main microcontroller. One of the main challenges we encountered was the placement of the battery pack inside the dog’s head, which affected the balance. To counteract this, we added another axle vertically, connecting the base to the top of the robot dog to maintain stability.
## Challenges we ran into
We ran into many significant challenges primarily in the realm of interfacing between all the different services and especially the fact that we had to package everything into a Raspberry Pi with real-world movement. We had to leverage a cloud database to communicate with all the different end devices. Moreover, not having a GPS receiver made us look at other solutions to maintain accuracy during movement. Lastly, some APIs didn't have support in certain languages which forced us to be creative with our implementations.
## Accomplishments that we're proud of
We are proud of the way we were able to interconnect all these systems and put it into a compelling form factor. We truly learned a lot about various areas of engineering.
## What's next for Rex
* Using GPS for more accurate positioning
* Capability to traverse different elevations like stairs
* Increasing the capabilities of the agentic system
* Giving the user the ability to upload designs of their own areas directly within the application
* Making Rex look even cooler!
|
## Inspiration
The inspiration for **Insightbot** came from the growing demand for personalized learning and productivity tools that can support users in managing their tasks and goals more effectively. As students, we often found ourselves struggling to balance multiple responsibilities, while searching for tailored information or assistance at critical moments. We realized that there was a need for a tool that not only helps users stay organized with to-do lists and goals tracking but also provides on-demand learning support based on their personal data, like notes or resources. This inspired us to combine the power of **Retrieval-Augmented Generation (RAG)** with a productivity suite to create a seamless and personalized learning experience, accessible anytime.
## What it does
InsightBot is an intelligent study tool powered by **RAG** that delivers a personalized learning experience, helping users achieve their goals. Our platform offers multiple tools that help users stay organized, track their goals, and receive tailored support to overcome learning challenges, ensuring continuous progress at any time.
## How we built it
Using **NextJS**, we started our project with UI/UX development. We then integrated an AI chatbot using an **OpenAI API key** and extended it with **RAG** by utilizing **OpenAI embeddings**, a **Pinecone** database, and the **Pinecone API**. Then, with **Python**, we developed the backend and built the file-upload, to-do list, and goal-tracker features.
## Challenges we ran into
We encountered an issue integrating **RAG** into the app due to recent updates in OpenAI's embedding documentation. To resolve this, we needed to update the code to align with the latest **OpenAI API** for vector embeddings.
## Accomplishments that we're proud of
We're proud of successfully integrating **RAG** within the chatbot and the goal tracker. The chatbot provides personalized learning assistance based on user-uploaded documents, while the goal tracker provides information on whether the added objectives are SMART (Specific, Measurable, Achievable, Relevant, and Time-bound). Overcoming the challenge of adapting to the updated **OpenAI API** for embeddings was a significant accomplishment. Additionally, we developed a seamless UI/UX offering users an intuitive and productive experience.
## What we learned
Throughout this project, we deepened our understanding of **RAG** and how to effectively integrate it with APIs like **OpenAI** and **Pinecone**. We also learned to adapt quickly to changes in documentation and APIs, improving our problem-solving abilities. On the front-end, we gained valuable experience in creating user-friendly interfaces with **NextJS**, while on the back-end, we sharpened our skills in database management and **API** integration. Additionally, collaboration taught us how to manage our time effectively and prioritize tasks under tight deadlines.
## What's next for Insightbot
We plan to enhance **Insightbot** by allowing users to upload more than one file and by supporting more data types like videos and links. We're also exploring how to automatically create to-do list tasks based on a selected SMART goal using OpenAI's API.
|
partial
|
# **Cough It**
#### COVID-19 Diagnosis at Ease
## Inspiration
The pandemic has nearly crippled nations, and in many countries people are still in lockdown. Over these two years, many innovations have emerged to find effective ways of tackling the issues of COVID-19. Of all the problems, detecting COVID-19 infection has been the hardest so far, as the virus is constantly mutating due to rapid spread.
Just like many others, we started working on an idea to detect COVID-19 with the help of cough samples provided by patients. What makes this app useful is its simplicity and scalability: users can record a cough sample, wait for the results to load, and get an accurate indication of whether they are likely to have COVID-19 or not.
## Objective
The current COVID-19 diagnostic procedures are resource-intensive, expensive and slow. They therefore lack scalability and hamper the efficiency of mass testing during the pandemic. In many cases, even the physical-distancing protocol has to be violated to collect a subject's samples. Disposing of biohazardous samples after diagnosis is also not eco-friendly.
To tackle this, we aim to develop a mobile-based application COVID-19 diagnostic system that:
* provides a fast, safe and user-friendly way to detect COVID-19 infection just from users' cough audio samples
* is accurate enough to be scaled up to cater to a large population, thus eliminating dependency on resource-heavy labs
* makes frequent testing and result tracking efficient, inexpensive and free of human error, thus eliminating economic and logistic barriers, and reducing the workload of medical professionals
Our [proposed CNN](https://dicova2021.github.io/docs/reports/team_Brogrammers_DiCOVA_2021_Challenge_System_Report.pdf) architecture also secured Rank 1 at the [DiCOVA](https://dicova2021.github.io/) Challenge 2021, held by IISc Bangalore researchers, amongst 85 teams spread across the globe. Despite being trained on a small dataset of only 1,040 cough samples, our model reported:
* Accuracy: 94.61%
* Sensitivity: 80% (20% false-negative rate)
* AUC of ROC curve: 87.07% (on blind test set)
## What it does
The working of **Cough It** is simple. Users install the app and tap to open it. The app then asks for permission to use external storage and the microphone. The user can then tap the record button, which takes them to a countdown-timer-like interface. Pressing the play button starts recording a 7-second clip of the user's cough, and upon completion the app navigates to the result screen, predicting the chances of the user having COVID-19.
## How we built it
Our project is divided into three modules:
#### **ML Model**
Our machine learning model (CNN architecture) is trained and deployed using the SageMaker API, which is a part of AWS, to predict positive or negative infection from the pre-processed audio samples. The training data also contains noisy and bad-quality audio samples, so that the model is robust for practical applications.
#### **Android App**
At first, we prepared the wireframe for the app and decided the architecture of the app which we will be using for our case. Then, we worked from the backend part first, so that we can structure our app in proper android MVVM architecture. We constructed all the models, Retrofit Instances and other necessary modules for code separation.
The android app is built in Kotlin and is following MVVM architecture for scalability. The app uses Media Recorder class to record the cough samples of the patient and store them locally. The saved file is then accessed by the android app and converted to byte array and Base64 encoded which is then sent to the web backend through Retrofit.
#### **Web Backend**
The web backend is actually a Node.js application which is deployed on EC2 instance in AWS. We choose this type of architecture for our backend service because we wanted a more reliable connection between our ML model and our Node.js application.
At first, we created a backend server using Node.js and Express.js and deployed it on an AWS EC2 instance. The server receives the audio file in Base64-encoded form from the Android client through a POST request. The file is then converted to a .wav file via a command-line module. After successfully generating the .wav file, we pass it as an argument to the pre-processor, which is a Python script. Then we call the AWS SageMaker API to get the predictions, and the Node.js application sends the predictions back to the Android client.
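The decode-and-write step the server performs can be illustrated in Python (the actual backend does this in Node.js with ffmpeg). This sketch assumes the payload is already raw 16-bit PCM, which sidesteps the format conversion ffmpeg handles:

```python
import base64
import wave

def save_wav_from_base64(b64_audio: str, path: str,
                         channels=1, sample_width=2, framerate=16000):
    """Decode a Base64 payload of raw PCM samples and write it as a .wav
    file for the pre-processor. Assumes raw 16-bit mono PCM; a real
    recording in another container would need ffmpeg-style conversion."""
    pcm = base64.b64decode(b64_audio)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(channels)
        wav.setsampwidth(sample_width)
        wav.setframerate(framerate)
        wav.writeframes(pcm)
```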
## Challenges we ran into
#### **Android**
Initially, in Android, we faced a lot of issues recording a cough sample, as Android provides two recording APIs: MediaRecorder and AudioRecord. As the ML model required a .wav file of the cough sample to pre-process, we had to generate it on-device. This is possible with the AudioRecord class, but it requires heavy customization, and saving a file and writing to it is a tedious and buggy process. So, for the Android counterpart, we used the MediaRecorder class, which handles saving the file and all that boilerplate; we then access that file and send it to our API endpoint, which converts it into a .wav file for the pre-processor.
#### **Web Backend**
On the web backend side, we faced a lot of issues in deploying the ML model and in getting the Node.js application to communicate with it.
Initially, we deployed the Node.js application on AWS Lambda, but processing the audio file required a Python environment as well, so we could not continue with Lambda, as it provided only a Node.js environment. To get the Python environment, we had to use an AWS EC2 instance for deploying the backend server.
Also, while processing the audio file, we had to use the ffmpeg module, which required downgrading the numpy library in Python from the latest version to an older one.
#### **ML Model**
The most difficult challenge for our ML model was getting it deployed so that it could be accessed directly from the Node.js server, which feeds the model the MFCC values for prediction. Due to the complexity of the SageMaker API and its integration with the Node.js application, this was a real challenge for us. In the end, through a lot of documentation and guidance, we were able to deploy the model on SageMaker, and we also tested some sample data through Postman.
## Accomplishments that we're proud of
Through this project, we are proud that we were able to get a real and accurate prediction on real sample data. We successfully queried the ML model hosted on SageMaker, and the prediction was accurate.
It also made us really happy that in such a short amount of time we were able to overcome so many difficulties, get the app and web backend running, and set up the whole system we had planned while maintaining a proper architecture.
## What we learned
Cough It has been a really interesting project to work on. It has so much potential to be one of the best diagnostic tools for COVID-19, which keeps us motivated to keep improving it.
In Android, working with APIs like MediaRecorder has always been difficult for us, but after doing this project, and in Kotlin at that, we feel more confident about building a production-quality Android app. Developing an ML-powered app is also difficult, and we are happy that we finally made it.
In web, we learnt the various scenarios in which EC2 instance can be more reliable than AWS Lambda also running various script files in node.js server is a good lesson to be learnt.
In machine learning, we learnt to deploy the ML model in Sagemaker and after that, how to handle the pre-processing file in various types of environments.
## What's next for Untitled
As of now, our project focuses on our core idea: predicting by analysing the user's sample data. The app is therefore limited to a single user, but we have already planned to add a database for user management and to show users reports of their daily tests and a weekly estimate of their likelihood of COVID-19 based on the diagnosis.
## Final Words
There is a lot of scope for this project, and we don't want to stop innovating. We would like to take our idea to more platforms, and we might also launch the app on the Play Store once everything is stable enough for the general public.
Our hopes for this project are high, and we won't leave it until it reaches perfection.
|
## Inspiration
The RBC challenge pushed us to target the future of the helpdesk. We realized that we could reverse the traditional user to helpdesk to solution pipeline, automating solutions directly to the user with integrations across various services.
## What it does
Cura is an all-in-one solution for companies, automating tasks on their end using insights from their customers, generated across the customer's digital life. As it stands now, Cura is one part back-end automation of processes that would have traditionally required lengthy communication between users and companies, and another part browser extension which intelligently offers access to said automations.
## How we built it
We created a chrome extension using Bootstrap, jQuery, and JavaScript as well as a backend in Flask which manages the application's data. This backend, hosted on the Google Cloud, serves as an API for the front end and develops useful data from a user's activities, pushing them to the user through our chrome extension.
## Challenges we ran into
Every member of our team had to learn various skills from scratch, including how to use Google Cloud, Chrome Extensions, Flask and Flutter. This turned out to be one of the greatest challenges, not only because each team member was building up their knowledge from nothing, but we then had to understand how to seamlessly combine these technologies together.
We pursued various paths that never came to fruition, such as learning how to use Flutter while developing an android app, before realizing that an android app would not cohesively work towards our end goal.
## Accomplishments that we are proud of
We are proud of our ambition and determination when faced with these daunting and complex technologies, as well as the technicalities necessary to pull off the seamless connection of said technologies.
## What's next for Cura
In the future, we imagine Cura to be integrated into the very fabric and infrastructure of the websites themselves, creating a space online that completely eliminates the existence of help desks, saving the consumers time, and the companies money. This would create an internet of preemptive services.
Yesterday's future was an internet of connected things.
Tomorrow's future is an internet of preemptive services that cater to your needs before issues can materialize.
Cura is that future.
## Ivey Challenge
Please consult the Google Drive attachment.
|
## Inspiration
According to a 2015 study in the American Journal of Infection Control, people touch their faces more than 20 times an hour on average. More concerningly, about 44% of the time involves contact with mucous membranes (e.g. eyes, nose, mouth).
With the onset of the COVID-19 pandemic ravaging our population (with more than 300 million current cases according to the WHO), it's vital that we take preventative steps wherever possible to curb the spread of the virus. Health care professionals are urging us to refrain from touching these mucous membranes of ours as these parts of our face essentially act as pathways to the throat and lungs.
## What it does
Our multi-platform application (a python application, and a hardware wearable) acts to make users aware of the frequency they are touching their faces in order for them to consciously avoid doing so in the future. The web app and python script work by detecting whenever the user's hands reach the vicinity of the user's face and tallies the total number of touches over a span of time. It presents the user with their rate of face touches, images of them touching their faces, and compares their rate with a **global average**!
## How we built it
The base of the application (the hands tracking) was built using OpenCV and tkinter to create an intuitive interface for users. The database integration used CockroachDB to persist user login records and their face touching counts. The website was developed in React to showcase our products. The wearable schematic was written up using Fritzing and the code developed on Arduino IDE. By means of a tilt switch, the onboard microcontroller can detect when a user's hand is in an upright position, which typically only occurs when the hand is reaching up to touch the face. The device alerts the wearer via the buzzing of a vibratory motor/buzzer and the flashing of an LED. The emotion detection analysis component was built using the Google Cloud Vision API.
## Challenges we ran into
After deciding to use opencv and deep vision to determine with live footage if a user was touching their face, we came to the unfortunate conclusion that there isn't a lot of high quality trained algorithms for detecting hands, given the variability of what a hand looks like (open, closed, pointed, etc.).
In addition to this, the CockroachDB documentation was out of date/inconsistent which caused the actual implementation to differ from the documentation examples and a lot of debugging.
## Accomplishments that we're proud of
Despite developing on three different OSes we managed to get our application to work on every platform. We are also proud of the multifaceted nature of our product which covers a variety of use cases. Despite being two projects we still managed to finish on time.
To work around the difficulty of detecting overlap between hands and faces directly, we opted to detect the eyes that were visible and determine whether an eye was covered due to hand contact.
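That workaround can be sketched with a simple heuristic: if fewer eyes are visible in the current frame than the user's baseline, count it as the start of a likely face touch. This is an illustrative sketch, not the project's actual detection code; the eye counts would come from an OpenCV Haar-cascade eye detector run per frame.

```python
def count_face_touches(eye_counts_per_frame, baseline_eyes=2):
    """Count distinct occlusion events where the number of visible eyes
    drops below the baseline, suggesting a hand is covering the face."""
    touches = 0
    touching = False
    for eyes in eye_counts_per_frame:
        if eyes < baseline_eyes and not touching:
            touches += 1       # a new occlusion event begins
            touching = True
        elif eyes >= baseline_eyes:
            touching = False   # face fully visible again
    return touches

print(count_face_touches([2, 2, 1, 1, 2, 0, 2]))  # 2 distinct touch events
```

A debounce window (requiring the drop to persist for a few frames) would make this less sensitive to blinks and detector flicker.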
## What we learned
We learned how to use CockroachDB and how it differs from other DBMSes we have used in the past, such as MongoDB and MySQL.
We learned about deep vision, how to use OpenCV with Python to detect certain elements from a live web camera, and how intricate the process of generating Haar-cascade models is.
## What's next for Hands Off
Our next steps would be to increase the accuracy of Hands Off to account for specific edge cases (e.g. touching hair, glasses, etc.) to ensure false touches aren't reported. To make the application more accessible, we would also want to port it to a web app so that it is easily available to everyone. Our use of CockroachDB will help with scaling in the future. With our newfound familiarity with OpenCV, we would like to train our own models to build a more precise and accurate deep vision algorithm that is much better suited to our project's goals.
|
winning
|
## Inspiration
Students often have a hard time finding complementary co-founders for their ventures and ideas, and have limited interaction with students from other universities. Many universities don't even have entrepreneurship centers to help facilitate the matching of co-founders. Furthermore, it is hard to seek validation from a wide range of perspectives on your ideas when your immediate network is just your university peers.
## What it does
VenYard is a gamified platform that keeps users engaged and interested in entrepreneurship while building a community where students can search for co-founders across the world based on complementary skill sets and personas. VenYard’s collaboration features also extend to the ideation process feature where students can seek feedback and validation on their ideas from students beyond their university. We want to give the same access to entrepreneurship and venture building to every student across the world so they can have the tools and support to change the world.
## How we built it
We built VenYard using JS, HTML, CSS, Node.js, MySQL, and a lack of sleep!
## Challenges we ran into
We had several database-related issues with the project submission page and the chat feature on each project dashboard. Furthermore, when clicking on a participant on a project's dashboard, we wanted their profile to be brought up, but we ran into database issues there as well; that is the first problem we hope to fix.
## Accomplishments that we're proud of
For a pair of programmers who have horrible taste in design, we are proud of how this project turned out visually. We are also proud of how we have reached a point in our programming abilities where we are able to turn our ideas into reality!
## What we learned
We were able to advance our knowledge of MySql and Javascript specifically. Aside from that, we were also able to practice pair programming by using the LiveShare extension on VSCode.
## What's next for VenYard
We hope to expand the "Matching" feature by making it so that users can specify more criteria for what they want in the ideal co-founder. Additionally, we probably would have to take a look at the UI and make sure it's user-friendly because there are a few aspects that are still a little clunky. Lastly, the profile search feature needs to be redone because our initial idea of combining search and matching profiles doesn't make sense.
## User Credentials if you do not want to create an account
username: [[email protected]](mailto:[email protected])
password: revant
## Submission Category
Education and Social Good
## Discord Name
revantk16#6733, nicholas#2124
|
## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution
## What it does
It helps developers find projects to work on, and helps project leaders find group members.
By using data from GitHub commits, it can determine what kinds of projects a person is suited for.
## How we built it
We decided on building an app for the web, then chose a graphql, react, redux tech stack.
## Challenges we ran into
The limitations of the GitHub API gave us a lot of trouble. The cap on API calls meant we couldn't fetch all the data we needed. Authentication was hard to implement, as we had to try a number of approaches before one worked. The last challenge was determining how to build a relationship between users and the projects they could be paired with.
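A common way to cope with that API-call cap is to read GitHub's rate-limit response headers and back off until the quota resets. The sketch below is illustrative, not the project's code; it assumes the documented `X-RateLimit-Remaining` and `X-RateLimit-Reset` headers (the latter a Unix timestamp) that GitHub's REST API returns.

```python
import time

def seconds_until_reset(headers, now=None):
    """Given GitHub API response headers, return how long to wait
    before the next request can safely be made."""
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0  # quota left, no need to wait
    reset_at = int(headers.get("X-RateLimit-Reset", str(int(now))))
    return max(0.0, reset_at - now)

# Simulated exhausted-quota response: reset 90 seconds from "now".
wait = seconds_until_reset(
    {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1090"}, now=1000)
print(wait)  # 90.0
```

Caching responses and using conditional requests (ETags) would stretch the quota further for repeated commit-history lookups.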
## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database and the authentication are all ready to show.
## What we learned
We learned that working with third-party APIs can be challenging, as each one brings its own unique constraints.
## What's next for Hackr\_matchr
Scaling up is next: supporting more kinds of projects, building more robust matching algorithms, and handling a higher user capacity.
|
## Inspiration
**Substandard** food at a **5-star** rated restaurant? We feel you! Yelp and TripAdvisor do not tell the whole story: the ratings you see on these websites say very little about how particular dishes at a restaurant actually taste. Therefore, we are here to satisfy your cravings for REAL good dishes.
## What it does
Our app collects user ratings based on specific dishes rather than vague, general experiences about a restaurant. Therefore, our recommendations tell you exactly where GOOD DISHES are. It also allows you to subscribe to interest groups based on your favorite dishes and ensures that you do not miss out on good restaurants and good reviewers.
## How we built it
We developed an Android application with Google Firebase API.
## Challenges we ran into
Learning new stuff in such a short time.
## Accomplishments that we're proud of
We're gonna have a working app!!
## What we learned
UI/UX Design, Frontend Development, Firebase API
## What's next for Culinect
Continuing development especially in regards to supporting communities and location support
|
winning
|
## Inspiration
Our inspiration for AiTC came from our use of various forms of transportation and our close connections with friends in the aviation industry. Seeing firsthand the complexities and challenges that air traffic controllers face, we wanted to create a solution that could alleviate some of the burdens on agents in control towers. AiTC is designed to streamline communication, reduce errors, and enhance the efficiency of air traffic management.
## What it does
AiTC is an AI-driven platform designed to assist air traffic controllers by automating routine radio communication with pilots and providing real-time flight data insights. It leverages advanced speech recognition and natural language processing to analyze ATC-pilot communications, flag potential issues, and ensure that critical information is delivered accurately and on time. The system works in tandem with controllers, acting as a digital assistant to help manage complex airspace efficiently. The long-term goal is to fully automate the ATC communications.
## How we built it
We built AiTC using several powerful tools and technologies. We used Vapi to train the models with our datasets and for real-time flight data integration, providing up-to-the-minute information about flights. We used Deepgram for speech-to-text, converting real-time ATC communications into actionable data. We used OpenAI to interpret and assist with communication, as well as to improve decision-making processes within the control tower. We used Hugging Face datasets of ATC call transcripts and guides to train the AI models, ensuring accurate communication processing.
## Accomplishments that we're proud of
We’re super proud of developing a working prototype that integrates real-time flight data with AI-driven communication tools. The ability of AiTC to accurately process and respond to ATC communications is a major milestone, as is its potential to enhance safety and efficiency in one of the most critical sectors of transportation. We’re also proud of how we were able to incorporate machine learning models into a real-time system without sacrificing performance.
## What we learned
Through this project, we learned the importance of handling real-time data effectively. We also gained valuable experience in the integration of various APIs and the unique challenges of real-time communication systems.
## What's next for AI Traffic Control (AITC)
The next step for AiTC is to improve its scalability and robustness. We plan to expand its ability to handle more complex airspaces, integrate additional datasets for more nuanced decision-making, and further reduce latency in communication. The long-term goal is to fully automate this communication system. We also aim to pilot the system with actual air traffic control teams to gather real-world feedback and refine the tool for broader adoption.
|
## Inspiration
We wanted to solve a unique problem we felt was impacting many people but was not receiving enough attention. Using emerging technology, we implemented neural network models to recognize objects and images and convert them to auditory output.
## What it does
XTS takes an **X** and turns it **T**o **S**peech.
## How we built it
We used PyTorch, Torchvision, and OpenCV using Python. This allowed us to utilize pre-trained convolutional neural network models and region-based convolutional neural network models without investing too much time into training an accurate model, as we had limited time to build this program.
## Challenges we ran into
While attempting to run the Python code, the video rendering and text-to-speech were out of sync, and the frame-by-frame object recognition was limited in speed by our system's graphics processing power and its capacity to run the machine-learning model. We also faced an issue while trying to use our computer's GPU for faster video rendering, which led to long periods of frustration due to backwards incompatibilities between module versions.
## Accomplishments that we're proud of
We are so proud that we were able to implement neural networks as well as implement object detection using Python. We were also happy to be able to test our program with various images and video recordings, and get an accurate output. Lastly we were able to create a sleek user-interface that would be able to integrate our program.
## What we learned
We learned how neural networks function and how to augment the machine learning model including dataset creation. We also learned object detection using Python.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
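The item-extraction step described above can be sketched with a naive keyword pass over the transcript. This is a hedged stand-in for the project's AI-based extraction, not its actual logic; the menu items and the single-size assumption are illustrative only.

```python
def extract_order_items(transcript, menu):
    """Match menu items mentioned in a transcribed utterance and pick up
    a size modifier. A naive stand-in for LLM-based order extraction;
    note it applies one detected size to every matched item."""
    text = transcript.lower()
    sizes = [s for s in ("small", "medium", "large") if s in text]
    items = []
    for item in menu:
        if item in text:
            items.append({"item": item, "size": sizes[0] if sizes else "regular"})
    return items

menu = ["burger", "fries", "milkshake"]
print(extract_order_items("I'd like a large burger and fries please", menu))
```

The LLM approach the project uses handles per-item modifiers and phrasing variation far better than this keyword pass, which is exactly why prompt engineering was worth the effort.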
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
partial
|
## Inspiration
We all feel strongly about a wide variety of issues, but often fail to support organizations fighting for causes close to us. Our inspiration for this project was this feeling: we wanted to find a way to encourage donating to nonprofits in the easiest way. What better way to do this than through something we use everyday-- email.
## What it does
Project #doSomething is an email integration for a "/donate" command on Mixmax (an email enhancing tool). We then offer a variety of nonprofits to donate to, including the Best Friends Animal Society, various animal shelters, the ACLU, LGBTQ+ initiatives, etc.
## How we built it
We built this Mixmax integration with primarily JavaScript (Node.js), HTML/CSS, and lots of coffee.
## Challenges we ran into
We ran into a major issue where we found out that Chrome was blocking the integration, because we didn't have a valid security certificate on our local dev environment. We managed to eventually work around it by modifying Chrome. Another major challenge we ran into was discovering which nonprofits to add, but we quickly found a ton of worthy organizations.
## Accomplishments that we're proud of
We're proud of making a working email integration (with Mixmax) that can be used by anyone. We really hope people can use this to donate, and easily encourage their friends to donate.
## What's next for Project #doSomething
Stretch goal for the future: make it easy to match donations (and track how many donations to match) via a Mixmax integration.
|
## Inspiration
**Read something, do something.** We constantly encounter articles about social and political problems affecting communities all over the world – mass incarceration, the climate emergency, attacks on women's reproductive rights, and countless others. Many people are concerned or outraged by reading about these problems, but don't know how to directly take action to reduce harm and fight for systemic change. **We want to connect users to events, organizations, and communities so they may take action on the issues they care about, based on the articles they are viewing**
## What it does
The Act Now Chrome extension analyzes articles the user is reading. If the article is relevant to a social or political issue or cause, it will display a banner linking the user to an opportunity to directly take action or connect with an organization working to address the issue. For example, someone reading an article about the climate crisis might be prompted with a link to information about the Sunrise Movement's efforts to fight for political action to address the emergency. Someone reading about laws restricting women's reproductive rights might be linked to opportunities to volunteer for Planned Parenthood.
## How we built it
We built the Chrome extension by using a background.js and content.js file to dynamically render a ReactJS app onto any webpage the API identified to contain topics of interest. We built the REST API back end in Python using Django and Django REST Framework. Our API is hosted on Heroku, the chrome app is published in "developer mode" on the chrome app store and consumes this API. We used bitbucket to collaborate with one another and held meetings every 2 - 3 hours to reconvene and engage in discourse about progress or challenges we encountered to keep our team productive.
## Challenges we ran into
Our initial attempts to use sophisticated NLP methods to measure the relevance of an article to a given organization or opportunity for action were not very successful. A simpler method based on keywords turned out to be much more accurate.
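That keyword-based relevance method can be sketched as a simple overlap score. This is an illustrative sketch under assumptions, not the project's actual scoring code; the keywords and threshold would come from each organization's profile.

```python
def relevance_score(article_text, org_keywords):
    """Fraction of an organization's keywords that appear in the article:
    a simpler stand-in for the NLP methods that proved less accurate."""
    words = set(article_text.lower().split())
    hits = sum(1 for kw in org_keywords if kw.lower() in words)
    return hits / len(org_keywords) if org_keywords else 0.0

article = "New report warns the climate crisis is accelerating emissions growth"
score = relevance_score(article, ["climate", "emissions", "wildfire"])
# 2 of 3 keywords match; the banner would show if score clears a threshold.
```

Stemming or lemmatizing both sides (so "emission" matches "emissions") would be a natural next refinement without reintroducing the heavier NLP stack.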
Passing messages to the back-end REST API from the Chrome extension was somewhat tedious as well, especially because the API had to be consumed before the react app was initialized. This resulted in the use of chrome's messaging system and numerous javascript promises.
## Accomplishments that we're proud of
In just one weekend, **we've prototyped a versatile platform that could help motivate and connect thousands of people to take action toward positive social change**. We hope that by connecting people to relevant communities and organizations, based off their viewing of various social topics, that the anxiety, outrage or even mere preoccupation cultivated by such readings may manifest into productive action and encourage people to be better allies and advocates for communities experiencing harm and oppression.
## What we learned
Although some of us had basic experience with Django and building simple Chrome extensions, this project provided new challenges and technologies to learn for all of us. Integrating the Django backend with Heroku and the ReactJS frontend was challenging, along with writing a versatile web scraper to extract article content from any site.
## What's next for Act Now
We plan to create a web interface where organizations and communities can post events, meetups, and actions to our database so that they may be suggested to Act Now users. This update will not only make our applicaiton more dynamic but will further stimulate connection by introducing a completely new group of people to the application: the event hosters. This update would also include spatial and temporal information thus making it easier for users to connect with local organizations and communities.
|
## Inspiration
In the past three months, there have been over 60 current events that have resulted in over 200 deaths and have affected over 2 million people. From natural disasters such as Hurricane Dorian and the wildfires burning in the Amazon to events such as the El Paso shooting that prompt discussions around social issues, these events shook individuals, communities, and nations.
While the opportunities to donate to these causes are endless, there is a disconnect between wanting to donate and following through with the action. In fact, from a survey we collected with 64 responses, 81% of respondents have thought about or wanted to donate to a cause in the last three months, but only 27% actually did. Despite all the different charities and organizations that provide disaster relief and funding for social causes, respondents cite “confusion,” “too many options”, “trust” and “too lazy to do own research” as reasons for why they haven’t donated. These reasons are precisely why we chose to create ++Giving.
## What it does
++Giving is a platform that empowers individuals to donate to causes that speak to them, bridging the gap between people who wish to donate and the organizations that can help. On the ++Giving web application, a user is presented with recently occurring natural disasters and social issues that they can donate to, a description for each event, and where the event is located on a map. ++Giving decreases the amount of time that individuals have to spend doing their own research on what to give to and how. Through the web application, users can also follow our donate link to organization sites, allowing the user to give money to the cause.
## How I built it
++Giving is hosted in an Azure Web Application built with a .NET Core 2.1 framework, and a user interface implemented with HTML and CSS. Our front end application allows people to view the world's trending issues, learn about these issues through the top trending news articles, and directly link to ways to donate to these causes while staying all within our application. The data storage for our application is configured in four data tables in an Azure SQL Database. In order to keep our databases up to date with issues that people care about, we use Time Triggered Azure functions to continually update our databases. These Serverless Functions rely on APIs like NASA's EONET API to obtain all currently occurring natural disasters, Google-News to obtain trending political and social issues, and Charity Navigator to match the right charities to these issues to empower our users to give to the causes that speak to them.
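The EONET-polling step inside those time-triggered functions can be sketched as a filter over the API's event payload. This is a hedged illustration: the payload shape below mirrors NASA's EONET format (events carrying category dicts and a `closed` date), but the field handling is an assumption, not the project's code.

```python
def open_events_by_category(eonet_response, category):
    """Filter an EONET-style payload down to still-open events
    in a single category, returning their titles."""
    matches = []
    for event in eonet_response.get("events", []):
        cats = {c["title"] for c in event.get("categories", [])}
        if category in cats and event.get("closed") is None:
            matches.append(event["title"])
    return matches

sample = {"events": [
    {"title": "Hurricane Dorian", "closed": None,
     "categories": [{"title": "Severe Storms"}]},
    {"title": "Amazon Fires", "closed": "2019-10-01",
     "categories": [{"title": "Wildfires"}]},
]}
print(open_events_by_category(sample, "Severe Storms"))  # ['Hurricane Dorian']
```

The Azure function would upsert these titles into the SQL tables and then look up matching charities via the Charity Navigator API.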
## Challenges I ran into **and**
## Accomplishments that I'm proud of **and**
## What I learned
We are really proud that we all did something we haven’t done before at this hackathon, whether it was coding in a new language, using an API we weren’t familiar with, or learning new concepts. In addition, we are glad that we took the time to work together and take feedback from one another during a planning stage. It made sure that everyone was on the same page and had a mutual understanding of what we were trying to accomplish and build. However, we were challenged by data flow because we were dealing with so many APIs. We learned that it’s important to communicate, work as a team, and appreciate all viewpoints. Taking advantage of team members’ different skill sets and ideas helps move the project forward.
## What's next for ++Giving
During this hackathon we were able to implement the core functionality of our idea: making giving to charities more accessible. Moving forward, we have a couple of ideas to further improve our platform. Firstly, we would like to implement direct payment functionality, so that a user can donate to their selected charity without needing to leave our site. Next, we would like to implement a round-up system to further incentivize our user base to donate to important causes. When linked to a debit card, our site would offer the option to "round up" purchases, similar to the investment platform Acorns. If one of our users spends $13.43 at Trader Joe's, we will send them a notification asking if they would like to round up that purchase to a flat $14 and donate the $0.57 to one of their favorited charities.
|
partial
|
## Motivation
Our motivation was a grand piano that has sat in our project lab at SFU for the past 2 years. The piano belonged to a friend of Richard Kwok's grandfather and was being converted into a player piano that reads piano scrolls. We had an excessive number of piano scrolls acting as door stops, and we wanted to hear these songs from the early 20th century. We decided to pursue a method to digitally convert the piano scrolls into a digital copy of each song.
The system scrolls through the entire piano scroll and uses openCV to convert the scroll markings to individual notes. The array of notes are converted in near real time to an MIDI file that can be played once complete.
## Technology
The scrolling of the piano scroll used a DC motor, controlled by an Arduino via an H-bridge, wrapped around a Microsoft water bottle. The notes were recorded using OpenCV on a Raspberry Pi 3, programmed in Python. The result was a matrix representing each frame of notes from the Raspberry Pi camera. This array was exported to a MIDI file that could then be played.
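The conversion from that per-frame note matrix to MIDI data can be sketched as follows. This is an illustrative reduction, not the project's actual code: it produces note-on/note-off events (the intermediate form one would hand to a MIDI writer) from a binary matrix where 1 means a hole was detected at that key position.

```python
def frames_to_note_events(frames):
    """Convert per-frame note rows (1 = hole detected at that key position)
    into (frame_index, note_index, 'on'/'off') events by tracking which
    notes are currently sounding."""
    events = []
    active = set()
    for t, frame in enumerate(frames):
        for note, lit in enumerate(frame):
            if lit and note not in active:
                events.append((t, note, "on"))
                active.add(note)
            elif not lit and note in active:
                events.append((t, note, "off"))
                active.discard(note)
    return events

frames = [[0, 1], [1, 1], [0, 0]]
print(frames_to_note_events(frames))
# [(0, 1, 'on'), (1, 0, 'on'), (2, 0, 'off'), (2, 1, 'off')]
```

Frame indices map to MIDI tick times via the scroll speed, which is where the musician's intended tempo (encoded in the scroll's design) comes back in.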
## Challenges we ran into
The openCV required a calibration method to assure accurate image recognition.
The external environment lighting conditions added extra complexity in the image recognition process.
The lack of musical background among the members, and the need to decipher the piano scroll to identify the appropriate note keys, was an additional challenge.
The image recognition of the notes had to be dynamic for different orientations due to variable camera positions.
## Accomplishments that we're proud of
The device works and plays back the digitized music.
The design process was very fluid with minimal set backs.
The back-end processes were very well-designed with minimal fluids.
Richard won best use of a sponsor technology in a technical pickup line.
## What we learned
We learned how piano scrolls were designed and how they were written based on the musician's desired tempo.
Beginner musical knowledge relating to notes, keys and pitches. We learned about using OpenCV for image processing, and honed our Python skills while scripting the controller for our hack.
As we chose to do a hardware hack, we also learned about the applied use of circuit design, h-bridges (L293D chip), power management, autoCAD tools and rapid prototyping, friction reduction through bearings, and the importance of sheave alignment in belt-drive-like systems. We also were exposed to a variety of sensors for encoding, including laser emitters, infrared pickups, and light sensors, as well as PWM and GPIO control via an embedded system.
The environment allowed us to network with and get lots of feedback from sponsors - many were interested to hear about our piano project and wanted to weigh in with advice.
## What's next for Piano Men
Live playback of the system
|
## **Inspiration:**
Our inspiration stemmed from the realization that the pinnacle of innovation occurs at the intersection of deep curiosity and an expansive space to explore one's imagination. Recognizing the barriers faced by learners—particularly their inability to gain real-time, personalized, and contextualized support—we envisioned a solution that would empower anyone, anywhere to seamlessly pursue their inherent curiosity and desire to learn.
## **What it does:**
Our platform is a revolutionary step forward in the realm of AI-assisted learning. It integrates advanced AI technologies with intuitive human-computer interactions to enhance the context a generative AI model can work within. By analyzing screen content—be it text, graphics, or diagrams—and amalgamating it with the user's audio explanation, our platform grasps a nuanced understanding of the user's specific pain points. Imagine a learner pointing at a perplexing diagram while voicing out their doubts; our system swiftly responds by offering immediate clarifications, both verbally and with on-screen annotations.
## **How we built it**:
We architected a Flask-based backend, creating RESTful APIs to seamlessly interface with user input and machine learning models. Integration of Google's Speech-to-Text enabled the transcription of users' learning preferences, and the incorporation of the Mathpix API facilitated image content extraction. Harnessing the prowess of the GPT-4 model, we've been able to produce contextually rich textual and audio feedback based on captured screen content and stored user data. For frontend fluidity, audio responses were encoded into base64 format, ensuring efficient playback without unnecessary re-renders.
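The audio-as-base64 trick can be sketched like this (function names are ours; the real backend wraps this in a Flask route): the generated speech bytes ride along in the same JSON response as the text, so the frontend can decode and play them without a second request or a re-render.

```python
import base64
import json

def audio_payload(audio_bytes: bytes, text: str) -> str:
    """Bundle generated audio with its transcript in one JSON response body."""
    return json.dumps({
        "text": text,
        "audio_b64": base64.b64encode(audio_bytes).decode("ascii"),
    })

def decode_audio(payload: str) -> bytes:
    """Frontend side: recover the raw audio bytes for playback."""
    return base64.b64decode(json.loads(payload)["audio_b64"])
```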
## **Challenges we ran into**:
Scaling the model to accommodate diverse learning scenarios, especially in the broad fields of maths and chemistry, was a notable challenge. Ensuring the accuracy of content extraction and effectively translating that into meaningful AI feedback required meticulous fine-tuning.
## **Accomplishments that we're proud of**:
Successfully building a digital platform that not only deciphers image and audio content but also produces high-utility, real-time feedback stands out as a paramount achievement. This platform has the potential to revolutionize how learners interact with digital content, breaking down barriers of confusion in real time. One aspect of our implementation that separates us from other approaches is that we allow the user to perform ICL (In-Context Learning), something that not many large language models let the user do seamlessly.
## **What we learned**:
We learned the immense value of integrating multiple AI technologies for a holistic user experience. The project also reinforced the importance of continuous feedback loops in learning and the transformative potential of merging generative AI models with real-time user input.
|
## Inspiration
We wanted to create something like HackerTyper, where you can fake being a hacker. Instead, we created *Harmony Hacker*, so you can fake being a professional piano player with confidence.
## What it does
The piano keyboard in our project plays the notes of the MIDI song you select through its speaker, with a twist: no matter which key you press, it only triggers the **correct** next note from the song. A touchscreen HDMI LCD provides a switch between "hacker" mode and "normal piano" mode, a song list with the triggers we input, a reset function that replays the song, a "quit" function to stop the code, and a "slowness" bar to change the song speed. An ultrasonic sensor integrated with the song-playing system changes the song volume depending on the distance of the player's body from the keyboard.
## How we built it
Our project consisted of:
* MIDI keyboard
* Raspberry Pi 4
* 7" touchscreen HDMI LCD by SunFounder
* Breadboard
* Jumper wires
* Resistors
* Ultrasonic sensor
* 5V 3A Power Supply
Initially, we block local input from the keyboard by switching its local MIDI control off. Next, we detect the MIDI signals through the Raspberry Pi. While reading a MIDI song, we intercept the user's input and play the actual song (if hacker mode is turned on) by passing our output to the MIDI input of the piano. The ultrasonic sensor is used to change the volume of the piano: the closer you are, the louder it becomes.
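The core "hacker mode" logic and the distance-to-volume mapping might look like this sketch (class name and distance thresholds are our assumptions, not the actual code): whichever key is pressed, the next correct note of the song is returned, and the ultrasonic distance is mapped linearly onto MIDI volume.

```python
from dataclasses import dataclass

@dataclass
class HackerMode:
    """Remaps any pressed key to the next correct note of the loaded song."""
    song: list        # ordered MIDI note numbers of the selected song
    position: int = 0

    def on_key_press(self, pressed_note: int) -> int:
        # Ignore which key was actually pressed; emit the song's next note.
        note = self.song[self.position % len(self.song)]
        self.position += 1
        return note

def volume_from_distance(cm: float, near: float = 10.0, far: float = 100.0) -> int:
    """Map ultrasonic distance to MIDI volume (0-127): the closer, the louder."""
    cm = max(near, min(far, cm))
    return round(127 * (far - cm) / (far - near))
```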
## Challenges we ran into
Some major challenges we ran into were:
* deciding on a project idea
* gathering all the hardware components that we wanted
* handling stray notes and combined key presses using "chunking"
* making the ultrasonic sensor measurements accurate
## Accomplishments that we're proud of
* learned how to interface with a MIDI keyboard with Python over USB
* a minimum viable project built well under the deadline
* distance-based volume addon.
## What we learned
The technologies our team learned were Tkinter and the mido Python library (for MIDI parsing). Furthermore, we learned to interface with the ultrasonic sensor through the Raspberry Pi and receive its input signals in code.
## What's next for Harmony Hacker
We would like to add an addressable RGB LED strip to show the user which note on the keyboard will be played next. It would also be great to add a frequency sensor that changes the colour and animation speed of the RGB LED strip with the music.
|
## Inspiration
We wanted to create a fun and easy way for students to sell items on campus.
## What it does
Pusheen Sell allows users to browse items currently for sale and sell items of their own. Not only is the app quick and easy to use, but it also gives more information about the items on sale through short videos.
## How we built it
The app is built using Xcode 8 and Swift 3, and the backend is handled via Firebase.
## Challenges we ran into
We started out with a much more ambitious project than we could complete in 24 hours, so we had to make some hard decisions and cut features in order to make a working app.
## Accomplishments that we're proud of
We're happy that we were able to get the video processing working.
## What we learned
Proper git management is important, even with a small team in the same place.
## What's next for Pusheen Sell
We would like to add a few more features that we couldn't complete in 24 hours, and eventually get the app in hands of students to start user testing. Eventually, this would be a great app to be used on campuses across the world!
|
## Inspiration
As college students, we didn't have much, so we thought about how we could change that. One way was by being smarter about the way we take care of our unused items. We all shared our items with friends on campus, but we felt there could be better ways to do this. However, we were truly inspired after one of our team members and close friends, Harish, an Ecological Biology major, informed us about the sheer magnitude of trash and pollution in the oceans and the surrounding environments. As a National Ocean Science Bowl champion, Harish was able to educate the rest of the team on how areas such as the Great Pacific Garbage Patch affect wildlife and oceanic ecosystems, and the effects we face from them on a daily basis. With our passion for technology, we wanted to work on an impactful project that caters to a true need for sharing that many of us have, while focusing on maintaining sustainability.
## What it does
The application allows users to list products they want to share with the community and to request items they need. If one user sees a request they can fulfill or an offer they find appealing, they start a chat with the other user through the app to arrange the exchange. Furthermore, the app sorts and filters listings by location to make it convenient for users. Finally, through the chat messaging, we want to use the platform to build community among students.
## How we built it
We first focused on wireframing and coming up with ideas. We utilized brainstorming sessions to come up with unique ideas and then split our team based on our different skill sets. Our front-end team worked on wireframes and designs using Figma. Our back-end team worked on a whiteboard, coming up with the system design of our application server, and together the front-end and back-end teams came up with the schemas for the database.
We utilized the MERN technical stack in order to build this. Our front-end uses ReactJS in order to build the web app, our back-end utilizes ExpressJS and NodeJS, while our database utilizes MongoDB.
We also took plenty of advice and notes, not only from mentors throughout the competition, but also from our fellow hackers. We went around asking for others' advice on our web app and our final product to truly flesh out the best product that we could. We had a customer-centric mindset throughout the full creation process, and we wanted to make sure that what we were building addressed a true need and was truly wanted by people. Taking advice from these various sources helped us frame our product and come up with features.
## Challenges we ran into
Integration challenges were some of the toughest for us. Making sure that the backend and frontend could communicate well was really tough, so to minimize the difficulties, we designed the schemas for our databases together and made sure we were all on the same page. Working together this way helped us stay truly efficient.
## Accomplishments that we're proud of
We’re really proud of our user interface of the product. We spent quite a lot of time working on the design (through Figma) before creating it in React, so we really wanted to make sure that the product that we are showing is visually appealing.
Furthermore, our backend is also something we are extremely proud of. Our backend system has many unconventional design choices (for example, passing common IDs throughout the system) in order to avoid more costly backend operations. Overall, latency, cost, and ease of use for our front-end team were big considerations when designing the backend system.
## What we learned
We learned new technical skills and new soft skills. Overall in our technical skills, our team became much stronger with using the MERN frameworks. Our front-end team learned so many new skills and components through React and our back-end team learned so much about Express. Overall, we also learned quite a lot about working as a team and integrating the front end with the back-end, improving our software engineering skills
The soft skills that we learned about are how we should be presenting a product idea and product implementation. We worked quite a lot on our video and our final presentation to the judges and after speaking with hackers and mentors alike, we were able to use the collective wisdom that we gained in order to really feel that we created a video that shows truly our interest in designing important products with true social impact. Overall, we felt that we were able to convey our passion for building social impact and sustainability products.
## What's next for SustainaSwap
We’re looking to deploy the app in local communities as we’re at the point of deployment currently. We know there exists a clear demand for this in college towns, so we’ll first be starting off at our local campus of Philadelphia. Also, after speaking with many Harvard and MIT students on campus, we feel that Cambridge will also benefit, so we will shortly launch in the Boston/Cambridge area.
We will be looking to expand to other college towns and use this to help to work on the scalability of the product. We ideally, also want to push for the ideas of sustainability, so we would want to potentially use the platform (if it grows large enough) to host fundraisers and fundraising activities to give back in order to fight climate change.
We essentially want to expand city by city, community by community, because this app also focuses quite a lot on community and we want to build a community-centric platform. We want this platform to just build tight-knit communities within cities that can connect people with their neighbors while also promoting sustainability.
|
## The Vision
Nowadays, a lot of universities, including McGill, have facebook groups aimed at exchange and sale of second hand goods from student-to-student.
Our vision was to create a mobile e-commerce platform that streamlines this process, making it easy to bid, add items, and connect sellers and buyers, alongside a 'Best-Offer' system to allow for a form of pseudo-auction.
We also, similarly to Uber, only display a buyer's rating when they make an offer, but not their identity. We wait until a seller confirms a potential buyer as the 'Confirmed option' to connect both parties through their mobile phones (there's a direct link to calling each other in their respective sections of the app). Upon successful completion of a purchase or exchange, both parties are asked to rate each other based on their experience, to be displayed when they sell or buy other goods on the platform.
## The App
Fully coded using Angular 2, and the Ionic 2 (beta) framework has allowed us to create native applications which run on iOS and Android, and are even able to use hardware functionalities (i.e. taking pictures in-app).
This has also allowed us to design a beautiful looking app which adheres to both Apple's Human Interface Guidelines and Google's Material design (respectively), with a maximum of code cohesion (around 95% of the code is shared amongst both platforms).
We opted to use FireBase as a BaaS for this app, due to its extensive JS API and ability to create two-way 'live' data-binding between the app and the backend, making this app essentially a real-time marketplace.
## The Future
Things we wish we had time to do:
1. Implement Smooch API to collect Seller and Buyer data before connecting them through a live chat system.
2. Add some form of e-payment system (would be particularity useful for event tickets)
3. Finish implementing the user rating system.
|
## Inspiration
With an ever-increasing rate of crime, and internet deception on the rise, Cyber fraud has become one of the premier methods of theft across the world. From frivolous scams like phishing attempts, to the occasional Nigerian prince who wants to give you his fortune, it's all too susceptible for the common person to fall in the hands of an online predator. With this project, I attempted to amend this situation, beginning by focusing on the aspect of document verification and credentialization.
## What does it do?
SignRecord is an advanced platform hosted on the Inter-Planetary File System (IPFS, a peer-to-peer hypermedia protocol built with the intention of making the web faster, safer, and more open). Connected with secure DocuSign REST APIs, and using the power of Ethereum smart contracts to store data, SignRecord acts as an open-source, widespread ledger of public information and the average user's information. By allowing individuals to host their data, media, and credentials on the ledger, they are given the safety and security of a proven blockchain verifying their identity, protecting them not only from identity fraud but also from potential wrongdoers.
## How I built it
SignRecord is a responsive web app backed by the robust power of NodeJS and Hyperledger. With authentication handled by MongoDB, routing by Express, a front-end built with a combination of React and Pug, and asynchronous requests through Promises, it offers a fool-proof solution.
Not only that, but I've also built and incorporated my own external API, so that other fellow developers can easily integrate my platform directly into their applications.
## Challenges I ran into
The real question should be: what challenge didn't I run into? From simple mistakes like missing a semicolon to significant headaches figuring out deprecated dependencies and packages, this development was nothing short of a roller coaster.
## Accomplishments that I'm proud of
Of all of the things that I'm proud of, my usage of the Ethereum Blockchain, DocuSign API's, and the collective UI/UX of my application stand out as the most significant achievements I made in this short 36-hour period. I'm especially proud, that I was able to accomplish what I could, alone.
## What I learned
Like any good project, I learnt more than I could have imagined. From learning how to use advanced MetaMask libraries to building my very own API, this journey was nothing short of a race with hurdles at every mark.
## What's next for SignRecord
With the support of fantastic mentors, a great hacking community, and the fantastic sponsors, I hope to be able to continue expanding my platform in the near future.
|
## **1st Place!**
## Inspiration
Sign language is a universal language which allows many individuals to communicate with one another. Many people around the world live with hearing loss or mutism and rely on sign language to communicate. Even those who do not experience these conditions may still need sign language in certain circumstances. We plan to expand our company worldwide to fill the lack of a virtual sign language learning tool that is accessible to everyone, everywhere, for free.
## What it does
Here at SignSpeak, we create an encouraging learning environment that provides computer vision sign language tests to track progression and to perfect sign language skills. The UI is built around simplicity and usability. Our teaching system works by engaging the user in a lesson, then a progression test. Each lesson includes the material that will be tested in its quiz. Once the user completes the lesson, they are redirected to the quiz, which can end in either failure or success. Passing the quiz congratulates the user and unlocks the next lesson; failing means retaking the lesson until the quiz is passed.
## How we built it
We built SignSpeak on React with Next.js. For our sign recognition, we used TensorFlow with a MediaPipe model to detect points on the hand, which were then compared with preassigned gestures.
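A simplified sketch of that comparison step (function name, landmark format, and threshold are our assumptions): detected hand landmarks, as normalised (x, y) points, are matched against preassigned reference gestures by average point distance.

```python
import math

def match_gesture(landmarks, references, threshold=0.1):
    """Return the name of the closest reference gesture, or None if nothing
    is within the threshold. Landmarks are normalised (x, y) pairs."""
    best_name, best_score = None, float("inf")
    for name, ref in references.items():
        # Average Euclidean distance between corresponding landmark points.
        score = sum(math.dist(a, b) for a, b in zip(landmarks, ref)) / len(ref)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= threshold else None
```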
## Challenges we ran into
We ran into multiple roadblocks mainly regarding our misunderstandings of Next.js.
## Accomplishments that we're proud of
We are proud that we managed to come up with so many ideas in such little time.
## What we learned
Throughout the event, we participated in many workshops and created many connections. We engaged in many conversations that involved certain bugs and issues that others were having and learned from their experience using javascript and react. Additionally, throughout the workshops, we learned about the blockchain and entrepreneurship connections to coding for the overall benefit of the hackathon.
## What's next for SignSpeak
SignSpeak is seeking to continue services for teaching people sign language. In the future, we plan to implement a suggestion box for our users to report problems with our program so that we can work quickly to fix them. Additionally, we will collaborate with and improve companies that provide audible phone navigation for blind people.
|
## Inspiration
Every year thousands of companies are compromised and the authentication information of many users is stolen. The consequences of such breaches are immense and damage the trust between individuals and organizations. There is significant overhead for an organization to secure its authentication methods, and usability is often sacrificed. Users must trust organizations with their info, and organizations must trust that their methods of storage are secure. We believe this presents a significant trust and usability problem. What if we could leverage the blockchain to do this authentication trustlessly between parties? Using challenge and response, we'd be able to avoid passwords completely. Furthermore, this system of permissions could be extended from the digital world to physical assets, i.e. giving somebody the privilege to unlock your door.
## What it does
Entities can assign and manage privileges for resources they possess by publishing that a certain user (with an associated public key) has access to a resource on the ethereum blockchain (this can be temporary or perpetual). During authentication, entities validate that users hold the private keys to their associated public keys using challenge and response. A user needs only to keep his private key and remember his username.
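The challenge-and-response idea can be illustrated with a toy Schnorr identification protocol (demo-sized numbers, deliberately insecure; the real system validates Ethereum key pairs): the user proves knowledge of the private key x behind public key y without ever revealing it.

```python
import secrets

P = 2039   # small safe prime, P = 2*Q + 1 -- demo size only, not secure
Q = 1019   # order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1     # private key
    return x, pow(G, x, P)               # (private, public) pair

def prove_start():
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(G, r, P)               # keep r secret, send commitment t

def prove_finish(x, r, c):
    return (r + c * x) % Q               # response to the verifier's challenge c

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when s = r + c*x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

A verifier who picks a fresh random challenge c each time learns nothing about x, yet only the key holder can produce a valid s.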
## How we built it
We designed and deployed a smart contract on the Ropsten Ethereum testnet to trustlessly manage permissions. Users submit transactions and read from this contract as a final authority for access control. An android app is used to showcase real-life challenge and response and how it can be used to validate privileges trustfully between devices. A web app is also developed to show the ease of setup for an individual user. AWS Lambda is used to query the blockchain through trusted apis, this may be adjusted by any user to their desired confidence level. A physical lock with an NFC reader was to be used to showcase privilege transfer, but the NFC reader was broken.
## Challenges we ran into
The NFC reader we used was broken so we were unable to demonstrate one potential application. Since Solidity (Ethereum EVM language) is relatively new there was not an abundance of documentation available when we ran into issues sending and validating transactions, although we eventually fixed these issues.
## Accomplishments that we're proud of
Trustless authentication on the blockchain, IoT integration, Ethereum transactions greatly simplified for users (they need not know how it works), and login with just a username.
## What we learned
We learned a lot about the quirks of Ethereum and developing around it. Solidity still has a long way to go regarding developer documentation. The latency of ethereum transactions, scalability of ethereum, and transaction fees on the network present limiting factors towards future adoption, though we have demonstrated that such a trustless authentication scheme using the blockchain is indeed secure and easy to use.
## What's next for Keychain
Use a different chain with faster transaction times and lower fees, or even roll our own chain optimized for Keychain. More digital and IoT demos demonstrating ease of use.
|
## Inspiration
IoT devices are extremely useful; however, they come at a high price. A key example of this is a smart fridge, which can cost thousands of dollars. Although many people can't afford this type of luxury, they can still greatly benefit from it. A smart fridge can eliminate food waste by keeping an inventory of your food and its freshness. If you don't know what to do with leftover food, a smart fridge can suggest recipes that use what you have in your fridge. This can easily expand to guiding your food consumption and shopping choices.
## What it does
FridgeSight offers a cheap, practical solution for those not ready to invest in a smart fridge. It can mount on any existing fridge as a touch interface and camera. By logging what you put in, take out, and use from your fridge, FridgeSight can deliver the very same benefits that smart fridges provide. It scans barcodes of packaged products and classifies produce and other unprocessed foods. FridgeSight's companion mobile app displays your food inventory, gives shopping suggestions based on your past behavior, and offers recipes that utilize what you currently have.
## How we built it
The IoT device is powered by Android Things with a Raspberry Pi 3. A camera and touchscreen display serve as peripherals for the user. FridgeSight scans UPC barcodes in front of it with the Google Mobile Vision API and cross references them with the UPCItemdb API in order to get the product's name and image. It also can classify produce and other unpackaged products with the Google Cloud Vision API. From there, the IoT device uploads this data to its Hasura backend.
FridgeSight's mobile app is built with Expo and React Native, allowing it to dynamically display information from Hasura. Besides using the data to display inventory and log absences, it pulls from the Food2Fork API in order to suggest recipes. Together, the IoT device and mobile app have the capability to exceed the functionality of a modern smart fridge.
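Stripped of the hardware and external services, the logging loop reduces to a sketch like this (the catalog dict is a hypothetical stand-in for the UPCItemdb / Cloud Vision lookups):

```python
def lookup_product(upc: str) -> str:
    """Stand-in for the UPC lookup; the real device queries an external API."""
    catalog = {"012345678905": "milk", "036000291452": "yogurt"}  # hypothetical
    return catalog.get(upc, "unknown item")

class Fridge:
    """In-memory inventory keyed by product name, updated on each scan."""
    def __init__(self):
        self.items = {}  # name -> count

    def put_in(self, upc: str):
        name = lookup_product(upc)
        self.items[name] = self.items.get(name, 0) + 1

    def take_out(self, upc: str):
        name = lookup_product(upc)
        if self.items.get(name, 0) > 0:
            self.items[name] -= 1
```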
## Challenges we ran into
Android Things provides a flexible environment for an IoT device. However, we had difficulty with initial configuration. At the very start, we had to reflash the device with an older OS because the latest version wasn't able to connect to WiFi networks. Our setup would also experience power issues, where the camera took too much power and shut down the entire system. In order to avoid this, we had to convert from video streaming to repeated image captures. In general, there was little documentation on communicating with the Raspberry Pi camera.
## Accomplishments that we're proud of
Concurring with Android Things's philosophy, we are proud of giving accessibility to previously unaffordable IoT devices. We're also proud of integrating a multitude of APIs across different fields in order to solve this issue.
## What we learned
This was our first time programming with Android Things, Expo, Hasura, and Google Cloud - platforms that we are excited to use in the future.
## What's next for FridgeSight
We've only scratched the surface for what the FridgeSight technology is capable of. Our current system, without any hardware modifications, can notify you when food is about to expire or hasn't been touched recently. Based on your activity, it can conveniently analyze your diet and provide healthier eating suggestions. FridgeSight can also be used for cabinets and other kitchen inventories. In the future, a large FridgeSight community would be able to push the platform with crowd-trained neural networks, easily surpassing standalone IoT kitchenware. There is a lot of potential in FridgeSight, and we hope to use PennApps as a way forward.
|
## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
|
## Inspiration
Almost 2.2 million tonnes of edible food is discarded each year in Canada alone, resulting in over 17 billion dollars in waste. A significant portion of this is due to the simple fact that keeping track of expiry dates for the wide range of groceries we buy is, put simply, a huge task. While brainstorming ideas to automate the management of these expiry dates, discussion came to the increasingly outdated usage of barcodes for Universal Product Codes (UPCs); when the largest QR codes can store [thousands of characters](https://stackoverflow.com/questions/12764334/qr-code-max-char-length), why use so much space for a 12 digit number?
By building upon existing standards and the everyday technology in our pockets, we're proud to present **poBop**: our answer to food waste in homes.
## What it does
Users are able to scan the barcodes on their products, and enter the expiration date written on the packaging. This information is securely stored under their account, which keeps track of what the user has in their pantry. When products have expired, the app triggers a notification. As a proof of concept, we have also made several QR codes which show how the same UPC codes can be encoded alongside expiration dates in a similar amount of space, simplifying this scanning process.
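Once the expiration dates are stored, the notification check itself is simple; a minimal sketch (our function name, with a plain dict standing in for the user's stored pantry):

```python
from datetime import date

def expired_items(pantry, today):
    """Return names of pantry items whose expiry date has passed -- each one
    would trigger a notification in the app."""
    return [name for name, expiry in pantry.items() if expiry < today]
```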
In addition to this expiration date tracking, the app can also recommend recipes based on what is currently in the user's pantry. If no recipes are possible with the provided list, it will instead recommend the recipes with the fewest missing ingredients.
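That fallback ranking can be sketched as follows (the shape of the recipe records is our assumption, not Spoonacular's actual response format): recipes are ordered by how many ingredients the pantry is missing, so fully cookable recipes come first.

```python
def rank_recipes(recipes, pantry):
    """Sort recipes by the number of ingredients missing from the pantry."""
    have = set(pantry)

    def missing(recipe):
        return len(set(recipe["ingredients"]) - have)

    return sorted(recipes, key=missing)  # stable sort: ties keep input order
```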
## How we built it
The UI was made with native Android, with the exception of the scanning view, which made use of the [code scanner library](https://github.com/yuriy-budiyev/code-scanner).
Storage and hashing/authentication were taken care of by [MongoDB](https://www.mongodb.com/) and [Bcrypt](https://github.com/pyca/bcrypt/) respectively.
Finally, with regard to food, we used [Buycott](https://www.buycott.com/)'s API for UPC lookup and [Spoonacular](https://spoonacular.com/) to look for valid recipes.
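For illustration, the salted, slow-hash pattern that Bcrypt provides can be sketched with Python's standard library (`hashlib.pbkdf2_hmac` standing in for Bcrypt here; this is an assumption for demonstration, not the app's actual code):

```python
# Illustrative stand-in for Bcrypt using only the standard library:
# a random salt plus a deliberately slow key-derivation function.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```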
## Challenges we ran into
As there is no official API or publicly accessible database for UPCs, we had to test and compare multiple APIs before determining Buycott had the best support for Canadian markets. This caused some issues, as a lot of processing was needed to connect product information between the two food APIs.
Additionally, our decision to completely segregate the Flask server and apps occasionally resulted in delays when prioritizing which endpoints should be written first, how they should be structured etc.
## Accomplishments that we're proud of
We're very proud that we were able to use technology to do something about an issue that bothers everyone on our team on a frequent basis (something something cooking hard) and that also has large-scale impacts well beyond our own kitchens. Some of our members were also very new to working with REST APIs, but coded well nonetheless.
## What we learned
Flask is a very easy-to-use Python library for creating endpoints and working with REST APIs. We learned to work with endpoints and web requests using Java/Android. We also learned how to use local databases in Android applications.
## What's next for poBop
We thought of a number of features that unfortunately didn't make the final cut, such as tracking the nutrition of the items you have stored and consumed. By tracking nutrition information, the app could also act as a nutrient planner for the day. We also considered a shopping list feature, where you could quickly add items you are missing from a recipe or use it to help track nutrition for your upcoming weeks.
We were also hoping to allow the user to add more details to the items they are storing, such as notes for what they plan on using it for or the quantity of items.
Some smaller features that didn't quite make it included getting a notification when the expiry date is getting close, and data sharing for people sharing a household. We were also thinking about creating a web application as well, so that it would be more widely available.
Finally, we strongly encourage you, if you are able and willing, to consider donating to food waste reduction initiatives and hunger elimination charities.
One of the many ways to get started can be found here:
<https://rescuefood.ca/>
<https://secondharvest.ca/>
<https://www.cityharvest.org/>
# Love,
# FSq x ANMOL
|
winning
|
# FireNet: Wildfire Risk Forecaster
Try our web-based risk forecasting tool [here](https://kkraoj.users.earthengine.app/view/wildfire-danger).
## Why forecast wildfire risk?
Climate change is exacerbating wildfire likelihood and severity in the western USA. For instance, of the wildfires recorded in the state of California, USA over the last century, 12 of the largest, 13 of the most destructive, and 7 of the deadliest occurred during the last decade. Wildfires emit massive amounts of harmful particulate matter, posing a direct threat to humans. The severity of wildfires' effects, combined with the probability of them worsening in the future due to climate change, prompts an urgent need to understand, estimate, and forecast wildfire risk. Our FireNet, a deep-learning-powered system, helps improve the forecasting of wildfire risk through the following "PR. S^3" solutions:
1. P--Prevent smoke inhalation : Wildfire smoke is known to cause serious respiratory disorders. Wildfire exposure ratings can help better forecast the severity and probability of wildfire smoke.
2. R--Reallocate disaster relief resources : By knowing when and where fires are likely to occur, fire managers can better allocate their resources to be well-prepared when wildfires do occur.
3. S--Save lives : High resolution maps of wildfire risk, if prepared periodically, can help evacuate people in advance of an occurrence of wildfire.
4. S--Sustainability : Wildfires not only cause damages to human society, but also decrease species diversity and increase greenhouse gas emissions.
5. S--Social Equality: People from different backgrounds, such as wealth, urban proximity, race, and ethnicity, are unequally exposed to wildfire risk. Poor communities and Black, Hispanic, or Native American communities experience greater vulnerability to wildfires than others. We are dedicated to eliminating the "unnatural" humanitarian crises and consequences of natural disasters such as wildfires.
## What does our tool do?
**FireNet is a rapid risk forecasting tool to characterize the three fundamental components of risk--hazard, exposure, and vulnerability--by combining high resolution remote sensing imagery with deep learning.** The system integrates microwave remote sensing (Sentinel-1) and optical remote sensing (Landsat-8) information over 3 months to produce accurate estimates of fuel conditions (cross-validated r-squared = 0.63) exceeding previous methods by a wide margin (r-squared = 0.3). Moreover, by linking the Long Short Term Memory (LSTM) outputs for fuel conditions with data on human settlements and population density, FireNet indicates the aggregate risk imposed by wildfires on humans. FireNet is hosted as a Google Earth Engine App.
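As a rough illustration of the LSTM component, one forward pass over a per-pixel time series can be sketched as follows. This is a minimal sketch with toy shapes and random, untrained weights, standing in for the trained FireNet model:

```python
# Toy LSTM forward pass: fold a multi-month sequence of per-pixel
# remote-sensing features into a single fuel-condition estimate.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_hid = 12, 6, 8          # e.g. 12 composites, 6 bands/indices (assumed)

# Randomly initialised gate weights; a real model would learn these.
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq):
    h = np.zeros(n_hid)
    c = np.zeros(n_hid)
    for x in x_seq:                          # one step per time slice
        z = W @ np.concatenate([x, h]) + b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                    # cell state carries memory across months
        h = o * np.tanh(c)
    return h                                 # summary of the whole sequence

x_seq = rng.standard_normal((T, n_in))
features = lstm_forward(x_seq)
fuel_estimate = sigmoid(features.sum())      # toy regression head in (0, 1)
print(features.shape)
```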
## Wait, but what is wildfire risk exactly?
Wildfire risk depends on 3 fundamental quantities - hazard, exposure, and vulnerability. It can be assessed by combining the extent of its exposure (e.g., how likely is a fire), severity of the hazard (e.g., how big a fire can occur) and the magnitude of its effects (e.g., how much property could be destroyed). Assessing wildfire risk presents several challenges due to uncertainty in fuel flammability and ignition potential. Due to complex physiological mechanisms, estimating fuel flammability and ignition potential has not been possible on landscape scales. Current wildfire risk estimation methods are simply insufficient to accurately and efficiently estimate wildfire risk because of the lack of direct flammability-associated inputs.
## Is FireNet better than other risk estimation methods out there?
**Absolutely!** FireNet is the first wildfire risk forecasting system that operates at 250 m spatial resolution. The best wildfire danger models, like the National Fire Danger Rating System, operate at a resolution of approximately 25 km. This was possible because of the system's unique use of microwave as well as optical remote sensing. Moreover, existing systems have lower accuracy in predicting fuel flammability (r-squared = 0.3) than our system (r-squared = 0.63).
## This sounds like yet another deep learning fad without any explainability?
FireNet is fully transparent and explainable. We anticipated the need for explainability for FireNet and thus did NOT use deep learning to estimate fire risk. The deep learning model (LSTM) merely estimates fuel flammability (how wet or dry the forests are) using supervised learning on remote sensing. The flammability estimates are then combined with fuel availability (observed) and urban proximity (census data) to produce a transparent estimate.
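The transparent combination step might look like the sketch below. The exact weighting FireNet uses isn't given here, so the simple product of normalised components is an assumption for illustration:

```python
# Hypothetical sketch of combining the three risk components transparently.
def wildfire_risk(flammability, fuel_availability, urban_proximity):
    """All inputs in [0, 1]; returns a risk score in [0, 1]."""
    for v in (flammability, fuel_availability, urban_proximity):
        if not 0.0 <= v <= 1.0:
            raise ValueError("components must be normalised to [0, 1]")
    return flammability * fuel_availability * urban_proximity

# A dry, fuel-rich pixel near a town scores far higher than a wet remote one.
print(wildfire_risk(0.9, 0.8, 0.7))   # high risk, approximately 0.504
print(wildfire_risk(0.2, 0.8, 0.1))   # low risk
```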
## I am sold! Where can I checkout FireNet?
[Here](https://kkraoj.users.earthengine.app/view/wildfire-danger)
## What did we learn from this project?
1. Training a deep learning model is child's play compared to the amount of work required in data cleaning.
2. Data cleaning is child's play compared to the amount of time needed to design an app with UX in mind.
|
## Inspiration
The increasing frequency and severity of natural disasters such as wildfires, floods, and hurricanes have created a pressing need for reliable, real-time information. Families, NGOs, emergency first responders, and government agencies often struggle to access trustworthy updates quickly, leading to delays in response and aid. Inspired by the need to streamline and verify information during crises, we developed Disasteraid.ai to provide concise, accurate, and timely updates.
## What it does
Disasteraid.ai is an AI-powered platform that consolidates trustworthy live updates about ongoing crises and packages them into summarized info-bites. Users can ask specific questions about crises like the New Mexico Wildfires and Floods to gain detailed insights. The platform also features an interactive map with pin drops indicating the precise coordinates of events, enhancing situational awareness for families, NGOs, emergency first responders, and government agencies.
## How we built it
1. Data Collection: We queried You.com to gather URLs and data on the latest developments concerning specific crises.
2. Information Extraction: We extracted critical information from these sources and combined it with data gathered through Retrieval-Augmented Generation (RAG).
3. AI Processing: The compiled information was input into Anthropic AI's Claude 3.5 model.
4. Output Generation: The AI model produced concise summaries and answers to user queries, alongside generating pin drops on the map to indicate event locations.
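The four steps above can be sketched end-to-end with stubbed stand-ins for the real services (You.com search and Claude 3.5). Every function name, data shape, and coordinate below is a placeholder assumption, not the actual DisasterAid code:

```python
# Stubbed pipeline sketch: collect -> extract -> summarize -> pins.
def search_sources(crisis):                      # step 1: data collection (stub)
    return [{"url": "https://example.org/a", "text": f"Update on {crisis}..."}]

def extract_facts(sources):                      # step 2: information extraction
    return [s["text"] for s in sources]

def summarize(facts, question):                  # steps 3-4: LLM call, stubbed
    return {"summary": " ".join(facts)[:200],
            "pins": [{"lat": 35.69, "lon": -105.94}]}  # hypothetical coordinates

facts = extract_facts(search_sources("New Mexico wildfires"))
result = summarize(facts, "Where are the active fires?")
print(result["summary"], len(result["pins"]))
```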
## Challenges we ran into
1. Data Verification: Ensuring the accuracy and trustworthiness of the data collected from multiple sources was a significant challenge.
2. Real-Time Processing: Developing a system capable of processing and summarizing information in real-time requires sophisticated algorithms and infrastructure.
3. User Interface: Creating an intuitive and user-friendly interface that allows users to easily access and interpret information presented by the platform.
## Accomplishments that we're proud of
1. Accurate Summarization: Successfully integrating AI to produce reliable and concise summaries of complex crisis situations.
2. Interactive Mapping: Developing a dynamic map feature that provides real-time location data, enhancing the usability and utility of the platform.
3. Broad Utility: Creating a versatile tool that serves diverse user groups, from families seeking safety information to emergency responders coordinating relief efforts.
## What we learned
1. Importance of Reliable Data: The critical need for accurate, real-time data in disaster management and the complexities involved in verifying information from various sources.
2. AI Capabilities: The potential and limitations of AI in processing and summarizing vast amounts of information quickly and accurately.
3. User Needs: Insights into the specific needs of different user groups during a crisis, allowing us to tailor our platform to better serve these needs.
## What's next for DisasterAid.ai
1. Enhanced Data Sources: Expanding our data sources to include more real-time feeds and integrating social media analytics for even faster updates.
2. Advanced AI Models: Continuously improving our AI models to enhance the accuracy and depth of our summaries and responses.
3. User Feedback Integration: Implementing feedback loops to gather user input and refine the platform's functionality and user interface.
4. Partnerships: Building partnerships with more emergency services and NGOs to broaden the reach and impact of Disasteraid.ai.
5. Scalability: Scaling our infrastructure to handle larger volumes of data and more simultaneous users during large-scale crises.
|
## Inspiration
* More than 4.5 million acres of land have burned on the West Coast in the past month alone
* Experts say fires will worsen in the years to come as climate change spikes temperatures and disrupts precipitation patterns
* Thousands of families have been and will continue to be displaced by these disasters
## What it does
* When a wildfire strikes, knowing where there are safe places to go can bring much-needed calm in times of peril
* Mildfire is a tool designed to identify higher-risk areas with deep learning analysis of satellite data to keep people and their families out of danger
* Users can place pins at the locations of themselves or people in distress
* Users can mark locations of fires in real time
* Deep learning-based treetop detection indicates areas at higher risk of forest fire
* A heatmap shows safe and dangerous zones and can facilitate smarter decision making
## How I built it
* The user makes a GET request w/ a latitude/longitude value, which is handled in real time, hosted on Google Cloud Functions
* The request triggers a function that grabs satellite data in adjacent tiles from the Google Maps Static API
* Detects trees w/ RGB data from satellite imagery using deep-learning neural networks trained on existing tree canopy and vegetation data ("DeepForest", Weinstein, et al. 2019)
* Generates a heat map from longitude/latitude, flammability radius, and confidence from the ML model
* Maps public pins, broadcasts distress and first-responder notifications in real time
* Simple, dynamic web interface
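One way the heat map step might look (an assumed sketch, not the Mildfire code): each detection contributes a Gaussian bump scaled by the model confidence and flammability radius listed above.

```python
# Hypothetical heat map from (x, y, radius, confidence) detections.
import numpy as np

def heatmap(detections, size=64):
    ys, xs = np.mgrid[0:size, 0:size]
    grid = np.zeros((size, size))
    for x, y, radius, conf in detections:
        # Gaussian bump centred on the detection, wider for larger radii.
        grid += conf * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * radius ** 2))
    # Normalise to [0, 1] so the map reads as relative danger.
    return grid / grid.max() if grid.max() > 0 else grid

hm = heatmap([(20, 20, 5, 0.9), (45, 40, 8, 0.6)])
print(hm.shape)
```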
## Challenges I ran into
We completely scrapped the mobile app halfway through the hack and had to switch to a web app.
## Accomplishments that I'm proud of
Used a lottt of GCP and learned a lot about it. Also almost finished the web app despite starting on it so late. ML model is also very accurate and useful.
## What I learned
A lot of GCP and ML and Flutter. Very fun experience overall!
## What's next for Mildfire
Finish the mobile and web app
|
partial
|
## Inspiration
In a world saturated with information and misinformation, the upcoming 2024 presidential election stands as a testament to our need for clarity and truth. This is where Kaleido steps in. Inspired by the vision of bringing light to the unseen, our mission goes beyond merely summarizing news content. We dive deeper, revealing not just what you're reading, but also what you're missing out on. In addition, by assessing bias and sentiment across related articles, Kaleido provides a fuller, more balanced view of every story.
Our approach is grounded in the belief that understanding the full spectrum of information is crucial in navigating the complexities of today's world. Kaleido is not just a tool; it's a movement towards informed, critical thinking and a beacon for those who seek to understand beyond the surface.
## What Kaleido Does
Kaleido is a Chrome extension designed to seamlessly integrate into your browsing and reading experience. Kaleido works in the background as you explore news articles, leveraging vector embedding search technology to analyze the content of your current article. After identifying similar articles discovered by other users through embeddings, Kaleido offers a unique comparative analysis of the article at hand.
The core functionality of Kaleido is twofold:
1. **Comparative Analysis**: Kaleido enables users to compare the bias and sentiment of the current article with a wide array of other similar articles identified by its vector embedding search. This feature allows for an in-depth understanding of where the article stands, in bias and sentiment, within a broader spectrum of perspectives and analyses.
2. **Idea Aggregation and Analysis**: The second key feature distills the essence of an article into multiple points or ideas, and each point is then embedded in vector space. This process constructs a vast network or a "superset" of ideas shared across articles, identified as clusters of similar thoughts in the vector-embedded space. Through this approach of embedding points across articles, Kaleido surfaces significant, overarching takeaways from these groups of articles, offering a comprehensive view that goes beyond the surface level.
Additionally, Kaleido aggregates crucial insights by comparing the focal points of an article against others, enriching the user's understanding with enhanced data on embeddings, sentiment, and bias. This not only broadens the perspective of readers but also deepens their engagement with content, fostering a more informed and critical approach to information consumption.
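The idea-clustering step can be illustrated with a toy example. The 3-dimensional "embeddings" and the similarity threshold below are assumptions standing in for real model outputs:

```python
# Toy version of grouping idea embeddings by cosine similarity.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

ideas = {
    "tax cut announced":   np.array([0.9, 0.1, 0.0]),
    "tax reduction plan":  np.array([0.8, 0.2, 0.1]),
    "local sports result": np.array([0.0, 0.1, 0.9]),
}

# Pair up ideas whose similarity clears a threshold: the "clusters of
# similar thoughts" in the vector-embedded space.
names = list(ideas)
pairs = [(a, b, cosine(ideas[a], ideas[b]))
         for i, a in enumerate(names) for b in names[i + 1:]]
similar = [(a, b) for a, b, s in pairs if s > 0.9]
print(similar)
```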
## Why We Built Kaleido
Since our team's convergence at TreeHacks, the driving force behind Kaleido has been a deep-seated interest in education and politics, with a spotlight on the impending 2024 election. This particular moment in time underscores the paramount importance of clarity, truth, and context in media consumption. Our decision to bring Kaleido to life stemmed from a desire to make a meaningful impact within the media space, guided by the principle that the quality of input fundamentally shapes the quality of output.
In crafting Kaleido, our ambition was to elevate the caliber and context of the information we consume. We recognized that in an era where information is both weapon and tool, enhancing the input quality—by providing a more nuanced, comprehensive understanding of the news—can significantly influence our perceptions, decisions, and, ultimately, our societal output. Our hope with Kaleido was to optimize this input process, fostering a more informed, discerning, and engaged populace, especially in the context of critical events like elections, where the stakes are incredibly high.
## How We Built Kaleido
From the start, we knew that building Kaleido would be a complex adventure with a lot of moving parts. In fact, the project ended up being particularly complicated, especially with its use of multiple embeddings and clustering. Although the embedding work seemed daunting at first, we were very lucky to have sponsored technology that significantly streamlined our development process.
Unified Database Solution: Perhaps one of the most significant achievements in our development process was the integration of a vector database that allowed us to consolidate our data storage needs into a single, versatile database, provided by InterSystems IRIS Database. This eliminated the necessity for multiple databases, streamlining our data management and enhancing the efficiency of our operations.
InterSystems' multi-paradigm database capabilities allowed us not only to store vector information efficiently, but also to leverage SQL for our database needs. This dual use case was crucial for managing the vector data for our embedding and clustering functionalities. The ability to conduct vector embedding search and store data in the same database was an advantage whose importance cannot be overstated.
Front-End and Back-End Synergy: The architectural foundation of Kaleido was a harmonious blend of front-end and back-end technologies. Our Chrome extension, developed using Plasma and React, was intricately designed to offer a user-friendly interface and a responsive user experience. The backend, powered by Bun, facilitated a REST API in a monorepo setup, enabling seamless type sharing with the front-end and ensuring a cohesive development environment.
API Development: To facilitate seamless communication between our Chrome extension and the backend, we opted for Bun with Elysia. Elysia is a highly ergonomic web framework, with an API like express, for building backend servers, but specifically designed to run in the Bun runtime.
While Bun, unfortunately, does not fully support a Turborepo monorepo setup, we were still able to use Bun for everything else, because Bun is both a package manager and a runtime: by using pnpm as the package manager, we could keep Bun as the runtime. This choice proved instrumental in developing a high-performance, type-safe API. Elysia, being a drop-in replacement for Express, offered us the ease of building our API with enhanced efficiency and reliability, ensuring our backend infrastructure was robust and scalable.
## Challenges we ran into
From our initial ideation, we realized quickly that our application was nowhere near as simple as we initially thought and we brainstormed a lot to think and address these complexities before coding.
One significant hurdle was the process of acquiring and scraping articles, a task made increasingly difficult by the recent rise of anti-scraping measures implemented by many websites.
However, the true test of our resolve was the spontaneous decision to incorporate clustering into Kaleido. This idea, born from the brainstorming sessions aimed at addressing our project's growing complexity, introduced a new level of challenge since nobody on our team had any familiarity with how to do it. Achieving proficiency required not just technical acumen but a willingness to dive into uncharted waters.
## Accomplishments that we're proud of
Similar to the previous sections about the challenges we ran into, we're quite proud of the challenges that we have overcome. Particularly, during our brainstorming session, we went from a half-baked idea with very little development into one which had many moving parts and complexity, and all thanks to a few hours of brainstorming that we set aside at the very beginning of the weekend. In addition, we are particularly happy about how we had spontaneously stumbled upon the idea of clustering—until then, nobody in our group had done it before, but it just felt like the right word describing what we wanted to do. Finally, we're very happy with how the logo and the user interface turned out. It turned out to be far better than we ever would have imagined.
## What we learned
The importance of brainstorming and thinking things through before implementation. Although our team was formed relatively late, we were able to avoid a lot of bumps by taking an hour at the beginning to think and brainstorm and concretely write down our implementations before coding them.
Additionally, we really enjoyed engaging with new technologies and learning how to cluster!
## Conclusion
Developing Kaleido was a pleasure, and our team really enjoyed working together, learning new technologies, and making new friends.
Now, we hope you will join us in our quest to illuminate the unseen, enhance your reading experience, and equip you with the insights needed to face the misinformation challenge head-on. Together, let's create a more informed society ready to make educated decisions for the future.
|
## Inspiration
The three of us believe that our worldview comes from what we read. Online news articles serve as that engine, and for something as crucial as learning about current events, an all-encompassing worldview is not so accessible. Those new to politics and just entering the discourse may perceive an extreme partisan view on breaking news to be the party's general take; on the flip side, those with entrenched radicalized views miss out on having productive conversations. Information is meant to be shared, and perspectives from journals big and small should be heard.
## What it does
WorldView is a Google Chrome extension that activates whenever someone is on a news article. The extension describes the overall sentiment of the article, describes "clusters" of other articles discussing the topic of interest, and provides a summary of each article. A similarity/dissimilarity score is displayed between pairs of articles so readers can read content with a different focus.
## How we built it
Development was broken into three components: scraping, NLP processing + API, and Chrome extension development. Scraping involved using Selenium, BS4, DiffBot (an API that scrapes text from websites and sanitizes it), and Google Cloud Platform's Custom Search API to extract similar documents from the web. NLP processing involved NLTK and the k-prototypes clustering algorithm. The Chrome extension was built with React, which talked to a Flask API. The Flask server is hosted on an AWS EC2 instance.
## Challenges we ran into
Scraping: Getting enough documents that match the original article was a challenge because of the rate limiting of the GCP API. NLP Processing: one challenge here was determining metrics for clustering a batch of documents. Sentiment scores + top keywords were used, but more robust metrics could have been developed for more accurate clusters. Chrome extension: Figuring out the layout of the graph representing clusters was difficult, as the library used required an unusual way of stating coordinates and edge links. Flask API: One challenge in the API construction was figuring out relative imports.
## Accomplishments that we're proud of
Scraping: Recursively discovering similar documents based on repeatedly searching up headline of an original article. NLP Processing: Able to quickly get a similarity matrix for a set of documents.
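That similarity matrix can be sketched with plain bag-of-words cosine similarity, a standard-library stand-in for the NLTK-based features rather than the actual WorldView code:

```python
# Bag-of-words cosine similarity matrix over a small document set.
from collections import Counter
import math

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["wildfire spreads across the state",
        "the wildfire spreads quickly",
        "election results announced today"]
bags = [Counter(d.split()) for d in docs]
matrix = [[round(cosine(a, b), 2) for b in bags] for a in bags]
for row in matrix:
    print(row)
```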
## What we learned
Learned a lot about data wrangling and shaping for front-end and backend scraping.
## What's next for WorldView
Explore the possibility of letting those unable to bypass publishers' paywalls still get insights on different perspectives.
|
# butternut
## `buh·tr·nuht` -- `bot or not?`
Is what you're reading online written by a human, or AI? Do the facts hold up? `butternut` is a chrome extension that leverages state-of-the-art text generation models *to combat* state-of-the-art text generation.
## Inspiration
Misinformation spreads like wildfire these days, and it is only aggravated by AI-generated text and articles. We wanted to help fight back.
## What it does
Butternut is a chrome extension that analyzes text to determine how likely it is that a given article is AI-generated.
## How to install
1. Clone this repository.
2. Open your Chrome Extensions
3. Drag the `src` folder into the extensions page.
## Usage
1. Open a webpage or a news article you are interested in.
2. Select a piece of text you are interested in.
3. Navigate to the Butternut extension and click on it.
3.1 The text should be auto copied into the input area.
(you could also manually copy and paste text there)
3.2 Click on "Analyze".
4. After a brief delay, the result will show up.
5. Click on "More Details" for further analysis and breakdown of the text.
6. "Search More Articles" will do a quick google search of the pasted text.
## How it works
Butternut is built off the GLTR paper <https://arxiv.org/abs/1906.04043>. It takes any text input and then finds out what a text generation model *would've* predicted at each word/token. This array of every possible prediction and its associated probability is cross-referenced with the input text to determine the 'rank' of each token in the text: where on the list of possible predictions the actual token appeared.
Text whose tokens consistently sit near the top of the prediction list is more likely to be AI-generated, because current AI text generation models work by selecting the words/tokens that have the highest probability given the words before them. Human-written text, on the other hand, tends to have more variety.
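The rank idea can be made concrete with a toy next-token model. The hand-written probability table below is purely illustrative, not GLTR or CTRL:

```python
# Toy rank computation: rank 0 means the token was the model's top prediction.
def toy_model(context):
    """Return a next-token probability table for the given context (assumed)."""
    tables = {
        ("the",): {"cat": 0.5, "dog": 0.3, "aardvark": 0.2},
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "philosophized": 0.1},
    }
    return tables.get(tuple(context), {"<unk>": 1.0})

def token_ranks(tokens):
    ranks = []
    for i in range(1, len(tokens)):
        probs = toy_model(tokens[:i])
        ordering = sorted(probs, key=probs.get, reverse=True)
        ranks.append(ordering.index(tokens[i]) if tokens[i] in ordering
                     else len(ordering))
    return ranks

# Low ranks everywhere -> predictable, more AI-like; high ranks -> human-like.
print(token_ranks(["the", "cat", "sat"]))   # [0, 0]
print(token_ranks(["the", "aardvark"]))     # [2]
```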
Here are some screenshots of butternut in action with some different texts. Green highlighting means predictable while yellow and red mean unlikely and more unlikely, respectively.
Example of human-generated text:

Example of GPT text:

This was all wrapped up in a simple Flask API for use in a chrome extension.
For more details on how GLTR works please check out their paper. It's a good read. <https://arxiv.org/abs/1906.04043>
## Tech Stack Choices
Two backends are defined in the [butternut backend repo](https://github.com/btrnt/butternut_backend). The salesforce CTRL model is used for butternut.
1. GPT-2: GPT-2 is a well-known general purpose text generation model and is included in the GLTR team's [demo repo](https://github.com/HendrikStrobelt/detecting-fake-text)
2. Salesforce CTRL: [Salesforce CTRL](https://github.com/salesforce/ctrl) (1.6 billion parameters) is bigger than all GPT-2 variants (117 million - 1.5 billion parameters) and is purpose-built for data generation. A custom backend was written for it.
CTRL was selected for this project because it is trained on an especially large dataset, meaning it has a larger knowledge base to draw from to discriminate between AI- and human-written texts. This, combined with its greater complexity, enables butternut to stay a step ahead of AI text generators.
## Design Decisions
* Used approachable soft colours to create a warm approach towards news and data
* Used a colour legend to assist users in interpreting the highlighting
## Challenges we ran into
* Deciding how to best represent the data
* How to design a good interface that *invites* people to fact check instead of being scared of it
* How to best calculate the overall score given a tricky rank distribution
## Accomplishments that we're proud of
* Making stuff accessible: implementing a paper in such a way to make it useful **in under 24 hours!**
## What we learned
* Using CTRL
* How simple it is to make an API with Flask
* How to make a chrome extension
* Lots about NLP!
## What's next?
Butternut may be extended to improve on its fact-checking abilities
* Text sentiment analysis for fact checking
* Updated backends with more powerful text prediction models
* Perspective analysis & showing other perspectives on the same topic
Made with care by:

```
// our team:
{
'group_member_0': [brian chen](https://github.com/ihasdapie),
'group_member_1': [trung bui](https://github.com/imqt),
'group_member_2': [vivian wi](https://github.com/vvnwu),
'group_member_3': [hans sy](https://github.com/hanssy130)
}
```
Github links:
[butternut frontend](https://github.com/btrnt/butternut)
[butternut backend](https://github.com/btrnt/butternut_backend)
|
partial
|
## Inspiration
We wanted to pioneer the use of computationally intensive image processing and machine learning algorithms for use in low resource robotic or embedded devices by leveraging cloud computing.
## What it does
CloudChaser (or "Chase" for short) allows the user to input custom objects for Chase to track. To do this, Chase will first rotate counter-clockwise until the object comes into the field of view of the front-facing camera, then it will continue in the direction of the object, continually updating its orientation.
## How we built it
"Chase" was built with four continuous rotation servo motors mounted onto our custom-modeled 3D-printed chassis. Chase's front-facing camera was built using a Raspberry Pi 3 camera mounted onto a custom 3D-printed camera mount. The four motors and the camera are controlled by the Raspberry Pi 3B, which streams video to and receives driving instructions from our cloud GPU server through TCP sockets. We interpret this cloud data using YOLO (our object recognition library), which is connected through another TCP socket to our cloud-based parser script, which interprets the data and tells the robot which direction to move.
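The parser's final step, turning a detection into a driving instruction, might look something like the sketch below. Names and thresholds are assumptions, not the actual CloudChaser code:

```python
# Map the detected object's bounding-box centre to a driving instruction.
def drive_command(box_center_x, frame_width, deadband=0.1):
    """Return 'left', 'right', or 'forward' from a detection's box centre."""
    offset = box_center_x / frame_width - 0.5    # -0.5 (far left) .. 0.5 (far right)
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return "forward"                             # close enough to centred

print(drive_command(100, 640))   # object far left -> turn left
print(drive_command(320, 640))   # centred -> drive forward
print(drive_command(600, 640))   # far right -> turn right
```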
## Challenges we ran into
The first challenge we ran into was designing the layout and model for the robot chassis. Because the print for the chassis was going to take 12 hours, we had to make sure we had the perfect dimensions on the very first try, so we took calipers to the motors, dug through the data sheets, and made test mounts to ensure we nailed the print.
The next challenge was setting up the TCP socket connections and developing our software such that it could accept data from multiple different sources in real time. We ended up solving the connection timing issue by using a service called cam2web to stream the webcam to a URL instead of through TCP, allowing us to not have to queue up the data on our server.
The biggest challenge by far, however, was dealing with the camera latency. We wanted the camera to be as close to live as possible, so we moved all possible processing to the cloud and none to the Pi, but since the Raspbian operating system would frequently context-switch away from our video stream, we still got frequent lag spikes. We ended up solving this problem by decreasing the priority of our driving script relative to the video stream on the Pi.
## Accomplishments that we're proud of
We're proud of the fact that we were able to model and design a robot that is relatively sturdy in such a short time. We're also really proud of the fact that we were able to interface the Amazon Alexa skill with the cloud server, as nobody on our team had done an Alexa skill before. However, by far, the accomplishment that we are the most proud of is the fact that our video stream latency from the raspberry pi to the cloud is low enough that we can reliably navigate the robot with that data.
## What we learned
Through working on the project, our team learned how to write a skill for Amazon Alexa, how to design and model a robot to fit specific hardware, and how to program and optimize a socket application for multiple incoming connections in real time with minimal latency.
## What's next for CloudChaser
In the future we would ideally like for Chase to be able to compress a higher quality video stream and have separate PWM drivers for the servo motors to enable higher precision turning. We also want to try to make Chase aware of his position in a 3D environment and track his distance away from objects, allowing him to "tail" objects instead of just chase them.
## CloudChaser in the news!
<https://medium.com/penn-engineering/object-seeking-robot-wins-pennapps-xvii-469adb756fad>
<https://penntechreview.com/read/cloudchaser>
|
## Inspiration
Minecraft has an interesting map mechanic where your character holds a map which "draws itself" while exploring the world. I am also very interested in building a plotter, which is a printer that uses a pen and (XY) gantry to produce images. These ideas seemed to fit together quite well.
## What it does
Press a button, copy GPS coordinates and run the custom "gcode" compiler to generate machine/motor driving code for the Arduino. Wait around 15 minutes for a 48 x 48 output.
## How we built it
Mechanical assembly - Tore apart 3 DVD drives and extracted a multitude of components, including sled motors (linear rails). Unfortunately, they used limit switch + DC motor rather than stepper, so I had to saw apart the enclosure and **glue** in my own steppers with a gear which (you guessed it) was also glued to the motor shaft.
Electronics - I designed a simple algorithm to walk through an image matrix and translate it into motor code, that looks a lot like a video game control. Indeed, the stepperboi/autostepperboi main source code has utilities to manually control all three axes like a tiny claw machine :)
U - Pen Up
D - Pen Down
L - Pen Left
R - Pen Right
Y/T - Pen Forward (top)
B - Pen Backwards (bottom)
Z - Zero the calibration
O - Return to the previously zeroed position
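The command set above can be sketched as a tiny interpreter (a simplified reconstruction, not the actual stepperboi/autostepperboi source):

```python
# Each move command steps the pen one unit along one axis; names are
# illustrative and mirror the single-letter command list above.
MOVES = {
    "L": (-1, 0, 0), "R": (1, 0, 0),      # left / right
    "Y": (0, 1, 0), "T": (0, 1, 0),       # forward (top)
    "B": (0, -1, 0),                      # backwards (bottom)
    "U": (0, 0, 1), "D": (0, 0, -1),      # pen up / pen down
}

def run_program(commands, start=(0, 0, 0)):
    """Walk a command string and return the final (x, y, z) pen position."""
    x, y, z = start
    zero = start
    for c in commands:
        if c == "Z":            # zero the calibration at the current spot
            zero = (x, y, z)
        elif c == "O":          # return to the previously zeroed position
            x, y, z = zero
        elif c in MOVES:
            dx, dy, dz = MOVES[c]
            x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)
```

The real machine code of course drives stepper pulses rather than tracking coordinates, but the control flow is the same claw-machine idea.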
## Challenges we ran into
* I have no idea about basic mechanics / manufacturing, so it's pretty slipshod; the fractional resolution I managed to extract is impressive in its own right
* Designing my own 'gcode' simplification was a little complicated, and produces strange, pointillist results. I like it though.
## Accomplishments that we're proud of
* 24 hours and a pretty small cost in parts to make a functioning plotter!
* Connected to mapbox api and did image processing quite successfully, including machine code generation / interpretation
## What we learned
* You don't need to take MIE243 to do low precision work, all you need is superglue, a glue gun and a dream
* GPS modules are finnicky and need to be somewhat near to a window with built in antenna
* Vectorizing an image is quite a complex problem
* Mechanical engineering is difficult
* Steppers are *extremely* precise, and I am quite surprised at the output quality given that it's barely held together.
* Iteration for mechanical structure is possible, but difficult
* How to use rotary tool and not amputate fingers
* How to remove superglue from skin (lol)
## What's next for Cartoboy
* Compacting the design so it can fit in a smaller profile, and work more like a polaroid camera as intended. (Maybe I will learn solidworks one of these days)
* Improving the gcode algorithm / tapping into existing gcode standard
|
## Inspiration
Sometimes in lecture you need to point to something tiny on the presentation but no one really knows what you're pointing to. So we decided to build something that can read where you are pointing using a camera, and points a laser in that direction, which makes engagement in lectures and presentations much more accessible. We also realized that this idea actually branches off into a lot of other potential accessibility applications; it allows robotic control with pure human actions as input. For example, it could help artists paint large canvases by painting at where they point, or even be used as a new type of remote control if we replaced the laser with an RF signal.
## What it does
It tracks where the user's forearm is using a fully custom-built computer vision object detection program. All dimensions of the forearm are approximated via only one camera and it is able to generate a set of projected XY coordinates on the supposed presentation for the laser to point to. This corresponds to where the user is pointing at.
## How we built it
We heavily used OpenCV in Python to build the entire computer vision framework, which was tied to a USB webcam. Generated projection points were sent to an ESP32 via WiFi, which fed separate coordinates to a dual-servo motor system which then moves the laser pointer to the correct spot. This was done using Arduino.
## Challenges we ran into
First of all, none of us had actually used OpenCV on a project this size, especially not for object tracking. This took a lot of on-the-spot learning, online tutorials, and experimenting.
There were also plenty of challenges that all revolved around the robustness of the system. Sometimes the contour detection software would detect multiple contours, so a challenge was finding a way to join them so the system wouldn't break. The projection system was quite off at the start, so a lot of manual tuning had to be done to fix that. The wifi data transmission also took a long time to figure out, as none of us had ever touched that stuff before.
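A hedged sketch of one way to join fragmented contour detections so the system sees a single forearm blob (the helper name is ours, and this stands in for the OpenCV calls we used):

```python
import numpy as np

def merge_contours_bbox(contours):
    """Combine fragmented contour detections into one bounding box.

    Each contour is an (N, 1, 2) array of pixel points, the shape
    cv2.findContours returns; we stack all points and take the box
    that encloses them, returned as (x, y, width, height).
    """
    pts = np.vstack([np.asarray(c).reshape(-1, 2) for c in contours])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)
```

An alternative with the same stacking trick is `cv2.convexHull` on the combined points, which keeps an outline instead of a box.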
## Accomplishments that we're proud of
We're quite proud of the fact that we were able to build a fully functional object tracking system without any premade online code in such a short amount of time, and how robust it was in action. It was also quite cool to see the motors react in real time to user input.
## What we learned
We learned some pretty advanced image processing and video capture techniques in OpenCV, and how to use the ESP32 controller to do stuff.
## What's next for Laser Larry
The biggest step is to make the projection system more accurate, as this would take a lot more tuning. Another camera also wouldn't hurt to get more accurate readings, and it would be cool to expand the idea to more accessibility applications discussed above.
|
winning
|
## Inspiration
2 days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find it- But what if there was a personal assistant that remembered everything for you? Memories should be made easier with the technologies we have today.
## What it does
A camera records you as you go about your day-to-day life, storing "comic book strip" panels containing images and context of what you're doing. When you want to remember something you can ask out loud, and it'll use OpenAI's API to search through its "memories" to bring up the location, time, and your action when you lost it. This can help with knowing where you placed your keys, whether you locked your door/garage, and other day-to-day tasks.
## How we built it
The React-based UI records using your webcam, screenshotting every second and stopping at the 9-second mark before creating a 3x3 comic image. This was done because static images alone would not give enough context for certain scenarios, and we wanted to reduce the rate of API requests per image. After generating this image, the app sends it to OpenAI's turbo vision model, which then gives contextualized info about the image. This info is then sent to our Express.js service hosted on Vercel, which in turn parses the data and sends it to Cloud Firestore (stored in a Firebase database). To re-access this data, we use the browser's built-in speech recognition along with the SpeechSynthesis API to communicate back and forth with the user. The user speaks, the dialogue is converted into text and processed by OpenAI, which then classifies it as either a search for an action or an object find. It then searches through the database and speaks out loud, giving information with a naturalized response.
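The 3x3 panel assembly can be sketched like this (a simplified stand-in for our actual compositing, which happens in the browser; frames are assumed to be equal-sized RGB arrays):

```python
import numpy as np

def make_comic_strip(frames, rows=3, cols=3):
    """Tile nine screenshots (each an H x W x 3 array) into one panel.

    Nine one-second frames give the vision model temporal context that
    a single static image lacks, at one API request per 9 seconds.
    """
    assert len(frames) == rows * cols, "need exactly rows*cols frames"
    grid = [np.hstack(frames[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid)
```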
## Challenges we ran into
We originally planned on using a VR headset, webcam, NEST camera, or anything external with a camera, which we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to combat this, we decided to make use of MacOS's continuity feature, using our iPhone camera connected to our macbook as our primary input.
## Accomplishments that we're proud of
As a two person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.JS and Firebase, so we're proud of how fast we were able to learn!
## What we learned
We learned about OpenAI's turbo vision API capabilities, how to work together as a team, how to sleep effectively on a couch and with very little sleep.
## What's next for ReCall: Memories done for you!
We originally had a vision for people with amnesia and memory loss problems, where there would be a catalogue for the people that they've met in the past to help them as they recover. We didn't have too much context on these health problems however, and limited scope, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
|
## Inspiration
Have you ever met someone, but forgot their name right afterwards?
Our inspiration for INFU comes from our own struggles to remember every detail of every conversation. We all deal with moments of embarrassment or disconnection when failing to remember someone’s name or details of past conversations.
We know these challenges are not unique to us, but actually common across various social and professional settings. INFU was born to bridge the gap between our human limitations and the potential for enhanced interpersonal connections—ensuring no details or interactions are lost to memory again.
## What it does
By attaching a camera and microphone to a user, we can record different conversations with people by transcribing the audio and categorizing using facial recognition. With this, we can upload these details onto a database and have it summarised by an AI and displayed on our website and custom wrist wearable.
## How we built it
There are three main parts to the project. The first part is the hardware which includes all the wearable components. The second part includes face recognition and speech-to-text processing that receives camera and microphone input from the user's iPhone. The third part is storing, modifying, and retrieving data of people's faces, names, and conversations from our database.
The hardware comprises an ESP-32, an OLED screen, and two wires that act as touch buttons. These touch buttons act as record and stop recording buttons which turn on and off the face recognition and microphone. Data is sent wirelessly via Bluetooth to the laptop which processes the face recognition and speech data. Once a person's name and your conversation with them are extracted from the current data or prior data from the database, the laptop sends that data to the wearable and displays it using the OLED screen.
The laptop acts as the control center. It runs a backend Python script that takes in data from the wearable via Bluetooth and iPhone via WiFi. The Python Face Recognition library then detects the speaker's face and takes a picture. Speech data is subsequently extracted from the microphone using the Google Cloud Speech to Text API which is then parsed through the OpenAI API, allowing us to obtain the person's name and the discussion the user had with that person. This data gets sent to the wearable and the cloud database along with a picture of the person's face labeled with their name. Therefore, if the user meets the person again, their name and last conversation summary can be extracted from the database and displayed on the wearable for the user to see.
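The face-matching step can be sketched as follows (a simplified stand-in for the Face Recognition library's matching, using the same Euclidean-distance idea on its 128-d encodings; the helper name is ours):

```python
import numpy as np

def match_face(known_encodings, new_encoding, tolerance=0.6):
    """Return the index of the best-matching stored face, or None.

    Encodings are 128-d vectors like those produced by the
    face_recognition library; 0.6 is that library's usual tolerance.
    A match means we can pull up the person's name and last
    conversation summary from the database.
    """
    if not known_encodings:
        return None
    dists = np.linalg.norm(np.asarray(known_encodings) - new_encoding, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= tolerance else None
```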
## Accomplishments that we're proud of
* Creating an end product with a complex tech stack despite various setbacks
* Having a working demo
* Organizing and working efficiently as a team to complete this project over the weekend
* Combining and integrating hardware, software, and AI into a project
## What's next for Infu
* Further optimizing our hardware
* Develop our own ML model to enhance speech-to-text accuracy to account for different accents, speech mannerisms, languages
* Integrate more advanced NLP techniques to refine conversational transcripts
* Improve user experience by employing personalization and privacy features
|
## Inspiration
The inspiration for GithubGuide came from our own experiences working with open-source projects and navigating through complex codebases on GitHub. We realized that understanding the purpose of each file and folder in a repository can be a daunting task, especially for beginners. Thus, we aimed to create a tool that simplifies this process and makes it easier for developers to explore and contribute to GitHub projects.
## What it does
GithubGuide is a Google Chrome extension that takes any GitHub repository as input and explains the purpose of each file and folder in the repository. It uses the GitHub API to fetch repository contents and metadata, which are then processed and presented in an easily understandable format. This enables developers to quickly navigate and comprehend the structure of a repository, allowing them to save time and work more efficiently.
## How we built it
We built GithubGuide as a team of four. Here's how we split the work among teammates 1, 2, 3, and 4:
1. Build a Chrome extension using JavaScript, which serves as the user interface for interacting with the tool.
2. Develop a comprehensive algorithm and data structures to efficiently manage and process the repository data and LLM-generated inferences.
3. Configure a workflow to read repository contents into our chosen LLM ChatGPT model using a reader built on LLaMa - a connector between LLMs and external data sources.
4. Build a server with Python Flask to communicate data between the Chrome extension and LLaMa, the LLM data connector.
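Reading repository contents (step 3 above) starts with GitHub's public REST API; here is a hedged sketch, where `summarize_contents` is our own illustrative helper for shaping the data before it reaches the LLM:

```python
import json
import urllib.request

def list_repo_contents(owner, repo, path=""):
    """Fetch one directory level of a GitHub repo via the REST API.

    Returns a list of {"name", "type"} entries, where type is
    "file" or "dir". Unauthenticated requests are rate-limited.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        items = json.load(resp)
    return [{"name": i["name"], "type": i["type"]} for i in items]

def summarize_contents(items):
    """Split API items into files and folders for the explanation prompt."""
    files = [i["name"] for i in items if i["type"] == "file"]
    dirs = [i["name"] for i in items if i["type"] == "dir"]
    return {"files": files, "dirs": dirs}
```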
## Challenges we ran into
Throughout the development process, we encountered several challenges:
1. Integrating the LLM data connector with the Chrome extension and the Flask server.
2. Parsing and processing the repository data correctly.
3. Engineering our ChatGPT prompts to get optimal results.
## Accomplishments that we're proud of
We are proud of:
1. Successfully developing a fully functional Chrome extension that simplifies the process of understanding GitHub repositories.
2. Overcoming the technical challenges in integrating various components and technologies.
3. Creating a tool that has the potential to assist developers, especially beginners, in their journey to contribute to open-source projects.
## What we learned
Throughout this project, we learned:
1. How to work with LLMs and external data connectors.
2. The intricacies of building a Chrome extension, and how developers have very little freedom when developing browser extensions.
3. The importance of collaboration, effective communication, and making sure everyone is on the same page within our team, especially when merging critically related modules.
## What's next for GithubGuide
We envision the following improvements and features for GithubGuide:
1. Expanding support for other browsers and platforms.
2. Enhancing the accuracy and quality of the explanations provided by ChatGPT.
3. Speeding up the pipeline.
4. Collaborating with the open-source community to further refine and expand the project.
|
winning
|
## Inspiration
As STEM students, many of us have completed online certification courses on various websites such as Udemy, Codecademy, Educative, etc. Many classes on these sites provide the user with a unique certificate of completion after passing their course. We wanted to take the authentication of these digital certificates to the next level.
## What it does
Our application functions as a site similar to the ones mentioned earlier; providing users with a plethora of certified online courses, but what sets us apart is our creative use of web3, allowing users to access their certificates directly from the blockchain, guaranteeing their authenticity to the utmost degree.
## How we built it
For our frontend, we created our design in Figma and coded it using the Vue framework. Our backend was done in Python via the Flask framework. The database we used to store users and courses was SQLite. The certificate generation was accomplished in Python via the Pillow library. To convert images into NFTs, we used Verbwire for its easy-to-use minting procedure.
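A minimal sketch of the Pillow-based certificate generation (the layout coordinates and text here are illustrative, not our production template):

```python
from PIL import Image, ImageDraw, ImageFont

def make_certificate(student, course, size=(800, 600)):
    """Render a simple completion certificate as a Pillow image.

    The returned image can be saved to PNG and handed to the
    minting step to become an NFT.
    """
    cert = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(cert)
    font = ImageFont.load_default()
    draw.text((60, 200), "Certificate of Completion", fill="black", font=font)
    draw.text((60, 260), f"Awarded to {student}", fill="black", font=font)
    draw.text((60, 300), f"for completing {course}", fill="black", font=font)
    return cert
```

Usage: `make_certificate("Ada", "Intro to Web3").save("cert.png")`.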
## Challenges we ran into
We ran into quite a few challenges throughout our project. The first was that none of us had any meaningful web3 experience. Luckily for us, Verbwire had a quite straightforward minting process and even generated some of the code for us.
## Accomplishments that we're proud of
Although our end result is not everything we dreamt of 24 hours ago, we are quite proud of what we were able to accomplish. We created quite an appealing website for our application, a Python script that generates custom certificates, and a powerful backend capable of storing data for our users and courses.
## What we learned
For many of us, this was a new and unique collaborative experience in software development. We learned quite a bit on task distribution and optimization as well as key takeaways for creating code that is not only maintainable, but also transferable to other developers during the development process. More technically, we learned how to create simple databases via SQLite, we learned how to automate image generation via Python, and learned the steps of making a unique and appealing front-end design, starting from the prototype all the way to the final product.
## What's next for DiGiDegree
Moving forward, we would like to migrate our database to Postgres to handle higher traffic. We would also like to implement a Redis cache to improve the hit ratio and speed up search times. We would also like to populate our website with more courses and improve our backend security by abstracting away SQL queries to further protect against SQL injection attacks.
|
## Inspiration
As a team we wanted to pursue a project that we could see as a creative solution to an important issue currently and may have a significant impact for the future. GM's sponsorship challenge provided us with the most exciting problem to tackle - overcoming the limitations in electric vehicle (EV) charging times. We as a team believe that EVs are the future in transportation and our project reflects those ideals.
## What it does
Due to the rapid adoption of EVs in the near future and the slower progress of charging station infrastructure, waiting for a charge could become a serious issue. Sharger is a mobile/web application that connects EV drivers with EV owners. If charging stations are full and waitlists are too long, EV drivers cannot realistically wait hours for other drivers to finish charging. Hence, we connect them with EV owners who rent out their home charging stations. Drivers have access to a map with markers for nearby homes made available by the owners. Through the app they can book availability at those homes and save time otherwise spent waiting at public charging stations. Home owners are able to fully utilize their home chargers by using them during the night for their own EVs and renting them out to other EV owners during the day.
## How we built it
The app was written in JavaScript and built using React, Express, and MongoDB. It starts with a homepage with login and signup screens. From there, drivers can use the map, which was developed with the Google Cloud API. The map allows drivers to find nearby homes by displaying markers for all nearby homes made available by the owners. The drivers can then book a time slot. The owners have access to a separate page that allows them to list their homes, similar to Airbnb's format. Instead of bedrooms and bathrooms, though, they list their charger type, charger voltage, and bedding availability, in addition to a photo of the home and its address. Home owners have the option to make their homes available whenever they want; making a home unavailable removes its marker from the drivers' map.
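The availability logic can be sketched as follows (field names are illustrative, not our actual MongoDB schema):

```python
def visible_markers(homes):
    """Return map markers only for homes owners have marked available.

    Toggling a home's "available" flag off removes its marker from
    the drivers' map without deleting the listing itself.
    """
    return [
        {"id": h["id"], "lat": h["lat"], "lng": h["lng"]}
        for h in homes
        if h.get("available")
    ]
```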
## Challenges we ran into
As a team we faced many challenges both technical and non-technical. The concept of our project is complex so we were heavily constrained by time. Also all of us learned a new tool in order to adapt to our project requirements.
## Accomplishments that we're proud of
As a team we are really proud of our idea and our team effort. We truly believe that our idea, through its capability to utilize a convenient resource in unused home chargers, will help contribute to the widespread adoption of EVs in the near future. All of our team members worked very hard and learned new technologies and skills to overcome challenges, in order to develop the best possible product we could in our given time.
## What we learned
* Express.js
* Bootstrap
* Google Cloud API
## What's next for Sharger
* implement a secure authorization feature
* implement a built-in navigation system or outsource navigation to google maps
* outsource bedding feature to Airbnb
* home rating feature
* develop a bookmark feature for drivers to save home locations
* implement an estimated waiting time based off past home charging experiences
|
## What inspired us:
The pandemic has changed the university norm to primarily online courses, increasing our usage of and dependency on textbooks and course notes. Since we are all computer science students, we have many math courses with several definitions and theorems to memorize. When listening to a professor's lecture, we often forget certain theorems that are being referred to. With discussAI, we can easily query the PostgreSQL database with a command and receive an image from the textbook explaining the definition/theorem. Thus, we decided to use our knowledge of machine learning libraries to filter out these pieces of information.
We believe that our program’s concept can be applied to other fields, outside of education. For instance, business meetings or training sessions can utilize these tools to effectively summarize long manuals and to search for keywords.
## What we learned:
We had a lot of fun building this application since we were new to using Microsoft Azure services. We learned how to integrate tools such as Azure OCR and sklearn for processing our information, and we deepened our knowledge in frontend (Angular.js) and backend (Django and Postgres).
## How we built it:
We built our web application's frontend using Angular.js for our components and Agora.io to allow video conferencing. On our backend, we used Django and PostgreSQL to handle API requests from our frontend. We also used several Python libraries to convert the PDF file to PNG images, utilized Azure OCR to analyze these text images, applied the sklearn library to analyze the individual text, and finally cropped the images to return specific snippets of definitions/theorems.
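The final cropping step can be sketched like this (a simplified illustration; the helper name and bounding-box format are ours, with boxes assumed to come from the OCR output in pixel coordinates):

```python
from PIL import Image

def crop_snippet(page, bbox, pad=10):
    """Crop a definition/theorem region out of a rendered page image.

    bbox is (left, top, right, bottom) in pixels, e.g. derived from
    OCR line bounding boxes; a little padding keeps the snippet
    readable, clamped so we never crop outside the page.
    """
    if isinstance(page, str):
        page = Image.open(page)
    left, top, right, bottom = bbox
    w, h = page.size
    box = (max(left - pad, 0), max(top - pad, 0),
           min(right + pad, w), min(bottom + pad, h))
    return page.crop(box)
```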
## Challenges we faced:
The most challenging part was deciding our ML algorithm to derive specific image snippets from lengthy textbooks. Some other challenges we faced varies from importing images from Azure Storage to positioning CSS components. Nevertheless, the learning experience was amazing with the help of mentors, and we hope to participate again in the future!
|
partial
|
## Inspiration
Have you ever wished to give a memorable dining experience to your loved ones, regardless of their location? We were inspired by the desire to provide our friends and family with a taste of our favorite dining experiences, no matter where they might be.
## What it does
It lets you book and pay for a meal for someone you care about.
## How we built it
Languages and tools:
JavaScript, HTML, MongoDB, Aello API
Methodologies:
- Simple and accessible UI
- Database management
- Blockchain contract validation
- AI chatbot
## Challenges we ran into
1. We had to design a friendly front-end user interface for both customers and restaurant partners, each of which has its own functionality. Furthermore, we needed to integrate numerous concepts into our backend system, aggregating information from various APIs and utilizing Google Cloud for the storage of user data.
2. Given the abundance of information requiring straightforward organization, we had to carefully consider how to ensure an efficient user experience.
## Accomplishments that we're proud of
We designed a product-development flow that clearly shows the potential of the idea to scale in the future.
## What we learned
1. System Design: Through this project, we have delved deep into the intricacies of system design. We've learned how to architect and structure systems efficiently, considering scalability, performance, and user experience. This understanding is invaluable as it forms the foundation for creating robust and user-friendly solutions.
2. Collaboration: Working as a team has taught us the significance of effective collaboration. We've realized that diverse skill sets and perspectives can lead to innovative solutions. Communication, coordination, and the ability to leverage each team member's strengths have been essential in achieving our project goals.
3. Problem-Solving: Challenges inevitably arise during any project. Our experiences have honed our problem-solving skills, enabling us to approach obstacles with creativity and resilience. We've learned to break down complex issues into manageable tasks and find solutions collaboratively.
4. Adaptability: In the ever-evolving field of technology, adaptability is crucial. We've learned to embrace new tools, technologies, and methodologies as needed to keep our project on track and ensure it remains relevant in a dynamic landscape.
## What's next for Meal Treat
We want to integrate more tools for personalization, including a chatbot that helps customers RSVP their spot at the restaurant. This chatbot, built on Google Cloud's Dialogflow, will be trained to handle scheduling tasks. Next, we plan to use Twilio's services to communicate with our customers through SMS. We also expect to incorporate blockchain technology to encrypt customer information, making it easier for the restaurant to manage and enhancing protection, especially given our international services. Lastly, we aim to design an ecosystem that enhances the dining experience for everyone and fosters stronger relationships through meal care.
|
## Inspiration
Studies have shown that music-based interventions may reduce stress-related symptoms, prevalent in many mental and physical health conditions such as cardiovascular diseases and anxiety disorders. Inspired by the potential advantages of auditory stimulation, ♥BeatBops aims to improve music recommendations and provide relief by emulating music therapy techniques, using musical elements such as rhythm, tempo, dynamics, and pitch.
References:
Allarakha, S. (n.d.). What Is Heart Rate Variability (HRV)? Normal, High, and Low (P. Suyog Uttekar, Ed.). MedicineNet. <https://www.medicinenet.com/what_is_heart_rate_variability_hrv/article.htm>
De Witte M, Pinho ADS, Stams GJ, Moonen X, Bos AER, van Hooren S. Music therapy for stress reduction: a systematic review and meta-analysis. Health Psychol Rev. 2022;16(1):134-159. doi:10.1080/17437199.2020.1846580
Kulinski J, Ofori EK, Visotcky A, Smith A, Sparapani R, Fleg JL. Effects of music on the cardiovascular system. Trends Cardiovasc Med. 2022;32(6):390-398. doi:10.1016/j.tcm.2021.06.004
## What it does
BeatBops uses the Spotify API to create music recommendations based on references such as genre, danceability, energy scores, and more to generate music suited to an individual's heart rate variability (HRV) data, a measure of variations between successive heartbeats. HRV may be used as an observational indicator of health - a low HRV suggests a fight-or-flight response, possibly triggered by negative emotions, while a high HRV indicates a relaxation state, possibly suggesting greater cardiovascular health and resilience to stress.
## How we built it
We generated song recommendations using the Spotify API, and built the backend in Python, using its micro web framework Flask to implement a web app that executes the Python code. We also used HTML to create and style the display of the application.
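A hedged sketch of how an HRV reading might be translated into Spotify recommendation targets (the thresholds here are illustrative, not clinically validated; the keys match the `target_*` parameters of Spotify's recommendations endpoint):

```python
def hrv_to_audio_targets(hrv_ms):
    """Map an HRV reading (e.g. RMSSD in ms) to audio-feature targets.

    A low HRV (stress) steers the recommendation toward calm,
    low-energy tracks; a high HRV allows more energetic picks.
    The returned dict is passed straight through as query
    parameters to Spotify's /recommendations endpoint.
    """
    if hrv_ms < 30:    # likely stressed: calm the listener down
        return {"target_energy": 0.2, "target_tempo": 70, "target_valence": 0.5}
    if hrv_ms < 60:    # neutral range
        return {"target_energy": 0.5, "target_tempo": 100, "target_valence": 0.6}
    return {"target_energy": 0.8, "target_tempo": 125, "target_valence": 0.8}
```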
## Challenges we ran into
Along the way, we ran into some challenges such as restrictions in time and limited experience on the tools we used. Our initial intention was to obtain live heart rate variability data from the Health App from Apple Watches but our attempts were unsuccessful. Further, we struggled to classify HRV levels with Spotify's track audio features due to the current limited research on the effect of music on cardiovascular health.
## Accomplishments that we're proud of
One accomplishment that we are proud of is making a project that performs both on the front end and the back end. For all of our teammates, it was our first time learning new tools such as Flask and Spotify API and creating an app that incorporates any API. We are also proud to have spent time designing our app. We take pride in learning this new knowledge and skills that we will continue to practice in the future.
## What we learned
Through this project, we learned how important it is to nail down the idea first and foremost, as this possibly would be the most crucial step in starting a hack. We also learned to apply our current knowledge of Python and HTML to create our app collaboratively, and all team members have expanded their coding skills by learning new frameworks and APIs.
## What's next for ♥BeatBops
In the future, we would like to develop ♥BeatBops further to use real heart rate variability data from smartwatches to generate the tracks and create a user-friendly interface that can be deployed as a smartwatch app (i.e. WatchOS app). This way, users can use dependable heart rate variability data to generate their own personalized music recommendations. We would also like to do further research on the current literature surrounding the influence of auditory stimulations on physical and mental health to better generate song recommendations that are of benefit to the users.
|
## INSPIRATION:
**Have you ever been on a lunch break at work, or passed by a restaurant and really wanted to try it, but had no one to go with at the time? Have some anxiety about eating alone?** We here at team TableForTwo understand the feeling. So we built an app that helps people find an eating buddy, and potentially build some new social connections!
After all, food tastes better when it's shared, right?
## WHAT IT DOES:
Search restaurants, see who's searching for a buddy, and make a reservation to connect with a fellow lone eater. Connect with Facebook to get going, and browse restaurants in your local vicinity. Clicking a restaurant will pull up a reservation modal where you can submit a reservation request or view the information tags for the joint.
Once someone views your request and accepts it, a 'reservation' will be made (no real reservation made at the restaurant though), and you two will be connected!
Now **get eating**.
## How we built it:
HTML, CSS, Node.js, Angular
## Challenges we ran into:
We hit some limits on APIs, then APIs went down...
## What we learned:
All The Things.
|
losing
|
## Our Inspiration
Every week, researchers invest significant time staying updated with the latest scientific publications. The painstaking process of searching for relevant papers diverts valuable time from their core research activities, potentially hindering scientific advancement. Over a year, this translates into considerable time and financial inefficiencies.
We believe that by leveraging artificial intelligence, our project can profoundly enhance the efficiency of this process. We have developed an AI-powered recommendation system that adeptly identifies and suggests high-quality, pertinent research papers.
Moreover, our system customizes its recommendations by analyzing each researcher's reading history, guaranteeing both relevance and significance. Notably, our platform delivers recommendations on a per-project basis. This means that even if a researcher is juggling multiple projects simultaneously, our recommendations remain distinct and organized, preventing any overlap or confusion.
## What it does
### ResearchRadar.ai's features:
* Using our AI-powered recommendation system, we help users find high-quality relevant papers
* After a user finishes reading a paper, our system will recommend new research papers
* Search papers with AI reordering using Metaphor’s API
* Save research papers to a reading list
* See important keywords for each paper based on the YAKE Python library
* Annotate and view PDFs
* Create multiple projects to maintain an organized reading experience
## How we built it
AI-powered recommendation system: We utilized the state-of-the-art "Neural Collaborative Filtering" algorithm via the Microsoft Recommenders Python library, which extends matrix factorization with a multi-layer perceptron. This approach enables us to recommend papers that align with the broader consensus of the research community.
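A minimal sketch of the idea behind Neural Collaborative Filtering (toy embedding and layer sizes, not the actual Microsoft Recommenders configuration): researcher and paper IDs are embedded, the embeddings are concatenated, and an MLP predicts an interaction score.

```python
import torch
import torch.nn as nn

class NCF(nn.Module):
    """MLP branch of Neural Collaborative Filtering (illustrative sizes)."""

    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # The MLP replaces the fixed dot product of matrix factorization.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, user_ids, item_ids):
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return self.mlp(x).squeeze(-1)  # predicted interaction probability

model = NCF(n_users=100, n_items=500)
score = model(torch.tensor([3]), torch.tensor([42]))
```

Training would then fit these embeddings and MLP weights against the observed researcher-paper reading history.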
In addition, we used the following tech stack to build our product.
**Frontend:** JavaScript, Chakra UI, React.js, Next.js
**Backend:** Python, Metaphor API, Microsoft Recommender library, PyTorch, Flask, YAKE (Yet Another Keyword Extractor Python library), Git
## Challenges we ran into
While building ResearchRadar.ai, we ran into a few technical challenges, but we resolved them after ample brainstorming among ourselves and mentorship from the hackathon experts. Firstly, we needed to rapidly learn new technologies without having previous exposure. For example, some of us did not have experience working with Flask or Chakra UI, but we were able to pick up these technologies quickly.
Secondly, a major challenge we encountered was integrating the frontend and backend. Specifically, we found it difficult to implement the machine learning model because it took a fair amount of time to run, which led us to run the ML model asynchronously, causing some synchronization issues.
## Accomplishments that we're proud of
Our group is extremely proud of the fact that we were able to get the recommender system to function properly. The ML aspects were technically challenging, and this feature served as the basis for our project. This feature was our MVP, and our hard work and dedication to the project was instrumental in achieving our project goal. Furthermore, when we successfully integrated the frontend and the backend together, we felt accomplished to present a working product for users to enjoy.
## What we learned
We learned the common techniques and algorithms used to form the basis of recommender systems. In learning about the differences between content-based filtering versus collaborative filtering, we discussed the pros and cons of each one, and ultimately decided to implement collaborative filtering for our recommender system. We also learned about the flow of data and how data is managed between the frontend and backend portions.
## What's next for ResearchRadar.ai
For future features, we hope to add a community feature to give researchers a platform and an exciting opportunity to discuss papers and voice their opinions. Additionally, we hope to add a feature that allows scientists to "tag" papers, providing the ability to categorize research papers by topic. We believe that our platform can be a one-stop shop for researchers to catch up on the latest trends and further their knowledge.
|
## Inspiration
One of my team members, Thor, was looking for citations for his psychology research paper while on the bus ride up from SoCal. He had a psychology paper due and as he scrambled to finish the assignment, he struggled to find articles to cite due to the fragmented landscape of paywalled academic journals and lackluster indexing. We looked into his problem and realized that research is very unfriendly to those without the means.
Access to academic journals is expensive (>$500 for one person), and the biggest commercial indexers, such as Web of Science or Scopus, charge for using their search. Open alternatives such as IEEE Xplore and arXiv either lack the breadth of research or, as preprints, lack vetting. A further problem is the detrimental cycle driven by science journalism: when scientists are at the mercy of research journals to expose their work for grants, and journals need clicks to drive revenue, the scientific community as a whole pays the price. This felt like an issue that needed to be tackled.
zKnowledgeBase is a decentralized research platform that eliminates paywalls, enables free sharing of verified research without third-party control, and mitigates censorship risks - empowering academics with open and unbiased access to knowledge.
## What it does
We built a decentralized web platform allowing users to search and store research articles, immutably and forever. Users can upload research PDFs and search for articles using our vector-embedded search. We secure our articles with a Merkle Tree, where the root is publicly available on the Avalanche Blockchain.
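To illustrate the commitment scheme (sketched here in Python for readability; our actual implementation is in Zig): each uploaded paper becomes a hashed leaf, parents hash the concatenation of their children, and the single root published on Avalanche commits to the entire set.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of file contents into a single 32-byte Merkle root."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Toy stand-ins for uploaded PDF bytes.
root = merkle_root([b"paper one", b"paper two", b"paper three"])
```

Anyone holding the on-chain root can then check an inclusion proof for a paper retrieved from IPFS without trusting our servers.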
## How we built it
We allow users to upload research papers in PDF form. We store their submissions on IPFS, a peer-to-peer distributed file system, allowing for reliable uptime and free access. At the same time, we chunk and vector-embed the paper, using LangChain and together.ai to embed it into a several-hundred-dimensional vector stored in ChromaDB's vector database. When a user searches our platform, we vector-embed the search query and use cosine similarity to compare it against the stored vectors. We then present the most similar papers in a scrollable format. Finally, we used a Merkle tree built with Zig for submission security and to verify that papers retrieved from IPFS came from our uploads.
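The retrieval step can be sketched as follows, with toy vectors standing in for the real together.ai embeddings and the ChromaDB calls:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 = identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stored paper embeddings (toy 3-dimensional stand-ins).
paper_vectors = {
    "paper_a": np.array([1.0, 0.0, 0.2]),
    "paper_b": np.array([0.1, 0.9, 0.3]),
}

# Embed the search query the same way, then rank papers by similarity.
query = np.array([0.9, 0.1, 0.1])
ranked = sorted(paper_vectors,
                key=lambda p: cosine_similarity(query, paper_vectors[p]),
                reverse=True)
```

In production the `sorted` scan is replaced by ChromaDB's indexed nearest-neighbor query, but the ranking criterion is the same.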
## Challenges we ran into
We used a lot of new technologies for the first time including Merkle Trees, Zig, Vector Databases, and we had to read a lot of documentation and learn quickly to finish on time. The integration of the front end and backend took some time.
## Accomplishments that we're proud of
We used Zig for the first time and managed to code a complicated Delta Merkle Proof quickly and correctly despite time constraints. We were able to follow a plan from ideation to submission.
## What we learned
We learned that it is important to budget your time wisely and spend time with system design.
## What's next for zKnowledge Base:
RICHER METADATA: Publication, # of Citations, etc
INCENTIVIZE UPLOADS: Spread the word and get more engagement
TOKENS: Distribute tokens for decentralized governance
|
## Inspiration
There has never been a more relevant time in political history for technology to shape our discourse. Clara AI can help you understand what you're reading, giving you political classification and sentiment analysis so you understand the bias in your news.
## What it does
Clara searches for news on an inputted subject and classifies its political leaning and sentiment. She can accept voice commands through our web application, searching for political news on a given topic, and if further prompted, can give political and sentiment analysis. Clara achieves 88% accuracy on our test set when predicting political leaning; she was trained using a random forest and many hours of manual classification. Clara gives sentiment scores with the help of the IBM Watson and Google Sentiment Analysis APIs.
## How we built it
We built a fundamental technology using a plethora of Google Cloud Services on the backend, trained a classifier to identify political leanings, and then created multiple channels for users to interact with the insight generated by our algorithms.
For our backend, we used Flask + Google Firebase. Within Flask, we used the Google Search Engine API, Google Web Search API, Google Vision API, and Sklearn to conduct analysis on the news source inputted by the user.
For our web app we used React + Google Cloud Speech Recognition API (the app responds to voice commands). We also deployed a Facebook Messenger bot, as many of our users find their news on Facebook.
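A minimal sketch of the leaning classifier: article text is vectorized and fed to a random forest. The four training snippets and labels below are toy stand-ins for our manually classified corpus, and the pipeline settings are illustrative, not our tuned configuration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy stand-ins for manually labeled articles.
train_texts = [
    "tax cuts deregulation free market growth",
    "universal healthcare climate action workers rights",
    "border security second amendment lower taxes",
    "social programs minimum wage public education",
]
train_labels = ["right", "left", "right", "left"]

# TF-IDF features feeding a random forest, end to end in one pipeline.
clf = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=50, random_state=0),
)
clf.fit(train_texts, train_labels)

pred = clf.predict(["raise the minimum wage and fund public education"])[0]
```

The real system applies the same predict step to article text fetched via the Google search APIs before returning the leaning to the user.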
## Challenges we ran into
Lack of WiFi was the biggest challenge, along with putting together all of our APIs, training our ML algorithm, and deciding on a platform for interaction.
## Accomplishments that we're proud of
We've created something really meaningful that can actually classify news. We're proud of the work we put in and our persistence through many caffeinated hours. We can't wait to show our project to others who are interested in learning more about their news!
## What we learned
How to integrate Google APIs into our Flask backend, and how to work with speech capability.
## What's next for Clara AI
We want to improve upon the application by properly distributing it to the right channels. One of our team members is part of a group of students at UC Berkeley that builds these types of apps for fun, including BotCheck.Me and Newsbot. We plan to continue this work with them.
|
partial
|