Columns:
- hackathon_id: int64 (1.57k–23.4k)
- project_link: string (30–96 chars)
- full_desc: string (1–547k chars)
- title: string (1–60 chars)
- brief_desc: string (1–200 chars)
- team_members: string (2–870 chars)
- prize: string (2–792 chars)
- tags: string (2–4.47k chars)
- __index_level_0__: int64 (0–695)
9,982
https://devpost.com/software/flora
Wireframes: Homepage, About Page, Community Landing Page, Open Chatbot Page, Forum Post Page. Flora is a community-based website dedicated to breaking the taboo around womxn’s sexual & reproductive health. We believe that womxn should be able to get the care they need without fear of judgment from others. We know how embarrassing and difficult it can be to talk about these topics - trust us, we’ve been there. The Flora team consists of four driven and passionate women - Anika, Doris, Deborah and Isabel. This project was inspired by two larger, current problems: womxn's inability to have regular checkups with gynecologists due to the COVID-19 pandemic, and womxn's health being overlooked in school education and deemed "inappropriate" to talk about. Being womxn of color, we experience additional obstacles related to our sexual & reproductive health. From stigmas within POC families and communities, to the implicit biases against POC womxn perpetuated by health professionals, we are extremely aware that these problems still exist in even the most developed societies. We created Flora to address these issues and create a safe space for all womxn to talk about problems related to their sexual and reproductive health. One of the biggest challenges we faced as a team was narrowing down our idea so that it is both unique and effective. The broader topic we started with was womxn’s health, and it was hard to narrow down because there are so many ways to approach it. We originally wanted the website to be more educational, but we didn’t know how to deliver information without just regurgitating words from Google searches. Ultimately, we decided to create a community-based platform so that people & doctors can directly interact with one another.
Our second biggest challenge was learning how to use new technology - while all four of us have had coding experience, we decided to branch out beyond our comfort zones and use new tools such as Dialogflow. While we struggled at times, we were able to create a functional chatbot and website. Overall, this experience was amazing for the four of us, and we'll always remember it. Built With: bootstrap, css, git, github, google-cloud, html, javascript, react. Try it out: github.com
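The chatbot described above routes a user's question to an intent and answers from a canned response. A toy pure-Python illustration of that idea follows; the intents, keywords, and replies are invented for illustration, and this is a sketch of the concept, not the Dialogflow API Flora actually uses:

```python
# Toy intent matcher: score each intent by keyword overlap with the
# user's question and reply with the best match. All intents and
# responses here are hypothetical examples, not Flora's real content.
INTENTS = {
    "find_doctor": ({"doctor", "gynecologist", "appointment"},
                    "You can browse verified professionals in the forum."),
    "ask_anonymous": ({"anonymous", "private", "judgment"},
                      "All questions can be posted anonymously."),
}

def reply(question):
    words = set(question.lower().split())
    # Pick the intent whose keyword set overlaps the question the most.
    best = max(INTENTS, key=lambda name: len(INTENTS[name][0] & words))
    if not INTENTS[best][0] & words:
        return "Sorry, I didn't catch that. Could you rephrase?"
    return INTENTS[best][1]

print(reply("how do I book a gynecologist appointment"))
```

A real Dialogflow agent adds ML-based intent classification, entity extraction, and conversation context on top of this basic routing idea.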
Flora
Flora is a safe space for womxn to ask their most personal sexual health questions & get answers from professionals - all from the comfort of their own homes!
['Isabel Ojeda', 'Doris Zhou', 'Deborah Mepaiyeda', 'Anika Prova']
['Accenture: Most Innovative Use of Tech for Good']
['bootstrap', 'css', 'git', 'github', 'google-cloud', 'html', 'javascript', 'react']
7
9,982
https://devpost.com/software/aibo
Gallery: video chatting capabilities with the Twilio API; Dashboard; Matches; Dendrogram visualization; Scatter plot of clustering algorithm; Schedule; SlackBot; Kudos Bot; Meeting Bot. (Our video is in the comments - thanks for understanding, Wi-Fi issues!) Inspiration As a group of four students who had completed four months of online school and were going into our second internship - our first fully remote one - we were all nervous about how our internships would transition to remote work. When reminiscing about pain points we faced in the transition to an online work term this past March, the one we all agreed on was a lack of connectivity and loneliness. Trying to work alone in one's bedroom - after experiencing life in the office, where colleagues were a shoulder's tap away for questions about work, amid the noise of keyboards clacking and people zoned into their work - is extremely challenging and demotivating, which decreases happiness and energy, and thus productivity (which decreases energy, and so on...). Having a mentor and steady communication with our teams is something we all valued immensely during our first co-ops. In addition, some of our workplaces had designated exercise times, or even pre-planned one-on-one activities, such as manager-coop lunches or walk breaks with company walking groups. These activities and rituals bring structure into a sometimes mundane day, which allows the brain to recharge and return to work fresh and motivated. Upon the transition to working from home, we've all found that some days we'd work through lunch without even realizing it, and some days we would be endlessly scrolling through Reddit, as there would be no one there to check in on us and make sure that we were not blocked. Our once much-too-familiar workday structure seemed to completely disintegrate when there was no one there to introduce structure, hold us accountable, and gently enforce proper, suggested breaks.
We took these gestures for granted in person, but now they seemed like a luxury - almost impossible to attain. After doing research, we noticed that we were not alone: a 2019 Buffer survey asked users to rank their biggest struggles working remotely, and unplugging after work and loneliness were the most common (22% and 19% respectively). https://buffer.com/state-of-remote-work-2019 We set out to create an application that would facilitate that same type of connection between colleagues and make remote work a little less lonely and socially isolated. We were also inspired by our own recent online term, finding that we had been inspired and motivated when we were held accountable by our friends through tools like shared Google Calendars and Notion workspaces. One of the hackathon challenges we wanted to enter, 'RBC: Most Innovative Solution' - addressing a pain point associated with working remotely in an innovative way - captured the issue we were trying to solve perfectly. Therefore, we decided to develop aibo, a centralized application which helps those working remotely stay connected, stay accountable, and maintain relationships with their co-workers, all of which improve a worker's mental health (which in turn has a direct positive effect on their productivity). What it does Aibo, meaning "buddy" in Japanese, is a suite of features focused on increasing the productivity and mental wellness of employees. We focused on features that allowed genuine connections in the workplace and helped to motivate employees. First and foremost, Aibo uses a matching algorithm to match compatible employees together based on career goals, interests, roles, and time spent at the company, following the completion of a quick survey.
These matchings occur multiple times over a customized timeframe selected by the company's host (likely the People Operations team), to ensure that employees receive a wide range of experiences in this process. Once you have been matched with a partner, you are assigned weekly meet-ups with your partner to build that connection. Using Aibo, you can video call with your partner and create a to-do list together; by developing this list together, you can bond over common tasks despite potentially having seemingly very different roles. Partners would have two meetings a day: one in the morning, where they go over to-do lists and goals for the day, and one in the evening, to track progress over the course of that day and note tasks that need to be carried over to the following day. How We built it This application was built with React, JavaScript and HTML/CSS on the front-end, along with Node.js and Express on the back-end. We used the Twilio chat room API along with Autocode to store our server endpoints and enable a Slack bot notification that POSTs a message in your specific buddy Slack channel when your buddy joins the video calling room. In total, we used four APIs/tools for our project: the Twilio chat room API; the Autocode API; the Slack API for the Slack bots; and Microsoft Azure for the machine learning algorithm. When we were creating our buddy app, we wanted to find an effective way to match partners together. After looking over a variety of algorithms, we decided on the K-means clustering algorithm. This algorithm is simple in its ability to group similar data points together and discover underlying patterns. K-means looks for a set number of clusters within the data set. This was my first time working with machine learning, but luckily, through Microsoft Azure, I was able to create a working training and inference pipeline.
The dataset marked the user’s role and preferences and created n/2 clusters, where n is the number of people searching for a match. This API was then deployed and tested on a web server. Although we weren't able to actively test this API on incoming data from the back-end, this is something we are looking forward to implementing in the future. Working with ML was mainly trial and error, as we had to experiment with a variety of algorithms to find the optimal one for our purposes. Upon working with Azure for a couple of hours, we decided to pivot towards another clustering algorithm to group employees together based on their answers to the form they fill out when they first sign up on the aibo website. We looked into PuLP, a Python LP modeler, and then looked into hierarchical clustering. This seemed similar to our initial approach with Azure, and after looking into the advantages of this algorithm over others for our purpose, we decided to choose it for clustering the form responses. Some pros of hierarchical clustering: there is no need to specify the number of clusters up front - the algorithm determines this for us, which is useful as it automates sorting through the data to find similarities in the answers. Hierarchical clustering was also quite easy to implement in a Spyder notebook, and the dendrogram produced was very intuitive and helped me understand the data in a holistic way. The type of hierarchical clustering used was agglomerative clustering, or AGNES. It's known as a bottom-up algorithm: it starts from singleton clusters, and pairs of clusters are successively merged until all clusters have been merged into one big cluster containing all objects. To decide which clusters should be combined and which should be divided, we need methods for measuring the similarity between objects. I used Euclidean distance to calculate this (dis)similarity information.
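The bottom-up merging described above can be sketched in a few lines of pure Python. This is a toy stand-in for the team's Azure/Spyder pipeline, not their actual code: the survey vectors, the average-linkage choice, and stopping at a fixed k are illustrative assumptions.

```python
import math
from itertools import combinations

def agnes(points, k):
    """Agglomerative (AGNES) clustering: start with singleton clusters,
    repeatedly merge the closest pair (average-linkage Euclidean
    distance), and stop when k clusters remain."""
    clusters = [[i] for i in range(len(points))]

    def linkage(c1, c2):
        # Average Euclidean distance between all cross-cluster pairs.
        return sum(math.dist(points[i], points[j])
                   for i in c1 for j in c2) / (len(c1) * len(c2))

    while len(clusters) > k:
        a, b = min(combinations(range(len(clusters)), 2),
                   key=lambda p: linkage(clusters[p[0]], clusters[p[1]]))
        merged = clusters[a] + clusters[b]
        clusters = [c for i, c in enumerate(clusters)
                    if i not in (a, b)] + [merged]
    return clusters

# Hypothetical survey answers encoded as numeric vectors:
answers = [(1, 0, 2), (1, 0, 1), (5, 4, 4), (5, 5, 4)]
print(agnes(answers, 2))  # two clusters: respondents {0,1} and {2,3}
```

Cutting the merge tree at different heights instead of a fixed k is what produces the dendrogram view the team used to inspect the matches.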
This project was designed solely using Figma, with the illustrations and the product itself designed there. These designs required hours of deliberation and research to determine the customer requirements and engineering specifications, to develop a product that is accessible and could be used by people in all industries. In terms of determining which features we wanted to include in the web application, we carefully read through the requirements for each of the challenges we wanted to compete in and decided to create an application that satisfied all of them. After presenting our original idea to a mentor at RBC, we learned more about remote work at RBC and, having not yet completed an online internship ourselves, about the pain points faced by online workers, such as isolation and lack of feedback. From there, we were able to select the features to integrate, including the Task Tracker, Video Chat, Dashboard, and Matching Algorithm, which are explained in further detail later in this post. Technical implementation for Autocode: Using Autocode, we were able to easily and successfully link popular APIs like Slack and Twilio to ensure the productivity and functionality of our app. Autocode source code: https://autocode.com/src/mathurahravigulan/remotework/

Creating the Slack bot:

```javascript
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});

/**
 * An HTTP endpoint that acts as a webhook for HTTP(S) request events
 * @returns {object} result Your return value
 */
module.exports = async (context) => {
  console.log(context.params);
  if (context.params.StatusCallbackEvent === 'room-created') {
    await lib.slack.channels['@0.7.2'].messages.create({
      channel: `#buddychannel`,
      text: `Hey! Your buddy started a meeting! Hop on in: https://aibo.netlify.app/ and enter the room code MathurahxAyla`
    });
  }
  let result = {};
  result.message = `Welcome to Autocode! 😊`;
  return result;
};
```

Connecting Twilio to Autocode:

```javascript
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
const twilio = require('twilio');
const AccessToken = twilio.jwt.AccessToken;
const { VideoGrant } = AccessToken;

// Create a Twilio access token from environment credentials.
const generateToken = () => {
  return new AccessToken(
    process.env.TWILIO_ACCOUNT_SID,
    process.env.TWILIO_API_KEY,
    process.env.TWILIO_API_SECRET
  );
};

// Grant the token access to a specific video room when one is provided.
const videoToken = (identity, room) => {
  let videoGrant;
  if (typeof room !== 'undefined') {
    videoGrant = new VideoGrant({ room });
  } else {
    videoGrant = new VideoGrant();
  }
  const token = generateToken();
  token.addGrant(videoGrant);
  token.identity = identity;
  return token;
};

/**
 * An HTTP endpoint that acts as a webhook for HTTP(S) request events
 * @returns {object} result Your return value
 */
module.exports = async (context) => {
  console.log(context.params);
  const identity = context.params.identity;
  const room = context.params.room;
  const token = videoToken(identity, room);
  return { token: token.toJwt() };
};
```

From the product design perspective, certain design choices are worth explaining: https://www.figma.com/file/aycIKXUfI0CvJAwQY2akLC/Hack-the-6ix-Project?node-id=42%3A1 As shown in the prototype, the user has full independence to move through the designs as one would on a typical website, and this supports the non-sequential flow of the upper navigation bar, as each feature does not need to be viewed in a specific order.
As Slack is a common productivity tool in remote work and we were participating in the Autocode challenge, we chose Slack as the alerting channel: sending text messages to a phone could be expensive and could distract the user and break their workflow, which is why Slack has been integrated throughout the site. The to-do list shared between the pairing has been designed in a simple, dynamic way that allows both users to work together (building a relationship) to create a list of common tasks, and to duplicate this same list to their individual workspace to add tasks that could not be shared with the other (such as confidential information within the company). In terms of the overall design decisions, I made an effort to create each illustration by hand simply using Figma and the trackpad on my laptop! Potentially a non-optimal way of doing so, but this allowed us to be very creative in our designs and bring individuality and innovation to them. The website itself relies on consistency in terms of colours, layouts, buttons, and more - and by developing these components to be used throughout the site, we've built a modern and coherent website. Challenges We ran into Some challenges that we ran into were: using data science and machine learning for the very first time ever - we were definitely overwhelmed by the different types of algorithms out there, but we were able to persevere and create something amazing; React, which was difficult for most of us to use at the beginning, as only one of our team members had experience with it, though by the end we all felt a little more confident with this tech stack and front-end development; and lack of time - there were a ton of features we were interested in (like user authentication and a Google Calendar integration), but for the sake of time we had to abandon those and focus on the more pressing features that were integral to our vision for this hack.
These, however, are features I hope we can complete in the future. We learned how to successfully scope a project and deliver on the technical implementation. Accomplishments that We're proud of Created a fully functional end-to-end full-stack application, incorporating both the front-end and back-end, to enable shared to-do lists and interactive video chat between the two participants. I'm glad I discovered Autocode, which made this process simpler (shoutout to Jacob Lee, a mentor from Autocode, for the guidance). Solving an important problem that affects an extremely large number of individuals - according to investmentexecutive.com: StatsCan reported that five million workers shifted to home working arrangements in late March. Alongside the 1.8 million employees who already work from home, the combined home-bound employee population represents 39.1% of workers. https://www.investmentexecutive.com/news/research-and-markets/statscan-reports-numbers-on-working-from-home/ From doing user research we learned that people can feel isolated when working from home and miss the social interaction and accountability of a desk buddy. We're solving two problems in one: tackling social isolation and improving worker mental health, while also increasing productivity, as their buddy will keep them accountable! Creating a working matching algorithm for the first time under a time crunch, and learning more about Microsoft Azure's machine learning capabilities. Creating all of our icons/illustrations from scratch using Figma! What We learned How to create and trigger Slack bots from React. How to have a live video chat on a web application using Twilio and React hooks. How to use a hierarchical clustering algorithm (agglomerative clustering) to create matches based on inputted criteria. How to work remotely in a virtual hackathon, and what tools help us work remotely! What's next for aibo We're looking to improve our pairing algorithm.
I learned that 36 hours is not enough time to create a new Tinder algorithm, and that over time these pairings can be improved and perfected. We're looking to code more screens, add user authentication to the mix, and integrate more test cases in the designs rather than using Figma prototyping to prompt the user. It is important to consider the security of the data as well; not all teams can discuss tasks at length due to the specifics of their work. That is why we encourage users to create a simple to-do list with their partner during their meeting and use their best judgement to keep it vague. In the future, we hope to incorporate machine learning algorithms that take in data from the user about whether their project is under NDA, and if so, provide warnings about sensitive information as the user types. Add a dashboard! As can be seen in the designs, we'd like to integrate a per-user dashboard that pulls data from different components of the website, such as your match information and progress on your task tracker/to-do lists. This feature could be highly effective at optimizing productivity, as the user simply has to open one page to get a high-level view of both. Create our own Slack bot to deliver individualized kudos to a co-worker, and pull this data onto a Kudos board on the website so all employees can see how their coworkers are being recognized for their hard work, which can act as a motivator for all employees. Built With: apis, azure, bootstrap, css3, express.js, figma, html5, ml, node.js, pulp, python, react, slack, spyder, twilio. Try it out: aibo.netlify.app, github.com
aibo
Build genuine connections at work with aibo.
['Ayla Orucevic', 'Mathurah Ravigulan', 'Julia Turner']
['RBC: Most Innovative Solution']
['apis', 'azure', 'bootstrap', 'css3', 'express.js', 'figma', 'html5', 'ml', 'node.js', 'pulp', 'python', 'react', 'slack', 'spyder', 'twilio']
8
9,982
https://devpost.com/software/elevate-lfeun9
Problem and Challenge Achieving 100% financial inclusion, where all have access to financial services, remains a difficult challenge. In particular, a huge percentage of unbanked adults are women [1]. Various barriers worldwide prevent women from accessing formal financial services, including lower levels of income, lack of financial literacy, time and mobility constraints, as well as cultural constraints and an overall lack of gender parity [1]. With this problem in mind, our team wanted to take on Scotiabank's challenge to build a FinTech tool/hack for females. Our Inspiration Inspired by LinkedIn, Ten Thousand Coffees, and the Forte Foundation, we wanted to build a platform that combines networking opportunities, mentorship programs, and learning resources on personal finance management and investment opportunities to empower women in managing their own finances, thereby increasing the financial inclusion of women. What it does The three main pillars of Elevate consist of safe community, continuous learning, and mentor support, with features including personal financial tracking. Continuous Learning Based on the participant's interests, the platform will suggest suitable learning tracks available on the platform. The participant will be able to keep track of their learning progress and apply the lessons learned in real life, for example by tracking their personal financial activity. Safe Community The Forum will allow participants to post questions from their learning tracks, share current financial news, or discuss any relevant financial topics. Upon signing up, mentors and mentees must abide by the guidelines for respectful and appropriate interactions between parties; accounts will be removed if violations occur. Mentor Support Elevate pairs the participant with a mentor who has expertise in the area that the mentee wishes to learn more about.
The participant can schedule sessions with the mentor to discuss financial topics that they are unsure about, or questions about their lessons on the Elevate platform. Personal Financial Activity Tracking Elevate participants will be able to track their financial expenses. They will receive notifications and analytics to help them achieve their financial goals. How we built it Before we started implementing, we prototyped the workflow with Webflow. We then built the platform as a web application using HTML, CSS, and JavaScript, and collaborated in real time using git. Challenges we ran into There were a lot of features to incorporate. However, we were able to demonstrate the core concept of our project: to make finance more inclusive. Accomplishments that we're proud of The idea of incorporating several features into one platform. Deploying a demo web application. The sophisticated design of the interface and flow of navigation. What we learned We learned about gender parity in finance, and how technology can remove barriers and create a strong and supportive community for all to understand the important role that finance plays in their lives. What's next for Elevate Partner with financial institutions to create and curate a list of credible learning tracks/resources for mentees. Recruit financial experts as mentors to help enable the program. Add credit/debit cards to the system to make financial tracking easier; security issues would need to be addressed. Strengthen and implement the backend of the platform to include instant messaging and an admin page to monitor participants. Resources and Citation [1] Toronto Centre (2020). Removing the Barriers to Women's Financial Inclusion. Toronto: Toronto Centre. Available at: https://res.torontocentre.org/guidedocs/Barriers%20to%20Womens%20Financial%20Inclusion.pdf Built With: css, glitch, html, javascript. Try it out: elevate-platform.glitch.me
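The financial activity tracking described above - log expenses, then notify the user when a goal is at risk - can be sketched as a small class. This is an illustrative sketch, not Elevate's code: the 90% alert threshold and the category field are assumptions.

```python
# Toy sketch of Elevate's personal financial activity tracking:
# expenses accumulate against a monthly budget, and a notification
# is queued once spending crosses a threshold (here 90%, a made-up
# value for illustration).
class ExpenseTracker:
    ALERT_FRACTION = 0.9

    def __init__(self, monthly_budget):
        self.budget = monthly_budget
        self.spent = 0.0
        self.alerts = []

    def log(self, amount, category):
        self.spent += amount
        if self.spent > self.ALERT_FRACTION * self.budget:
            self.alerts.append(
                f"Heads up: {self.spent:.2f} of {self.budget:.2f} spent "
                f"(latest: {amount:.2f} on {category})")

tracker = ExpenseTracker(monthly_budget=1000)
tracker.log(500, "rent")
tracker.log(450, "groceries")   # crosses 90% -> alert queued
print(tracker.alerts)
```

In the deployed platform the alerts would feed the notifications and analytics the write-up mentions, rather than a plain list.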
Elevate
Platform to elevate and promote female financial inclusion.
['Catherine Yeh', 'Tiffany Yeh']
['Scotiabank: Best FinTech Hack for Young Women']
['css', 'glitch', 'html', 'javascript']
9
9,982
https://devpost.com/software/hackthe6ix-globalcatchchat
Play catch with just about anything Catch! (Around the World) Our Inspiration Catch has to be one of our favourite childhood games. Something about just throwing and receiving a ball does wonders for your serotonin. Since all of our team members have relatives throughout the world, we thought it'd be nice to play catch with those relatives we haven't seen due to distance. Furthermore, we're all learning to social distance (physically!) during this pandemic, so who says we can't play a little game while social distancing? What it does Our application uses AR and Unity to let you play catch with another person from somewhere else on the globe! You can tap a button to throw a ball (or a random object) off into space, and the person you send the ball/object to will be able to catch it and throw it back. We also allow users to chat with one another using our web-based chat application, so they can keep some commentary going while they play catch. How we built it For the AR functionality of the application, we used Unity with ARFoundation and ARKit/ARCore. To record the user sending the ball/object to another user, we used a Firebase Realtime Database back-end that allowed users to create and join games/sessions and communicated when a ball was "thrown". We also utilized echoAR to create/instantiate different 3D objects that users can choose to throw. For the chat application, we developed it using Python Flask, HTML, and Socket.io to create bi-directional communication between the web user and the server. Challenges we ran into Initially we had a separate idea for what we wanted to do in this hackathon. After a couple of hours of planning and developing, we realized that our goal was far too complex and too difficult to complete in the given time frame. As such, our biggest challenge was settling on a project that was doable within the time of this hackathon.
This ties into another challenge we ran into: initially creating the application and the learning portion of the hackathon. We did not have experience with some of the technologies we were using, so we had to overcome the inevitable learning curve. There was also some difficulty learning how to use the echoAR API with Unity, since it had a specific method of generating the AR objects. However, we were able to use the tool without digging too far into its code. Accomplishments Working Unity application with AR. Use of echoAR and integrating it with our application. Learning how to use Firebase. Creating a working chat application between multiple users. Built With: ar, arcore, arfoundation, arkit, echoar, firebase, gcp, html, python, socket.io, unity. Try it out: github.com github.com drive.google.com
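The create/join/throw session flow described above can be modeled with a small in-memory store. This is a local stand-in for the Firebase Realtime Database the team used, not their actual schema; all field names and the room-code format are illustrative assumptions.

```python
import time
import uuid

# In-memory stand-in for the Firebase Realtime Database back-end:
# each game session is keyed by a room code, and a "throw" is stored
# as an event that the other player's client polls for.
class CatchSessionStore:
    def __init__(self):
        self.games = {}

    def create_game(self):
        code = uuid.uuid4().hex[:6]   # short shareable room code
        self.games[code] = {"players": [], "events": []}
        return code

    def join(self, code, player):
        self.games[code]["players"].append(player)

    def throw(self, code, sender, obj="ball"):
        # Record the throw so the receiving client can animate the catch.
        self.games[code]["events"].append(
            {"type": "thrown", "from": sender, "object": obj,
             "ts": time.time()})

    def pending_throws(self, code, player):
        # Throws from anyone other than `player` are waiting to be caught.
        return [e for e in self.games[code]["events"]
                if e["from"] != player]
```

In the real app this state lives in Firebase, so both Unity clients see the "thrown" event via the database's change listeners instead of polling a local dict.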
Catch! (Around-The-World)
Play catch, the game everyone loves with anybody around the world while socially distancing!
['Adil Kapadia', 'Tapish Jain', 'Jay Bhagat', 'Yathindra Shivakumar']
['echoAR: Best AR/VR application using the echoAR platform']
['ar', 'arcore', 'arfoundation', 'arkit', 'echoar', 'firebase', 'gcp', 'html', 'python', 'socket.io', 'unity']
10
9,982
https://devpost.com/software/inspirear
Task page. User registration. Prize claim. Back-end database. Model rendered using AR within the companion app. Inspiration People struggle to work effectively in a home environment, so we were looking for ways to make it more engaging. Our team came up with the idea for InspireAR because we wanted to design a web app that could motivate remote workers to be more organized in a fun and interesting way, and augmented reality seemed fascinating to us. What it does InspireAR consists of a website, as well as a companion app. The website allows users to set daily goals at the start of the day. Upon completing all of their goals, the user is rewarded with a 3-D object that they can view immediately using their smartphone camera. The user can additionally combine their earned models within the companion app. The app allows the user to manipulate the objects they have earned within their home using AR technology. This means that as the user completes goals, they can build their dream office within their home using our app and AR functionality. How we built it Our website is implemented using the Django web framework. The companion app is implemented using Unity and Xcode. The AR models come from echoAR. Languages used throughout the project include Python, HTML, CSS, C#, Swift and JavaScript. Challenges we ran into Our team faced multiple challenges, as this is our first time ever building a website. Our team also lacked experience in creating back-end relational databases and in Unity. In particular, we struggled with orienting the AR models within our app. Additionally, we spent a lot of time brainstorming different possibilities for user authentication. Accomplishments that we're proud of We are proud of our finished product, and the website is its strongest component. We were able to create an aesthetically pleasing, bug-free interface in a short period of time and without prior experience.
We are also satisfied with our ability to integrate echoAR models into our project. What we learned As a team, we learned a lot during this project. Not only did we learn the basics of Django, Unity, and databases, we also learned how to divide tasks efficiently and work together. What's next for InspireAR The first step would be increasing the number and variety of models, to give the user more freedom over the type of space they construct. We have also thought about expanding into the VR world using products such as Google Cardboard and other accessories. This would give the user the freedom to explore more interesting locations than just their living room. Built With: c#, css, django, echoar, html5, python, swift, unity, xcode. Try it out: github.com
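The core reward loop described above - set daily goals, and unlock a 3-D model for the AR companion app once all of them are complete - can be sketched as follows. This is a toy illustration, not InspireAR's Django code; the model pool and the one-reward-per-day rule are invented assumptions.

```python
import random

# Hypothetical pool of echoAR-style reward models (illustrative names).
MODEL_POOL = ["desk", "lamp", "plant", "monitor"]

class DailyGoals:
    """Track one day's goals; completing all of them unlocks a reward
    model for the AR companion app."""
    def __init__(self, goals):
        self.goals = {g: False for g in goals}
        self.earned_models = []

    def complete(self, goal):
        self.goals[goal] = True
        if all(self.goals.values()):
            # All goals done -> grant a random model from the pool.
            self.earned_models.append(random.choice(MODEL_POOL))

day = DailyGoals(["write report", "attend stand-up"])
day.complete("write report")        # not all done yet, no reward
day.complete("attend stand-up")     # everything done -> model earned
print(day.earned_models)
```

In the actual product this state would live in the Django back-end's relational database, with the earned models synced to the Unity companion app.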
InspireAR
A website that encourages users to be organized by allowing them to set goals. Upon completion, they are rewarded with an object which is used in our AR companion app to build their own home office.
['Connor Burns', 'Nathan DeGoey', 'Aryan Dhar', 'SalCern']
['echoAR: Best AR/VR application using the echoAR platform']
['c#', 'css', 'django', 'echoar', 'html5', 'python', 'swift', 'unity', 'xcode']
11
9,982
https://devpost.com/software/smartcaliper
SmartCaliper logo. SmartCaliper and custom Blender addon. Caliper and circuit. CAD integration. Blender SmartCaliper annotation tool addon. Hardware circuit diagram. 1. Inspiration The SmartCaliper is a fresh approach to caliper design, disrupting a stagnant market. We designed the SmartCaliper after seeing firsthand in industry the limitations of modern calipers; hence we present a fresh vision for a technology that can reduce error and increase productivity for engineers and hobbyists. The SmartCaliper allows measurement data to be digitally transferred from the caliper to the computer. Additionally, we offer a novel software package that integrates the SmartCaliper to not only analyze part tolerances against the 3D model, but also facilitate the computer-aided design (CAD) process. To demonstrate the SmartCaliper's application in industry, one of our team members offers two first-hand experiences, from an aerospace company and a mechanical engineering company. First, in aerospace, working with satellite hardware, there are a number of precision parts which all require their dimensions to be verified. During the assembly process, there was a case of one of the parts not fitting correctly. Consequently, an engineer and I began taking measurements, in the clean chamber, of the part that could be causing the fit to deviate from the tolerance specification. These measurements are often recorded from the caliper’s screen and then written onto a piece of paper, usually a notebook. This is then brought to a computer and compared. What is bizarre about this is the use of paper: it is counter-intuitive to record measurements from a digital caliper onto paper only to then check them against a digital CAD model. This creates a source of error when recording the measurements and leaves no concrete log of the measurements for later use, unless they are digitized from the notebook.
In this case, the SmartCaliper would have streamlined this process, since it seamlessly transfers caliper data to Blender with our custom addon installed, which enables the user to annotate and record the tolerance information directly into the 3D Wavefront OBJ file itself. When the annotations need to be accessed again in the future, the Blender addon is also capable of loading and editing them. The second example is from working with tight-tolerance parts at a local mechanical engineering company. Similar to the one-off components for space, large production-run parts often require dimensions to be verified and fits checked. There are numerous cases where commercial off-the-shelf (COTS) parts are purchased without drawings or with inaccurate drawings. As a result, when engineers have to turn those parts into 3D models, they often have to measure each dimension with their calipers and manually type those dimensions into the CAD software. Despite the fact that the engineers had access to high-end Mitutoyo calipers capable of sending data to computers over a serial link, they still often do not use the data-transfer feature. This is because the software that handles the data transfer and its integration with the CAD environment is unintuitive and overall clunky to use, and so fails to expedite the 3D modeling process. The SmartCaliper aims to resolve these existing issues by taking the form factor of an inexpensive upgrade kit that can be installed on any digital caliper. Our software integration is also extremely user-friendly, since it only requires the user to press one button to send the data to the desired textbox in any CAD software. Overall, the caliper industry has not pushed to innovate on its technologies, having no pressure from competitors. Ultimately, the goal of the SmartCaliper is to improve the workflow of calipers in the modern digital age and push manufacturers to innovate again. 2.
Market Application The caliper market has largely been able to avoid massive change for the past decade, which translates to limited compatibility with the digitally driven engineering workflow of today. If engineers want to directly log measurements from their caliper in a digital format, their only option is to hope they have an exposed data link connector and the required cable. They then need to wire this into their computer and use clunky software to log the measurements. The SmartCaliper offers a simple, low-cost alternative for both engineers and hobbyists, improving the overall caliper experience without attempting to reinvent the wheel. Today, almost every digital caliper has a data link port, which is enclosed within the caliper. We plan to sell conversion kits for many of the major caliper manufacturers such as Mitutoyo, Neiko, Starrett, and more. This kit would consist of the circuit shown in the technical summary, miniaturized into a small PCB, along with a simple plastic case that mounts to the rear of the caliper body, as well as a cable that hooks into the caliper's data port. We also offer our intuitive software package to anyone who purchases this kit. Since this solution consists of very few simple parts, it would be easy to scale production and move to market with the technology seen in this hackathon. The end product would be very affordable, costing less than most calipers, due to the elegance of the design. 3. Technical Summary 3.1 Hardware Almost every digital caliper on the market today has a connection port on the PCB that is used to stream measurement data to the computer over a special USB adapter cable. These cables are often upwards of $100 and, strangely, most of those calipers do not even make the port available, opting to use it for conducting quality control and calibration during production.
The first step of the project was to acquire a suitable caliper; we chose to use one we had on hand, the Neiko 01407A. This caliper proved to have a connection port tucked away under the body and required a Dremel to cut a slot for wires to be attached. These three wires were soldered to the caliper's GND, Clock, and Data lines. They were then run to two transistors, which handled the 1.5 V to 3.3 V logic level conversion from the caliper to the microcontroller. The microcontroller chosen was an ESP32 breakout module, since it would allow for the high clock rate required to parse the data from the caliper and enable wireless data transfer. A pushbutton was also added to allow the user to choose when a measurement should be recorded without having to touch the computer. 3.2 Firmware Using an oscilloscope, the clock and data lines were probed to determine how the information was encoded. We discovered that every ~40 ms, the caliper would send three bytes to the computer. Each bit was marked with the falling edge of the clock line. The first 16 bits were the measurement bits (LSB to MSB), followed by three high bits, then one bit to encode the sign, followed by four remaining high bits. An interrupt was used to detect the edge of the clock pulse and handle recording the values. An additional interrupt was used to detect the user button press and send the measurement via either serial or wirelessly, depending on how the system is configured.
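For illustration, the frame layout described above can be decoded in a few lines of Python. This is a host-side sketch based on our scope observations, not the actual ESP32 firmware (which does the same work inside an interrupt handler), and the raw count units depend on the caliper's resolution:

```python
def decode_frame(bits):
    """Decode one 24-bit caliper frame into a signed raw count.

    Frame layout (as observed on the scope):
    bits 0-15  : measurement count, LSB first
    bits 16-18 : always high
    bit  19    : sign (1 = negative)
    bits 20-23 : always high
    """
    if len(bits) != 24:
        raise ValueError("expected 24 bits")
    count = 0
    for i, b in enumerate(bits[:16]):  # LSB-first accumulation
        count |= (b & 1) << i
    sign = -1 if bits[19] else 1
    return sign * count
```

A reading of 123 counts would arrive as the 16 measurement bits LSB-first, followed by the fixed high bits and the sign bit.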
3.3 Software Package To maximize the potential of the SmartCaliper, we created a software package that focuses on enhancing the two main uses of a caliper: Function 1: Measuring dimensions of a part for the purpose of creating its 3D model Function 2: Measuring dimensions of a manufactured part to compare its dimensions with its 3D model 3.3.1 Function 1: When one desires to create a 3D CAD model of a physical part, the most tedious part of the process is probably measuring its dimensions with a caliper and then manually entering those dimensions into the CAD software. To streamline the dimensioning part of the CAD process, we created a feature in our software package that allows the user to send the caliper measurement directly to the CAD software at the press of a button. We have tested our program with several of the most popular CAD programs, including SOLIDWORKS, Fusion 360, and CATIA; however, it was designed to work with virtually any CAD software. The program uses the multiprocessing module of Python to run two functions in parallel. The first function is responsible for running the user-specified CAD software. The second function continuously runs a loop that checks for data sent from the caliper using the pySerial module; when a data packet is detected, it decodes the data and enters the correct caliper measurement into a textbox that the user clicks on in the CAD software. The program enters the measurement into the CAD software using the pynput module, which mimics specified keystrokes. 3.3.2 Function 2: After a part is manufactured, it is common practice to use a caliper to measure the dimensions of that part to determine its tolerance and the errors introduced during the manufacturing process. In order to determine the part's tolerance, it is necessary to compare the manufactured part with its original 3D CAD model.
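The Function 1 bridge described above can be sketched roughly as follows: a pure helper validates each serial line, and a loop (which needs pySerial and pynput, so those imports are deferred) types the reading into whatever textbox has focus. The packet format, port name, and baud rate here are assumptions for illustration, not the exact firmware protocol:

```python
def parse_packet(raw):
    """Turn a raw serial line from the caliper into the text to type.

    Assumes (hypothetically) one ASCII reading per line, e.g. b"-12.34\n";
    we strip whitespace and validate that it parses as a number.
    """
    text = raw.decode("ascii", errors="ignore").strip()
    float(text)  # raises ValueError on a garbled line
    return text

def run_bridge(port="COM3", baud=115200):
    """Hardware-facing loop; requires the pyserial and pynput packages."""
    import serial
    from pynput.keyboard import Controller
    keyboard = Controller()
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline()
            if line:
                # Types the reading into whichever textbox currently has focus
                keyboard.type(parse_packet(line))
```

In the real package this loop runs in a second process alongside the CAD program, as described above.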
To make this process faster and more intuitive, our software package includes a feature that enables the user to input the measured dimensions and make annotations directly onto the 3D CAD model. The user can view the original 3D CAD model in our software, then click on any two vertices/faces of the 3D model to generate a dimension to comment on. The software then stores any user-inputted comments, computes and logs the intended original dimension, and prompts the user to enter the part's actual measured dimension. Similar to Function 1, the user simply needs to click one button on the SmartCaliper to transfer the measured dimension into the software without the need to type. The user can make as many of these annotations as they desire, and all of that information is saved directly as comments in the 3D CAD model. The user can then use the software to load the 3D CAD model along with all of the saved annotations to edit or append new annotations in the future. To accomplish this, we used Blender as the main viewport environment, because Blender is open-source 3D software with extensive documentation for custom AddOn development. Its native support of Python, through the Blender 2.83.0 Python API, allowed us to integrate the program developed in Function 1 into our codebase. The code contains three files: an initialization file, a panel class file, and an operator class file. The initialization file is responsible for instantiating instances of all the classes. The panel class file contains the class definitions for all the custom GUI elements that our AddOn implements. The operator class file contains the functionality to actually execute all the desired actions (e.g. measuring the Euclidean distance between two selected vertices/faces, loading/saving annotations to the header of the .obj file, etc.). 4.
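Two of the operator actions mentioned above can be sketched in plain Python: the distance between selected vertices, and annotation persistence as comment lines in the .obj header (OBJ parsers ignore lines starting with '#'). The '#ANNOT' prefix is an illustrative convention, and inside Blender the coordinates would come from the mesh data via the Python API:

```python
import math

def distance(v1, v2):
    """Euclidean distance between two (x, y, z) vertices."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def save_annotation(obj_path, note, nominal, measured):
    """Prepend one annotation as a comment line in the .obj header."""
    line = f"#ANNOT {nominal:.3f} {measured:.3f} {note}\n"
    with open(obj_path, "r+") as f:
        body = f.read()
        f.seek(0)
        f.write(line + body)

def load_annotations(obj_path):
    """Return the stored annotation lines, stripped of the prefix."""
    with open(obj_path) as f:
        return [ln[7:].rstrip("\n") for ln in f if ln.startswith("#ANNOT ")]
```

Storing annotations as header comments keeps the file loadable by any conforming OBJ importer.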
Next Steps for SmartCaliper There are numerous hardware and software improvements we have in mind for the next stage of this project. 4.1 Hardware: The current SmartCaliper circuit is built on a breadboard, which is only meant for prototyping purposes. We are planning to condense the breadboard circuit onto a PCB that can be mounted to the rear of calipers. This can then be sold as a kit that will allow people to modify their existing digital calipers easily. The SmartCaliper currently uses a micro USB cable to transmit the measurement data via serial. However, our original intent in using an ESP32 module was to transmit the data wirelessly via WiFi; we simply did not have time to implement that during the hackathon. Therefore, we plan to enable wireless communication between the computer and the caliper. 4.2 Software: An immediate next step would be to add a GUI for the launch menu to enhance the user experience. We could also convert all the Python files into one executable file. An interesting idea we want to explore is adding a “fix” feature that corrects the 3D model in real time based on the errors between the theoretical dimension and the measured dimension. In Blender, the user can very easily manipulate the mesh, which means we could automate that process with the click of a button. We want to implement an ML online learning algorithm that allows the user to feed the error percentage value into the model every time the user adds an annotation. Over time, the online learning algorithm will establish an increasingly accurate model that predicts the errors before the part is manufactured. This model will also be tailored to each user, since it will only receive data from that user. Since each user's manufacturing environment is different (for instance, different users have 3D printers of different quality), it is highly beneficial to have models that adapt to each user's individual environment.
When a discrepancy has been measured between the CAD model and the manufactured part, there are many ways the digital model can be fixed to account for it. For instance, the user could simply translate the vertex, translate the entire polygon of which the vertex is a part, or even translate several of the surrounding polygons. A deep neural network could be used to learn how the user prefers to correct the CAD model in order to automate this feature as well. Recently, there have been promising results from applying Generative Adversarial Networks to 3D CAD models, so this is another type of model that could be considered. When two vertices or faces are selected, we want to be able to switch the imported OBJ file into X-ray mode and display a line indicating the distance being calculated. This would make the software even more user-friendly. Likewise, when the user selects an annotation, we want to display a coloured line on the imported 3D object to exhibit the measurement contained in the annotation. Built With arduino blender esp32 python solidworks Try it out github.com
SmartCaliper
An upgrade kit for digital calipers that sends measurement data directly to your computer, integrated into a program that allows you to annotate and check a part's tolerance against its 3D model.
['Victor Zhang', 'Ritvik Singh', 'Ethan Childerhose', 'Jingzhou Liu']
['1517 Fund: Most Novel Hack']
['arduino', 'blender', 'esp32', 'python', 'solidworks']
12
9,982
https://devpost.com/software/protest-rpa
Inspiration With the numerous social justice issues brought to light recently, demonstrations and protests have swept the world calling for reform. We as a team wanted to address this issue. Also, we have seen UiPath as a sponsor at a few hackathons now and wanted to try it out in a project. What it does Protest RPA scrapes the web for protest information and dates from articles, effortlessly converts raw data written by people into a machine-friendly format, and uploads the newly created file directly to Google Drive - all without human help. How we built it Protest RPA is heavily based on the UiPath software for scraping data, automating connections, exporting, and uploading to Google Calendar without human intervention. The bulk of the data processing is done through Excel queries and formulas to convert unformatted text into a computer-friendly file. Challenges we ran into We both had no prior experience with UiPath or other RPAs. It was both fun and challenging to work on a project that was entirely graphical, with no code. Accomplishments that we're proud of We learned a new technology and got everything polished and working smoothly. What we learned We learned how to use RPA software, something that is highly applicable to other projects and automation tasks in general. What's next for Protest RPA Next, we want to work on deployment. How can we get this tool into the hands of others? Built With excel google-calendar uipath
Protest RPA
Automate Protesting for Social Justice
['Matthews Ma', 'Samer Rustum']
['MLH: Best UiPath Automation Hack']
['excel', 'google-calendar', 'uipath']
13
9,982
https://devpost.com/software/techteach
Homepage Student View Teachers View and Data Analytics Inspiration I wanted to improve the virtual learning experience. Working from home and online education are going to be the new norm, and yet we are left with basic video calls that, ironically, leave us more disconnected from the content we are learning. I wondered why it took me two hours to finish a one-hour online lecture, while I and thousands of others could watch Twitch streamers for several hours, no problem. I believe the problem is lack of engagement. Watching streamers on Twitch is fun because viewers can actively react and engage with the streamer through chat. At the same time, the streamer is able to judge how well their content is doing based on how their chat is reacting. What it does TechTeach is a platform for students to easily provide live feedback during live and pre-recorded lectures. This is done through time-stamped reactions, where a student can let the professor know exactly when they don't understand a topic, when they do, and more. The data is saved to a database and presented to the teacher so they can better understand their audience and improve the overall online education experience. How I built it I created the front end in HTML, Bootstrap, and JS. I used the YouTube API and Django to design the timestamps and backend logic. I also used Plotly for the embedded data visualization. Challenges I ran into Working with Django (it was my first time) and trying to get Google Cloud NLP to add a smart Q&A feature (ran out of time for that). Accomplishments that I'm proud of Watching a 4-hour Django tutorial at 2x speed and understanding at least 40% of it. What I learned How to use Django. The ups and downs of working independently, and using multiple languages together. What's next for TechTeach Adding NLP to remove repeated questions and linking answers to time-stamped questions back to the students for easy viewing!
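The aggregation behind the teacher's view can be sketched by bucketing time-stamped reactions per minute before plotting; the function and field names here are illustrative, not the actual Django models:

```python
from collections import Counter

def bucket_reactions(reactions, bucket_seconds=60):
    """Group (timestamp_seconds, reaction) pairs into time buckets.

    Returns {bucket_start: Counter}, e.g. how many "confused" clicks
    landed in each minute of the lecture -- the view a teacher sees.
    """
    buckets = {}
    for ts, reaction in reactions:
        start = int(ts // bucket_seconds) * bucket_seconds
        buckets.setdefault(start, Counter())[reaction] += 1
    return buckets
```

The per-bucket counts can then be fed straight into a Plotly bar chart over lecture time.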
Built With bootstrap django html5 javascript plotly python youtube Try it out github.com
TechTeach
Twitch for Education
['Indika WIjesundera']
['MLH: Best Domain Registered with Domain.com']
['bootstrap', 'django', 'html5', 'javascript', 'plotly', 'python', 'youtube']
14
9,982
https://devpost.com/software/omakase-r6oipu
Inside the app you can see the recommended recipes and their instructions Taking a picture inside the app The progression of the algorithm to detect the food inside the users' fridge This shows the progression of food detection from the algorithm, the ingredients that were recognized, and the recipes found and suggested Omakase "I'll leave it up to you" Inspiration On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what good food can I make, and how? What It Does We have built an app that recommends recipes based on the food that is in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we are able to detect the food that the user has in their fridge and recommend recipes that use their ingredients. What We Learned Most of the members of our group were inexperienced in mobile app development and backend work. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more. How We Built It We started with an Android application with access to the user's phone camera. This app was created using Kotlin and XML. Android's ViewModel Architecture and the X library were used. The application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again to classify the objects in them into specific ingredients, while circumventing the API's imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID, and a set of recipes for that ingredient was obtained.
We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP. What We Are Proud Of We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we put in an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty-looking recipes, including a Caribbean black bean and fruit salad that uses oranges and salsa. Challenges We Faced Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API, since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult. What's Next We have big plans for our app in the future. Some next steps we would like to implement are allowing users to include their dietary restrictions and food preferences so we can better match the recommendations to the user. We also want to make this app available on smart fridges; currently, fridges like Samsung's have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest-expiring foods. Built With android flask google-vision heroku java numpy pandas python Try it out github.com
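The matching step described in the write-up can be sketched roughly as follows; this is a simplified stand-in for the intersection logic, keeping recipes whose ingredient lists are mostly covered by what was detected (the names and the 50% threshold are illustrative):

```python
def recommend(detected, recipe_ingredients, min_fraction=0.5):
    """Rank recipes by how many detected ingredients they use.

    detected: set of ingredient IDs found in the fridge
    recipe_ingredients: {recipe_name: set of ingredient IDs}
    Keeps recipes whose overlap with the fridge covers at least
    min_fraction of the recipe's own list, best matches first.
    """
    scored = []
    for name, needed in recipe_ingredients.items():
        overlap = detected & needed
        if needed and len(overlap) / len(needed) >= min_fraction:
            scored.append((len(overlap), name))
    return [name for _, name in sorted(scored, reverse=True)]
```

In the app this ranking runs on the server, with ingredient IDs coming from the Vision API results and recipe sets from the Food.com data.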
Omakase: Smartify Your Eating Experience
No more standing at the fridge wondering if you should just eat another sad sandwich. With Omakase, you can find amazing recipes that you can make with the ingredients in your fridge!
['Ben Natra', 'Matthew Leung', 'Sean Wu', 'Arsh Kadakia']
['MLH: Best use of Google Cloud']
['android', 'flask', 'google-vision', 'heroku', 'java', 'numpy', 'pandas', 'python']
15
9,982
https://devpost.com/software/worldclass-o6zpeh
Home Screen - World Lesson Splash Screen Sample Question Results Page Prize Popup User Persona User Flow Story behind the project When COVID-19 cancelled in-person classes and learning was forced to transition online, students were faced with many learning challenges, including decreased support from teachers, lack of daily structure, and general boredom. With these issues at the forefront of remote education, we created worldclass to help make the transition to remote learning for young kids easy, fun, and accessible. What is 🌎worldclass Worldclass is an online educational web platform that strives to provide an engaging and seamless learning experience for young students. By gamifying the eLearning experience, students can complete lessons to earn accessories for their “World”, allowing them to add fixtures like playgrounds, sandboxes, swings, and other things you can find in a schoolyard. Teachers can input their questions into a database, and these are served back as a series of mini-quizzes. The students then take these mini-quizzes to not only test their knowledge but also earn prizes for their personal world. We wanted to promote continuous learning, so if students want to improve on their result from a quiz, they have the option to redo it. How we built it We built our database using PostgreSQL. This includes tables for each of the students, teachers, lessons, questions, and answers. We then connected our database to Autocode. There, we created all of our endpoints in NodeJS. We built all of our front end in ReactJS, which leverages the different endpoints created in Autocode. These components were styled using HTML and CSS based on our designs from Figma.
Challenges we ran into Tech Challenges: learning how to make endpoints on Autocode; connecting Autocode with PostgreSQL. Design Challenges: due to time constraints, not being able to implement all the designs envisioned in development. Accomplishments that we're proud of One of the aspects we are most proud of in our product is the interface/visual design and user experience. We're also happy to have not only been introduced to Autocode in this hackathon, but also incorporated it into our hack. For some members, it was their first time coding in React, and it was a fun learning experience. Lastly, the way our team worked together and helped each other out in times of need is something we're all proud of. What's next for 🌎worldclass We see potential for 🌎worldclass to be at the forefront of ed-tech for younger audiences. Our hope for 🌎worldclass is to expand the platform to a wider audience. The timeframe of the hackathon had us prioritize specific goals in our project, but our vision for 🌎worldclass is to expand the interactivity on the students' side for the lessons. We also hope to include more student-to-student interaction. Our research showed that student-to-student interaction is the thing kids missed the most about remote learning, so giving students the ability to visit their classmates' Worlds is something we hope to include. Accessibility is of huge importance to us and is something we've wanted to address since the early stages of our project. In the future, we would love to incorporate customizable features/filters to aid with accessibility, such as font choice, size, colour use, and use of audio/text-to-speech. Github Repo: https://github.com/viet-quocnguyen/ht6ix Built With autocode figma node.js postgresql react Try it out www.figma.com github.com world-class.netlify.app
worldclass
Increasing accessibility to remote learning for young students 🌎
['Benjamin Ginzberg', 'Kelly Chong', 'Alison Wong', 'Viet Nguyen']
['Best Design']
['autocode', 'figma', 'node.js', 'postgresql', 'react']
16
9,982
https://devpost.com/software/blockoj-qnpcb7
BlockOJ Boundless creativity. What is BlockOJ? BlockOJ is an online judge built around Google's Blockly library that teaches children how to code. The library allows us to implement a code editor that lets the user program with various blocks (function blocks, variable blocks, etc.). On BlockOJ, users can sign up and use our lego-like code editor to solve instructive programming challenges! Solutions can be verified by pitting them against numerous test cases hidden on our servers :) -- simply click the "submit" button and we'll take care of the rest. Our lightning-fast judge, painstakingly written in C, will provide instantaneous feedback on the correctness of your solution (i.e. how many of the test cases did your program evaluate correctly?). Inspiration and Design Motivation Back in late June, our team came across the article announcing the "new Ontario elementary math curriculum to include coding starting in Grade 1." During Hack The 6ix, we wanted to build a practical application that can help our hard-working elementary school teachers deliver the coding aspect of this new curriculum. We wanted a tool that was Intuitive to use, Instructive, and most important of all Engaging Using the Blockly library, we were able to build a code editor that resembles building with LEGO: the block-by-block assembly process is procedural, and children can easily see the big picture of programming by looking at how the blocks interlock with each other. Our programming challenges aim to gamify learning, making it less intimidating and more appealing to younger audiences. Not only will children using BlockOJ learn by doing, but they will also slowly accumulate basic programming know-how through our carefully designed sequence of problems. Finally, not all our problems are easy. Some are hard (in fact, the problem in our demo is extremely difficult for elementary students).
In our opinion, it is beneficial to mix one or two difficult challenges into problemsets, for they give children the opportunity to gain valuable problem-solving experience. Difficult problems also create room for students to engage with teachers. Solutions are saved, so children can easily come back to a difficult problem after they gain more experience. How we built it Here's the tl;dr version. AWS EC2 PostgreSQL NodeJS Express C Pug SASS JavaScript We used a link shortener for our "Try it out" link because DevPost doesn't like URLs with ports. Built With amazon-ec2 amazon-web-services blockly c ec2 express.js javascript node.js postgresql pug sass Try it out is.gd github.com
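The actual judge is written in C for speed, but its core loop is simple enough to sketch in Python: run the submission against each hidden test case and compare whitespace-normalized output, as many online judges do. The executable path and timeout here are illustrative:

```python
import subprocess

def judge(executable, test_cases, timeout=2.0):
    """Run a compiled submission against hidden test cases.

    test_cases: list of (input_text, expected_output) pairs.
    Returns (passed, total).
    """
    passed = 0
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(
                [executable], input=stdin_text, text=True,
                capture_output=True, timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            continue  # Time Limit Exceeded counts as a fail
        # Whitespace-normalized comparison of program output
        if result.stdout.split() == expected.split():
            passed += 1
    return passed, len(test_cases)
```

The real judge adds sandboxing and resource limits on top of this loop, which the C implementation handles.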
BlockOJ
Interactive, drag-and-drop based online judge that teaches children how to code and solve problems.
['Ruyi Li', 'Freeman Cheng']
["Mentors' Choice"]
['amazon-ec2', 'amazon-web-services', 'blockly', 'c', 'ec2', 'express.js', 'javascript', 'node.js', 'postgresql', 'pug', 'sass']
17
9,982
https://devpost.com/software/rnjogger
The RNJogger logo The home screen The friends tab The step count leaderboard The general map screen Inspiration We all need exercise, or at least some time outside of the house every day. We found that it's hard to do this on a regular basis because of just how boring it can get. We found that jogging, or even walking a dog, is much more exciting when we get to explore new places. But after walking down what feels like every road on the block, runs started to get boring. It feels like choosing between a few of the same paths every time. What it does RNJogger makes running more enjoyable by making every trip a unique and exciting experience. Given a distance (how far you want to run or walk), it determines a random route for you around your neighbourhood. This keeps things exciting and provides motivation to go outside by taking the monotony out of the task. When you approach an intersection, the app will use audio cues to tell you to turn or proceed on your current path. Simply type in your distance, plug in your headphones, and run! How we built it RNJogger was built using React Native, Mapbox, and OpenStreetMap map data. We used React Native to create the navigation and UI for the app. Mapbox is used to display the map and render the calculated route as an additional layer on top. When the user generates a route, the app first gets the user's location and downloads OpenStreetMap data of the area around the user, which gets parsed and processed by a custom algorithm. Challenges we ran into We were faced with the challenge of defining an "interesting route". After a lot of deliberation, we decided that an interesting route has the following properties: Does not backtrack Does not cross the same node more than twice Utilises many turns Crosses landmarks Most of our programming trouble stemmed from drawing the map: we had trouble accessing free map tilesets, downloading tiles for offline usage, and rendering our path on the map.
Accomplishments that we are proud of We're proud of creating a unique pathfinding algorithm that is able to work with OpenStreetMap. Finding paths of exactly length k in a weighted graph is known to be computationally expensive, but finding paths of approximately length k is an interesting challenge. We're also proud of putting together a fully functional app in such a short period of time, especially considering we'd never used React Native. What we learned We learned a lot about how React Native works, especially the Native Modules API, which allowed us to run the native Java code that gets the location and processes the OpenStreetMap data. We learned how XML data can be parsed in Java, as well as enough about the Android API to generate log messages to debug our code. We also used Android Studio to create virtual Android devices and connect our phones to test our app on physical devices. What's next for RNJogger If RNJogger attracts enough users, it may be worthwhile for us to include non-intrusive ads to allow us to make a profit. We plan to release this app on the Google Play Store, and if we find it picking up enough traction there, we might also release it on the App Store, for which one must pay to get a license. Built With javascript nativebase openstreetmap react-native Try it out github.com
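One way to sketch the approximately-k idea: repeatedly take random non-backtracking walks until one returns to the start within a tolerance of the target distance. The graph format and parameters here are illustrative, not the production algorithm (which also scores turns and landmarks):

```python
import random

def loop_route(graph, start, target_len, tolerance=0.2, tries=500, seed=0):
    """Find a closed walk of roughly target_len by randomized search.

    graph: {node: [(neighbour, edge_length), ...]}
    Returns a list of nodes starting and ending at `start`, or None.
    Avoids immediate backtracking so routes feel like real runs.
    """
    rng = random.Random(seed)
    lo, hi = target_len * (1 - tolerance), target_len * (1 + tolerance)
    for _ in range(tries):
        node, prev, dist, path = start, None, 0.0, [start]
        while dist <= hi:
            options = [(n, w) for n, w in graph[node] if n != prev]
            if not options:
                break  # dead end: abandon this walk
            nxt, w = rng.choice(options)
            prev, node, dist = node, nxt, dist + w
            path.append(node)
            if node == start and lo <= dist <= hi:
                return path
    return None
```

On real OpenStreetMap data the nodes would be intersections and the edge lengths road segment distances.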
RNJogger
A mobile app that randomizes your run path.
['Harit Kapadia', 'David Jacewicz', 'Julia Chen']
[]
['javascript', 'nativebase', 'openstreetmap', 'react-native']
18
9,982
https://devpost.com/software/study-with-me-a24zsp
studynotes Profiles Markdown Viewer Activity Page Users Visit studynotes Motivation With the recent shift to remote learning, students around the world are faced with a common problem: without friends and faculty around, it becomes hard to find the motivation to study. studynotes is a web app that helps promote a social aspect to studying while acting as an easy collaboration tool for sharing notes. Our goal is to create a social network for studying, where users are able to collaborate with others, share their notes, and view what their friends have been up to. About There are three components to this app: the website, the API, and the desktop client. Web Client: Our main app is a web app where users can create an account, view other users' activity, and view their friends' notes. It's written in JavaScript using React and deployed on AWS using Amplify. We decided to use a simplistic design for the web page to reduce the barriers that users will face and ease user onboarding. It is through this web client that users are able to follow and interact with each other, establishing an encouraging atmosphere for learning. Desktop Client: Using the desktop client, users are able to select their notes folder on their computer and automatically have their notes synced to our servers. This is a Python application with native-level support for Windows, macOS, and Linux. While running in the background, the client monitors the chosen directory, automatically detecting file changes and sending updates to our API. All the user has to do is save their files locally, and their work will be automatically synced with our servers. API: This is the central point of the entire stack. It connects everything together - the desktop client, the MongoDB database, and the web client. It's written in Python using Flask and PyMODM with a model-controller architecture, and is deployed on AWS ECS as a Docker container for scalability.
The service is backed by a load balancer and will automatically scale up and down depending on traffic. This is a fully fledged API complete with login and user authentication, CORS configuration, and a secure HTTPS connection. Challenges we ran into One of the major issues we ran into was with CORS. Because we implemented our own login system, passing cookies over remote servers was a big pain and we spent a lot of time troubleshooting cross-origin policies. We also ran into a couple of challenges with optimizing queries to MongoDB and preventing re-renders on the React side. Since we had a lot of data to work with, processing and retrieving data took up to eight seconds at first, which made the service completely unusable. To get around this, we implemented a caching layer in the API which greatly reduced the number of database queries, helping us get the load time down to just over one second. Accomplishments that we're proud of Firstly, we are extremely proud of our final product! It is especially fulfilling to have created a functioning application that is fully deployed and not dependent on hard-coded data of any sort. We are also proud of how we picked up many new skills over the course of this project, from working with low-level OS APIs for the client to learning how to use MongoDB. Most importantly though, we're proud of how well we worked together as a team, especially since this was our first time working together virtually rather than in person. It's a weekend we'll remember for sure! What we learned How to build APIs using Python and Flask, with user authentication How to use functional components and hooks in React How to use Docker How to deploy an application for scalability What's next for studynotes In the future, we hope to see our idea adopted by online learning and collaboration companies and turn it into a widely-used social media platform for students and workers across the world to share, collaborate, and better manage their work. 
Additional features include collaborative editing directly on the platform, direct uploads, support for more file types, a chat feature, and group folders to organize notes for a particular subject or course. We also hope to integrate the app with existing platforms such as Google Drive and OneDrive and support additional file formats such as Word and LaTeX. Built With amazon-web-services docker flask javascript mongodb python react Try it out www.studynotes.space github.com github.com github.com
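The caching layer described in the challenges section can be sketched as a small time-based cache in front of the API's read paths. This is only an illustration of the idea; the function names and the 60-second TTL are hypothetical, not taken from the project's code, and a real deployment would also need invalidation on writes:

```python
import time
from functools import wraps

def ttl_cache(seconds=60):
    """Tiny time-based cache decorator: remembers a function's result per
    argument tuple and reuses it until it expires, avoiding repeated
    database queries for hot reads."""
    def decorator(fn):
        store = {}  # args -> (value, expiry timestamp)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            if args in store:
                value, expires = store[args]
                if now < expires:
                    return value  # cache hit: no database round trip
            value = fn(*args)
            store[args] = (value, now + seconds)
            return value
        return wrapper
    return decorator
```

A handler decorated this way only hits MongoDB when the cached entry is missing or stale, which is the mechanism behind the eight-seconds-to-one-second improvement described above.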
studynotes
The social network for studying
['Molly Yu', 'Michael Pu', 'Anthony Chang', 'Kelvin Zhang']
[]
['amazon-web-services', 'docker', 'flask', 'javascript', 'mongodb', 'python', 'react']
19
9,982
https://devpost.com/software/scholarcruise
Welcome page! Ahoy, do you want to see a tutorial? List view of scholarships Swipe view of scholarships Saved scholarships Filters to look for specific scholarships Calendar to manage due dates Export saved scholarships for additional management Profile to customize scholarship results and get weekly recommended scholarships! Inspiration While there are a plethora of existing scholarship websites that present people with ongoing scholarship applications, they often feel cluttered and require users to create their own Excel files or calendar entries to keep track of due dates. Since these are also websites, any reminder notifications would be sent through email and likely get lost, leading most applications to go incomplete. Our goal with ScholarCruise is to create a simple and fun app that makes the scholarship search and application process a bit easier. What it does ScholarCruise is an app that matches and presents users with scholarships they're eligible for. After creating a profile with some basic information (education, program, etc.), ScholarCruise will scholar-ship users with qualifying scholarships, which they can choose to save for later. There is also a fun and intuitive feature that allows users to swipe on the scholarships, similar to apps such as Tinder and Bumble. Finally, since it is a mobile app, notifications will reach applicants in a quicker and more effective way, because email notifications are simply not enough. How we built it The prototype was built in Figma, and we began coding some of it in Android Studio with Java. Challenges we ran into Since it was our first time working with both Figma and Android Studio, we came across many challenges; some of the greater ones included creating animations in Figma, working with overlays and text box functions in Figma, and learning to combine separate files in Android Studio. 
Accomplishments that we're proud of Creating animations in Figma, implementing select and scroll options in the prototype, and being able to change the page with the click of a button in Android Studio. What we learned Since this was our first time working with Figma and Android Studio, there were many functions we learned about in each respective application. What's next for ScholarCruise Complete the real app! We only got to finish a prototype with the design laid out, but we hope to someday complete the full Android app with APIs that can actually help people sail through the search and application process. We also hope to implement more features that will eventually help users apply for scholarships through the app. An example would be a common-questions section, so answers can be autofilled if multiple applications ask the same type of question. Built With android-studio figma java Try it out www.figma.com github.com
ScholarCruise
A simple and fun way for Canadian students to find eligible scholarships. You'll be *cruising* through them!
['Ivy Ding', 'Tammy Zeng', 'Nancy Situ', 'Nancy Wu']
[]
['android-studio', 'figma', 'java']
20
9,982
https://devpost.com/software/bunkr-xfoiku
Logo Assets Screen (Empty) What it does Bunkr is an application to track all of your assets, such as property and vehicles, and provide a seamless customer experience when dealing with property and casualty insurance. Through Bunkr, all of your important documents and records of your valuables are right at hand whenever needed. Whether you need to file a claim or determine an appropriate policy coverage plan, Bunkr makes sure that you don't have to worry about the small, tedious things. How I built it Bunkr is primarily built with React for the front end, and Firebase along with Python for the back end. What's next for Bunkr Bunkr is currently a web application but can definitely be expanded into a mobile app as well. Additionally, there is an opportunity to achieve greater integration with existing insurance company systems by using templates for claim filing and analysis of policies. Furthermore, data and machine learning may be used to provide additional diagnostics for the user, whether in the form of a risk analysis or an asset valuation. Built With react Try it out github.com
Bunkr
Protect your assets and easily claim insurance
['Aarish Adeel', 'Bowen Zhu', 'Matthew Tam']
[]
['react']
21
9,982
https://devpost.com/software/sanitizerwristbandapp
Hygieia Prototype CAD Model of Hygieia Hygieia iPhone App Hygieia Android App Hygieia in Action https://youtu.be/ZB2OJvwK_Eg Hygieia Hygieia is a hand sanitizing wristband. This device dispenses hand sanitizer with a simple flick of the wrist and keeps track of how many times you sanitize your hands with our app. What made you want to build this hack? Over the past few months, the world has shifted to combat COVID-19. Individuals across the globe have come together by staying apart. When exposure is necessary, and as we reopen society, hand washing remains one of the best ways to avoid getting sick. We built the Hygieia hand sanitizing wristband to incentivize hand cleanliness. People are returning to their normal lives, and we created this hack to help prevent a resurgence in the number of COVID-19 cases. What was the most challenging part of the hack? The most challenging part of this hack was integrating each part of the device together, including the software, firmware, hardware, and mechanical aspects. We needed to constantly communicate with each other to ensure that our work would fit together in the end and that we would not create parts that conflicted with each other. On the mechanical side, we created a CAD model of our casing. On the hardware side, we wired and programmed an ESP32 to detect readings from an accelerometer and interfaced with a servo motor to squeeze out the hand sanitizer. On the software side, we communicated with the ESP32 through Bluetooth and made an app to keep track of metrics from the device. What is your tech stack? Mechanical Solidworks 3D Printer Hardware ESP32 MPU 6050 Accelerometer/Gyroscope/Temperature Sensor Servo Motor Software React Native Javascript Future Improvements Decrease size and weight of wristband User login for the app Vibration reminders to sanitize Built With 3dprinting esp32 java javascript mpu6050 objective-c react-native ruby servo solidworks starlark Try it out github.com
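The actual flick detection runs as ESP32 firmware, but the core logic - trigger the servo when the accelerometer's acceleration magnitude spikes past a threshold - can be sketched in a few lines. This is a simplified illustration, not the project's firmware, and the 2.5 g threshold is a made-up tuning value:

```python
FLICK_THRESHOLD = 2.5  # in g's; hypothetical tuning value, not from the firmware

def detect_flick(accel_samples, threshold=FLICK_THRESHOLD):
    """Return True if any (ax, ay, az) sample's acceleration magnitude
    exceeds the threshold - the basic trigger condition for squeezing
    out sanitizer on a wrist flick."""
    return any(
        (ax**2 + ay**2 + az**2) ** 0.5 > threshold
        for ax, ay, az in accel_samples
    )
```

A real implementation would also debounce (ignore repeated triggers within a short window) so one flick dispenses only once.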
Hygieia - Hand Sanitizing Wrist Watch
Hygieia is a hand sanitizing wrist watch. This device provides hand sanitizer easily with a flick of the wrist and keeps track of how many times you sanitize your hands with our app.
['Minqian Lu', 'Christopher Samra', 'Avery Chiu', 'Reuben Walker']
[]
['3dprinting', 'esp32', 'java', 'javascript', 'mpu6050', 'objective-c', 'react-native', 'ruby', 'servo', 'solidworks', 'starlark']
22
9,982
https://devpost.com/software/gitbounty-ye4otp
Analytics Ganache Our YouTube video is still processing, but we managed to get the YouTube link; it will load in a couple of minutes Inspiration The inspiration for this project is that we wanted to tackle a problem of scale that had to do with social good. In this case, we identified the need for open-source sustainability, which is an important question regarding the viability of open source. Because of this, we were inspired to use our knowledge of decentralized systems to create a solution that would be transparent, effective, and high-quality. What it does GitBounty is a crowdsourced, decentralized bug & feature bounty platform that provides incentives for people to contribute to open source projects by awarding them ether. The process is very simple: a person creates a bounty on our platform by linking their corresponding issue and other metadata, and soon other people on the platform see those bounties and choose to accept them. After the issue is closed, which is detected through the GitHub API, the person who raised the issue submits a report on our platform indicating how much each contributor did, based on how much code they contributed or how they otherwise helped out. After the report has been submitted, the money is distributed amongst the contributors, and they can feel proud that they made some money today by fixing an issue. Now you may be thinking: that's a very simple explanation, so what's the use of blockchain? Blockchain allows us to decentralize these bounties, essentially replacing the middleman with a smart contract and securing all the issues within the Ethereum blockchain ecosystem. The requirements for a user on the platform are simply a Metamask wallet account and a GitHub account. Then they can proceed to create a bounty, resolve a bounty, submit a report, etc. The actual process of each of these steps can be seen in more detail in the demo video above. 
There are also analytics being recorded in the application, such as how much you have earned, how many times you have contributed, etc. This provides feedback to users and helps them ask themselves how best to help people on GitHub. Lastly, why would people pay others for solving certain issues? If someone raises a popular issue, the chances that it affects someone else too are really high. This allows popular companies or organizations to donate to or increase a bounty to improve the odds of someone contributing to it. How We built it The whole structure of the application is simple yet complex. We had to utilize several libraries, systems, and also a lot of critical thinking to solve this problem. The main backend of our application was written in NodeJS, and we used Express to manage the routes. There were a lot of endpoints that we created, and you could split them into two groups: endpoints that worked with IPFS to store data, and endpoints that used the GitHub API to retrieve information. This was the first time that our team developed using the GitHub API, and it was very user friendly and allowed us to retrieve the information we desired in a meaningful fashion. To go more in depth, the specific libraries we used in the backend were: ipfs-api, axios, ethers, node-fetch, and octokit. These backend libraries gave us enough information to build our backend algorithm that performed all the IPFS and blockchain functions. We created our own database with IPFS and updated, read, and created documents for the users of our application in a decentralised manner. Now to the core of the program: the smart contract. We used Solidity to write the smart contract and Remix to develop and debug it. EthersJS is an NPM library that helps call the smart contract functions from either the backend or the frontend. 
We specifically utilised it in both: the backend for setting up the IPFS database, and the frontend for doing the transactions from signer to recipient. The actual blockchain software that we used was Ganache, a local Ethereum blockchain that is created for us and allows us to deploy our smart contract. The console dedicated to Ganache is Truffle, and that's where we ran commands to deploy the contract. On the frontend of the application, we used Metamask to inject Web3 into the browser and allow the client to retrieve the smart wallet of any user on the application. Web3 was used with Metamask on the frontend for linking accounts, and EthersJS was used in the backend or frontend to call the smart contract functions. If you look at the source code of the backend API, you can see specifically when and where we call the ethers functions. After we received all the important information from both the API that we had constructed and Metamask, the frontend was ready to be built. We used ReactJS as the framework to create the frontend, along with AntDesign to design the components and CSS to style our application. We used Big-Integer on the frontend to convert the user's input of ether into wei and then send the transaction with EthersJS. We also utilized ReactVis to create analytics for our application; these analytics consisted of charts that reflected our users' decisions in the app. Craco was also used to create the React app so we could configure it to our desired needs. The design and codebase for the React frontend were done by Daniel Yu, who implemented a dark theme that reflects the theme of GitHub itself. To conclude, this application had a heavy tech stack and required a lot of teamwork for it all to work. Challenges We ran into Throughout HackThe6ix, my team and I faced problems that we had never seen before. Let's start off with the technical issues that our group had zero control over. 
For this hackathon, our team decided to use VSCode Liveshare, with one person frequently committing code to GitHub just so we could gradually save the code in case something scary happened - and something scary did happen. All the IPFS endpoints that managed the user base and kept track of all the important user information along with the bounties were working perfectly on Aditya's machine, but once we moved the files onto Markos' VSCode Liveshare, it became buggy. We were getting an Axios error that did not look like a fault in the code, because we had recently run it and it was working perfectly. After searching for the actual problem, we realised it was a network error and Markos' wifi was bugging out. It was well past midnight and we were stuck because of an issue that we ultimately did not have any power to stop. After waiting for 1-2 hours, the issue subsided and we continued the development process. We will definitely remember how this created a major roadblock in our hackathon, but we persevered and understood that not everything will work perfectly at all times of the day. The second issue, a code issue that we spent a lot of time working on, was sending a transaction to an account using a signer on the frontend. All we needed to do was send a value to a specific address and we would be done with that major component of the application. The problem is, we needed to send the money in wei, not ether. Wei to ether is like cents to a dollar - except 1 QUINTILLION wei is equivalent to a single ETH, and a JavaScript number cannot safely represent an integer with 18 zeros. So instead our team researched how to utilize the Big-Integer library in React, spent some time debugging the issue, and eventually fixed the problem by resending the value as a hexadecimal string for the smart contract to decode. 
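For reference, the wei conversion that caused trouble is straightforward in a language with arbitrary-precision integers. A Python sketch of the ether-to-hex-wei step (our own illustration, not the team's actual Big-Integer code) could be:

```python
from decimal import Decimal

WEI_PER_ETHER = 10**18  # 1 quintillion wei = 1 ether

def ether_to_wei_hex(ether_amount: str) -> str:
    """Convert a user-entered ether amount into a hex-encoded wei value,
    the form suitable for a transaction's `value` field. Decimal avoids
    the precision loss a float multiplication would introduce."""
    wei = int(Decimal(ether_amount) * WEI_PER_ETHER)
    return hex(wei)
```

JavaScript's built-in Number only represents integers exactly up to 2^53 - 1, which is why the team needed a big-integer library before producing the hexadecimal value.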
Last but definitely not least was the whole integration of the GitHub API, IPFS endpoints, smart contract, and React into one whole application. The smart contract and IPFS endpoints worked concurrently whenever a certain operation was completed. As for the GitHub API, we had never used it before, but quickly realized how user-friendly it is. Slowly but surely we transferred our code and created our huge API with all its beautiful endpoints, and it really looked complete. The last piece of the puzzle was calling the endpoints from the frontend and making it look beautiful. We have to give all the credit to Daniel Yu, who designed and coded the frontend of the application and overall made it aesthetically pleasing for the client to use. To conclude, HackThe6ix allowed us to learn more about GitHub, big integers, and how bad things can happen to good people. Not everything happens the way it should, but we definitely learned how valuable 2 hours of time is, and that loss of wifi really restricted our hacking. Nonetheless, we are proud that we overcame most of these challenges and finished with this application. Accomplishments that We are proud of My group and I are extremely proud of ending this hackathon with a finished product, even though we had many large hurdles along the way. For example, we had a major roadblock that didn't allow us to code anything, but still we finished with the code we have in front of us now. We are also proud of integrating the GitHub API even though none of us had ever used it before. As 3 high schoolers and 1 university student, we can take a positive look back on what we made and be extremely satisfied that we even got here. What We learned This part will be short and concise because I want to get straight to the point. We learned that many uncontrollable factors can affect your hacking experience to a great degree. 
These uncontrollable factors greatly affected us and taught us a lesson when one of our team members' wifi was lost while we were using VSCode Liveshare. This paused all the work that everyone else was doing and essentially stopped our hacking experience. We also learned that integrating code isn't that difficult if you communicate well with your teammates in a calm fashion. This time we all brought our code together and sat in a call for an hour, integrating all the code to the best of our ability and making sure it worked the way we wanted it to. What's next for GitBounty The next step for GitBounty is to publish it on the Ethereum mainnet or the Ropsten test network. This would allow people to actually test it on real GitHub repos, and we could witness the magic at work. There is so much more for GitBounty; we could only optimize it to a certain extent because we were coding this at a hackathon. To conclude, we would love to deploy it, watch ether exchanged between members, and see the magic unfold. Built With antd axios css3 ethereum ethers.js express.js fetch ganache ipfs ipfs-db javascript metamask node.js postman react react-vis solidity truffle web3.js Try it out github.com
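The report step described earlier - the issue raiser rates how much each contributor did, and the bounty is distributed accordingly - amounts to a proportional split. A hypothetical helper showing the idea (the names and the remainder rule are our own, not taken from the GitBounty contract):

```python
def split_bounty(bounty_wei, contributions):
    """Split a bounty (in wei) among contributors proportionally to their
    reported contribution weights. Integer division leaves a remainder,
    which here goes to the first contributor so every wei is paid out."""
    total = sum(contributions.values())
    payouts = {who: bounty_wei * share // total
               for who, share in contributions.items()}
    first = next(iter(payouts))
    payouts[first] += bounty_wei - sum(payouts.values())
    return payouts
```

Because wei amounts are integers, some explicit remainder rule like this is needed on-chain; Solidity has no fractional arithmetic.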
GitBounty
Crowdsourced, decentralized bug & feature bounty platform that provides incentives to open source projects that benefit millions of people.
['Borna Sadeghi', 'Aditya Keerthi', 'Daniel Yu']
[]
['antd', 'axios', 'css3', 'ethereum', 'ethers.js', 'express.js', 'fetch', 'ganache', 'ipfs', 'ipfs-db', 'javascript', 'metamask', 'node.js', 'postman', 'react', 'react-vis', 'solidity', 'truffle', 'web3.js']
23
9,982
https://devpost.com/software/captain-efficient
As shown in Slack Post-processing output Inspiration Onboarding new employees so that they are up to speed on a project is often tedious, because you may not remember all the relevant information and thus have to scroll through Slack messages to recall what has been discussed and decided. You want to send your new employee a list of those decisions (e.g., which coding language you decided on and a tutorial to get started). You may also want to give a summary of those decisions, or keywords that show up in them, to help the new employee figure out where they would like to start. Similarly, when working from home, Slack becomes not just a work communication tool but also a way to connect and get to know your co-workers. They may share interesting stories or links in Slack, but those tend to get buried as the number of messages increases. You may sometimes want to generate a list of those stories and links to see what your fellow co-workers have sent, along with summaries and keywords so you can read the ones you are most interested in first. Another inspiration was writing weekly work reports. Since most communication is currently online due to the pandemic, what you are doing and have done this week is recorded in your conversations with various members of the team. So by going through your conversations, you can more easily figure out what you did that week to put it in your weekly work report. What it does The project is made up of three main parts: extracting messages from Slack and putting them in a spreadsheet (using Autocode), processing that spreadsheet and generating summaries and keywords for each of the messages, and sending an email with the spreadsheet. We extract the messages from Slack and put them in a spreadsheet, using the Slack command '/cmd WorkReport week' to trigger the endpoint. The code will then clear anything that's currently on the worksheet. 
It will then find all the channels, loop through them finding all the messages not generated by a bot, and insert them into the spreadsheet. We then process the contents of the spreadsheet. First, we identify messages with the key commands (like !Onboarding!). For those messages, we summarize the contents using the NLTK library to help readers get an understanding of the embedded links without having to click them. We also generate keywords from the messages to help employees navigate them. After processing, we summarize our results in a second spreadsheet of the same workbook. Finally, we use Autocode to send an email to the intended user through the Slack command '/cmd SendOnboard [email protected] '. This triggers the code to send an email with a link to the spreadsheet inside it. The email is formatted with an HTML template. How I built it We built it with Autocode (JS) for receiving the Slack commands, populating the spreadsheet, and sending the email. The processing of the spreadsheet was done in Google Colab using various Python libraries such as pandas, numpy, scikit-learn, beautifulsoup, NLTK, gspread, networkx, and matplotlib. Challenges I ran into Since we were unfamiliar with Autocode and JavaScript, it took some time to learn both how to use Autocode and how to write JavaScript. We also ran into other problems along the way. Accomplishments that I'm proud of We got it to work on Slack, which was really cool (as shown in the demo). What I learned I learnt how to use some of the natural language processing libraries. I also learnt about Slack commands and how to use Autocode. What's next for Captain Efficient We've fully implemented a pipeline for onboarding, but as stated in the inspiration section, this approach can be applied to many different areas. The next step would be to implement each of those functions one by one. 
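The keyword-generation step uses NLTK in the real pipeline; the core idea - rank content words in a message by frequency - can be shown with a stdlib-only sketch. The tiny stopword list here is a made-up sample, not NLTK's:

```python
import re
from collections import Counter

# Hypothetical mini stopword list; the real pipeline uses NLTK's corpus.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it",
             "that", "for", "on", "with", "we", "you", "this"}

def extract_keywords(message, top_n=5):
    """Very rough keyword extraction: tokenize, drop stopwords and short
    tokens, and return the most frequent remaining words."""
    words = re.findall(r"[a-z']+", message.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]
```

The summaries work on a similar principle (extractive ranking of sentences rather than words), which is what the NLTK-based step produces for each embedded link.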
Built With autocode beautiful-soup google-colab google-spreadsheets gspread javascript matplotlib networkx nltk numpy pandas python scikit-learn url-summary Try it out colab.research.google.com
Captain Efficient
Tool for efficiently finding information in Slack, storing them in Google Spreadsheets and sending them to the intended person's email.
['Aditi M', 'helen9975', 'Kelly Yuan']
[]
['autocode', 'beautiful-soup', 'google-colab', 'google-spreadsheets', 'gspread', 'javascript', 'matplotlib', 'networkx', 'nltk', 'numpy', 'pandas', 'python', 'scikit-learn', 'url-summary']
24
9,982
https://devpost.com/software/aitomind
Users are greeted by this page upon opening the app Page showing a mindmap and corresponding video Inspiration With remote learning seeming to be the norm for a significant period of time, many students are finding the transition difficult - in particular, consuming large amounts of online content through hour-long videos and online textbooks isn't the most engaging or effective form of learning. We wanted to build something that helps students learn in a more interactive and efficient manner, aiming to promote conceptual understanding rather than brute memorization. What it does Aitomind (Auto + AI-generated mindmaps) is a web application that transcribes the speech in a video and organizes it into a mind map structure. Users upload a video of their choice, and a couple of minutes later a mind map containing the key concepts of the video is generated. This helps the user understand the structure of the video/lesson as well as the relations between key ideas. Most importantly, each concept has a timestamp so that the user can easily navigate to the part of the video where the concept was discussed. How We built it The core of Aitomind is the natural language processing pipeline that transcribes text from videos and analyzes it to create a mindmap. This was made up of several Azure services, including Azure Text Analytics and Speech to Text, as well as the Azure Machine Learning platform to implement our own models. We used Azure Speech to Text to transcribe the text from the video, then used Azure Text Analytics to do keyword and entity analysis. From there we used our own machine learning model, which is trained on a variety of academic datasets using a word-to-vector model. This all runs on an Express server and is written in Node.js. The frontend was built using React and styled with the Bulma CSS library. Challenges I ran into While developing the word2vec model, the data mining process was especially challenging. 
Since there was no natural language processing dataset made for academic keywords, I had to collect data from a variety of dictionary sources across multiple subjects. As well, getting used to programming in asynchronous JavaScript and writing a full-stack application for the first time was very difficult, to say the least. Accomplishments that I'm proud of Implementing Azure services in our project, especially our own ML model Writing a polished full-stack web application for the first time What We learned How to write a full-stack web application How to deploy custom models on Azure Using asynchronous JavaScript and REST APIs What's next for Aitomind We aim to further tune our word2vec model with more datasets to improve its accuracy in detecting related keywords. Moreover, we will look into video upload speed as well as speech-to-text transcription speed, as right now transcription takes roughly half the length of the video. We would also like a database for users to store and retrieve their own mindmaps. Built With azure express.js natural-language-processing node.js react Try it out github.com
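A word2vec model's notion of "related keywords" typically comes down to cosine similarity between word vectors: concepts whose vectors point in similar directions get linked in the mind map. A sketch of that comparison (the vectors in the test are toy values, not trained embeddings):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: 1.0 for identical
    direction, 0.0 for orthogonal (unrelated) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

In a pipeline like the one described, two extracted keywords would be connected as related nodes when their similarity exceeds some tuned threshold.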
Aitomind
Navigate educational videos with AI generated mindmaps
['Alex Qiu', 'Rory Gao', 'Alan Wang']
[]
['azure', 'express.js', 'natural-language-processing', 'node.js', 'react']
25
9,982
https://devpost.com/software/watercooler-w1z37x
Inspiration The water cooler. The hallmark of casual conversation. The perfect place to meet someone new, connect with a friendly coworker, and take a much-needed break from your work day. University students all around the world have been scrambling to prepare for online fall semesters due to the COVID-19 pandemic. Learning at home is challenging, with less collaboration, more distractions, less accountability, and a lack of daily structure. However, as incoming freshmen, we are most concerned about the logistics of developing the strong peer groups required to overcome the academic challenge of first year. Though communications technology exists, interactions with complete strangers are awkward to initiate, especially virtually. So how might we replicate the environment where one can take a break from the daily grind and friendships can flourish? Watercooler aims to do just that. By creating an environment where online students can engage in random and casual conversation 'at the water cooler' between work sessions, Watercooler will enable students to help each other succeed academically and socially during their online semesters. What it does Watercooler is a web application hosted on Firebase. Upon loading, you are greeted with a home page that explains more about Watercooler and how to use it. This page acts as our external marketing and attempts to appeal to university students (especially those in first year) by stating its value proposition with three simple phrases: BUILDING COMMUNITY, PRODUCTIVITY, and ACADEMIC SUPPORT. In greater detail, it appeals to students by talking about some major pain points, or things that they may be concerned about entering first year (mental health, focusing from home, and making new friends are all at the top of the list). From the landing page, you can download the Chrome extension and visit our project on Devpost, as well as sign up, log in to our portal, and learn more about our team members. 
In order to get started, it is important that you have an account. Once you have signed up, we send a code to your professional email for verification. Then you can simply log in, and it takes you to the Office Dashboard. From here there are many things you can do, but the first thing you need to do is download our Chrome extension. This extension tracks your productivity and controls how often you are allowed to access the water cooler and interact with other students. Once you download the extension, simply load it in Chrome and voila, you're all set up! Every time you start a new session, a timer starts to keep track of your work. If you get distracted and visit websites such as Netflix, YouTube, etc., the extension detects it and pauses your timer, since you are not making good use of your time. Upon leaving that website, your timer will automatically continue. Once you finish your hour of working, you will be taken to a page where you can reward yourself with a Watercooler break. This is where our program matches you with someone from your school and gives you the opportunity to talk to them for 15 minutes as a way to socialize and network, all while staying productive. If you enjoy talking to the person you are connected with, you can add them as a friend and find them in your friends list later. Once your 15-minute Watercooler break has ended, you can return to the Office Dashboard and look at your friends list. Here you can find anyone that you connected with in the past and added as a friend. You can also see their status in the app: whether they are taking a break, working, or busy on a call. If you find someone who is free, you can click on their profile and request to call them to further build your relationship. You can also return to the Office Dashboard to see your statistics. 
In this section you can see your hours worked this week and the week prior, as well as some community stats such as your ranking and where your friends are. That is a fairly comprehensive overview of Watercooler’s current features, but you can find out more about us, the creators of Watercooler, on the ‘Meet the Team’ page, where we talk about our contributions as well as our LinkedIns. How we built it Our web application uses Firebase, Google Charts API, JavaScript, jQuery, HTML/CSS, Socket.io, Express, and Node.js to allow people to casually connect from their homes. We created a Firebase Realtime Database in order to keep track of each user’s information, while also setting up both Firebase Authentication and Firebase Hosting in order to keep all our user information together, authenticate our users and their emails, and successfully host our website. Socket.io, Express, and Node.js are used with an API called WebRTC (Web Real-Time Communication) so that in the future we will be able to host video conferencing through our platform, as opposed to randomly generated Google Meet codes. Challenges we ran into Integrating Firebase Using both CSS Grid and Flexbox in the same document caused some weird formatting Connecting the Chrome extension to Firebase Accomplishments that we're proud of Hima - I have been working with frontend programming for 2-3 months now, but I had never really taken the step to explore server-side or backend programming. Through the course of the hackathon I was able to learn how to use Node.js, Socket.io, Express.js, and Firebase. I learned more in these few days than in the past few months. I’m happy with the substantial work that I was able to do. Joss - This is the first website I’ve built, and I learned more than I ever thought I could in one weekend! I learned how to use CSS Grid and Flexbox, how to integrate tons of HTML, and how to use Figma for design and drawing icons.
Preesha - I am proud of the fact that I was able to learn so much from Hima about Firebase, but mainly about my first Chrome extension. I've never made one before, and the extension is amazing. I am also proud of the quality of the websites I was able to make. Although I've made websites before, they have never been visually appealing, but the ones for Watercooler are, and I am so glad I mastered that skill in this hackathon. What we learned Hima - I learned the importance of perseverance and YouTube tutorials. Because I was working with technologies that I had never used, I was bound to be faced with frustration. I learned that there’s always something out there that will help me. Things like Firebase are extremely well documented, so I made sure to make good use of my resources. Joss - I learned so much about web dev, and that I become a better coder when I listen to Nicki Minaj and Cardi B! I used tons of technologies I hadn't used before, and finally became confident that I can teach myself anything using YouTube. Preesha - I learned the usage and development process of a Chrome extension, as well as how hard it is to create and integrate databases. I furthermore sharpened my design skills, which will definitely help me in the future. What's next for Watercooler Notifications = so the user knows when they are on a non-productive site, when their timer is done, and when they receive a call Pretty = even though it looks really good, there are many design aspects that could have been better, and we will be working on improving the UI Authentication = authenticate new users using their work/school email with a certain code, to avoid stranger danger Friends feature Built With api bootstrap charts css css-grid express.js figma firebase google google-chart google-client-authentication google-cloud google-meets html javascript jquery json node.js socket.io webrtc
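The Chrome extension's timer behaviour described in the walkthrough above (pause while the user is on a distracting site, unlock a Watercooler break after an hour of focused work) can be sketched in a few lines. This is a hedged illustration of the logic only: the domain list, method names, and one-hour goal are assumptions, and the team's actual extension is written in JavaScript.

```python
# Sketch of Watercooler's distraction-aware work timer (illustrative names).
DISTRACTING = {"netflix.com", "youtube.com"}  # assumed blocklist
WORK_GOAL_SECONDS = 60 * 60                   # one hour unlocks a break

class WorkTimer:
    def __init__(self):
        self.productive_seconds = 0

    def tick(self, active_domain, seconds=1):
        """Called periodically with the current tab's domain; time on
        distracting sites does not count toward the goal."""
        if active_domain not in DISTRACTING:
            self.productive_seconds += seconds

    def break_unlocked(self):
        return self.productive_seconds >= WORK_GOAL_SECONDS

timer = WorkTimer()
timer.tick("docs.google.com", seconds=3000)  # focused work counts
timer.tick("netflix.com", seconds=1200)      # distraction: timer paused
timer.tick("docs.google.com", seconds=700)
print(timer.break_unlocked())  # True: 3700 productive seconds >= 3600
```

When the goal is reached, the real extension redirects the user to the break-matching page rather than just returning a boolean.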
Watercooler - Hack the 6ix 2020
casual conversation
['Hima Sheth', 'Jocelyne Murphy', 'Preesha Ruparelia']
[]
['api', 'bootstrap', 'charts', 'css', 'css-grid', 'express.js', 'figma', 'firebase', 'google', 'google-chart', 'google-client-authentication', 'google-cloud', 'google-meets', 'html', 'javascript', 'jquery', 'json', 'node.js', 'socket.io', 'webrtc']
26
9,982
https://devpost.com/software/mealrelief-kwn46i
Inspiration We wanted to build this because there is a clear need to connect those who need assistance with those who can offer it. What it does MealRelief is more versatile and anonymous than traditional food banks. MealRelief is available as a web application that allows food providers (like restaurants and grocery stores) to donate servings of fresh, edible meals that would otherwise become food waste. Users can then search for food near their location and anonymously claim free meals. How we built it MealRelief's frontend website application was built with React. The main page’s map and pinned restaurants are implemented with the Azure Maps API. We also created a REST API with Django for communication between the database and the frontend. And finally, we built our database with SQLite. Challenges we ran into Connecting React events to trigger changes in the backend and update the app accordingly was a big challenge, and something to work towards implementing efficiently in future iterations of the product. User authentication was difficult to implement, especially as it was our first time both working with Django and implementing user authentication. We had to rethink our database schema halfway through development as we realized our current schema did not fully suit our needs. Accomplishments that we're proud of We're really proud that we were able to connect our backend and frontend successfully! You can now sign up as a food provider and post live listings of your donations while tracking a list of claimed codes. Additionally, for users, using Azure Maps we were able to make our website more user-friendly by loading a list of nearby restaurants! What we learned All of us were able to work with new technologies and learn about new frameworks, libraries, and APIs! In particular, we learned how to handle API requests on both the front end and back end, and work with new services like Azure Maps.
What's next for MealRelief MealRelief is looking to implement more features with Azure Maps, such as a search bar for nearby locations and their distances! Additionally, we hope to add more user and provider authentication features so that MealRelief can be more secure. Built With azure django mongodb react sqlite Try it out drive.google.com github.com
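The anonymous claim flow described above (providers post listings of servings, users claim meals and receive codes with no personal information attached) might look roughly like the following sketch. The class and field names are assumptions for illustration, not the team's actual Django models.

```python
import secrets

# Hypothetical sketch of MealRelief's anonymous claim flow: a listing tracks
# remaining servings, and each claim issues a short pickup code.
class Listing:
    def __init__(self, provider, servings):
        self.provider = provider
        self.servings = servings
        self.claimed_codes = []  # the provider can track these

    def claim(self):
        """Anonymously claim one serving; returns a pickup code, or None
        if the listing is exhausted."""
        if self.servings == 0:
            return None
        self.servings -= 1
        code = secrets.token_hex(3).upper()  # e.g. 'A1B2C3'
        self.claimed_codes.append(code)
        return code

listing = Listing("Corner Bakery", servings=2)
code = listing.claim()
print(code, listing.servings)  # a 6-character code, 1 serving left
```

In the real app, the provider's dashboard would verify the code at pickup, which keeps the person claiming the meal anonymous.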
MealRelief
Millions of tonnes of food are wasted at the industry level each year. MealRelief helps restaurants and grocery stores donate their extra food to those who need help putting a meal on the table.
['Sydnie Chau', 'Jodie Xiang', 'Priya Jain', 'Sherry Lam']
[]
['azure', 'django', 'mongodb', 'react', 'sqlite']
27
9,982
https://devpost.com/software/watchdog-lvys8z
Inspiration Insurance companies have access to mostly negative data (crashes, tickets, and more), leaving some drivers without the opportunity to prove themselves. We wanted to develop a win-win program to reduce distracted driving by incentivizing responsible drivers with lower insurance rates while providing insurance companies with data to improve the accuracy of their insights. What it does WatchDog allows drivers to self-report their focused driving while avoiding excessive intrusion. The mobile app captures a video recording of the person behind the wheel as they drive. After the trip ends, the driver can choose whether or not they want to send this video to WatchDog, where it is then processed by machine learning and assigned a specific score. Drivers can view their progress over time and improve their score before sending in their data for discounted insurance rates. How I built it The mobile app was built using React Native. Our backend used Flask and nginx for networking. Image recognition was accomplished using a CNN built with PyTorch and trained on the State Farm distracted-drivers dataset. Challenges I ran into Getting large amounts of data to be efficiently sent to the backend. Accomplishments that I'm proud of The CNN was able to achieve an accuracy of 87% with limited training. If given more time to train, the accuracy could be even better. Try it out github.com
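The scoring step described above (the CNN processes the trip video and the trip is "assigned a specific score") could plausibly reduce per-frame classifier outputs to a single number, as in this sketch. The 0-100 scale and the 0.5 threshold are illustrative assumptions, not the team's actual metric.

```python
# Hypothetical sketch of WatchDog's trip scoring: the CNN labels each sampled
# frame with a probability that the driver is distracted, and the trip score
# is the focused fraction scaled to 0-100.
def trip_score(frame_predictions, distracted_threshold=0.5):
    """frame_predictions: per-frame probability of distraction, in [0, 1]."""
    if not frame_predictions:
        return None  # no frames sampled, nothing to score
    focused = sum(1 for p in frame_predictions if p < distracted_threshold)
    return round(100 * focused / len(frame_predictions))

print(trip_score([0.1, 0.2, 0.9, 0.05]))  # 75: three of four frames focused
```

A running history of these scores would give drivers the "progress over time" view mentioned in the description.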
WatchDog
Rewarding responsible driving with accurate data
['Cameron Kinsella', 'Roton647 Barroso', 'Faith Lum']
[]
[]
28
9,982
https://devpost.com/software/easytip-jpdvef
Inspiration The reason we decided to build this project is to assist minimum-wage employees, who work hard to maintain their lifestyle, especially during these times of economic hardship. People are lazy and keen to do easy tasks, so we hoped our app would encourage more people to tip employees. What it does As the name suggests, the purpose of this app is to make the process of giving and receiving tips as simple as possible. Business owners also benefit, since they will have access to the feedback that customers give to their employees. How I built it Challenges I ran into As a new team, we encountered many challenges. We used new software and development environments, such as the one built for PHP. Our project had lots of features, so we were at times confused about merging them. And one of the most important challenges we faced was the time limitation: we could not finish the payment section of our app. Accomplishments that I'm proud of What I learned What's next for EasyTip As we mentioned, our web app is mainly targeted at restaurants for now. Further modifications could include access to registrations for various barbershops and delivery businesses too. Additionally, due to time limits, we hosted our web app on GitHub Pages. For access to features like authentication, analytics, and hosting, we could modify our code to use Firebase. Even though we created a web app for access to a wider audience, it would also be a good idea to develop a mobile app version, since it has great potential as a mobile app for easier access. We would also love the chance to fully integrate PayPal into our app for a finished product. Built With bootstrap css3 html5 javascript paypal php Try it out github.com
EasyTip
An app that makes giving and receiving tips as easy as possible.
['Emmanuel Ma', 'Seyed-Paya Hosseini-Jahromi', 'Firangiz Ganbarli', 'Ammar Siddiqui']
[]
['bootstrap', 'css3', 'html5', 'javascript', 'paypal', 'php']
29
9,982
https://devpost.com/software/hmso_homesocial
(Screenshots: login, signup, chat, info board, notifications, home space setup/editing/scheduling, profile, invitation, and status screens.) Inspiration While staying at home during the COVID-19 pandemic, we found that the connection between roommates became a little strange. People who were simply sharing a living space now need to spend almost all of their time together, at home. And in such circumstances, more and more friction arises. What can we say to get a singing roommate to be quiet? How can we remind them about the bill and rent politely? How can we learn what our roommates are interested in, so we can share happiness with them? All of these questions came to mind. So, we decided to design an app that helps people staying at home feel socially free, and supported by the people living in the same house. What it does To help people staying at home express themselves properly, so that they can enjoy group living and sharing better, we decided on the main features first: - mark users' status with playful animations, like sleep, shopping, shower, housework, laundry, work, and leisure.
Thus, roommates are indirectly notified of noise levels and locations - manage public space use in a visualized digital space, helping users get a general sense of where other people are and that they are not alone, while providing a convenient way to share the kitchen, living room, and laundry room - transform the text a user wants to say to their roommates into a paragraph with emoji replacing some keywords, making communication smoother and easier to accept, even when talking about something sensitive, like splitting the bill or reminding them to pay rent and utilities - an info board, where roommates can post notifications about bills, rent, etc.; share a group activity planner for housework, dinner, and shopping; and share resources such as vlogs, movies, and news. How we built it - researched the main mental-health problems people face when staying at home and some solutions suggested by doctors - decided on the pain points and solutions - designed screens in Figma and began front-end development based on that - meanwhile, the back-end developer constructed the database system Challenges we ran into - we often got too focused on UI details - we did not decide on a proper scope of work that we could finish in such a short time - lack of familiarity with some frameworks Accomplishments that we're proud of - we found solutions that we could use in our daily lives, showing kindness and compassion to ourselves and others, to overcome mental-health problems while living with roommates during the pandemic What's next for HmSo_HomeSocial - complete the remaining functions: animations, profile settings - refine the drag-and-resize function for home space planning Built With angular.js apache azure express.js javascript material mysql nativescipt node.js python sentiment-analysis-online typescript Try it out github.com hermiawu.github.io
HmSo_HomeSocial
Hmm. So... what can I say at home? Here is your home socializing tool, to help you live more comfortably and show your kindness and compassion to your roommates, especially during the pandemic.
['Hao Wu', 'Jarvis Wang']
[]
['angular.js', 'apache', 'azure', 'express.js', 'javascript', 'material', 'mysql', 'nativescipt', 'node.js', 'python', 'sentiment-analysis-online', 'typescript']
30
9,982
https://devpost.com/software/itsoveranakin
https://www.youtube.com/watch?v=oW2XISlo-Bw&feature=youtu.be Inspiration We asked our parents what their greatest struggle was when working from home, and we got a general consensus: employees often work for prolonged periods of time without taking a break, and have a tendency to overwork. Based on personal experience, we often lose track of time when concentrating on our work, yet we get distracted easily by a notification of any sort. We thought, why not make use of this and notify the user to take a break if they have been working for too long. What it does When the user starts working, they log into a web app and a timer starts. The user will get a notification that it is running and can close the program, and it will run in the background. It can be accessed through the tray (right-clicking the tray icon works as well). Every once in a while, the app will give a notification and prompt the user to take a survey to customize the user experience and create charts. After a certain amount of time passes (deemed a reasonable limit), the app will notify the user to take a break by opening a website. This website has a meme generator to help the user relax, and some inspirational quotes to help guide them on the right path. How we built it Our web app’s backend runs on Python Flask, which is used to manage the database as well as the log-in/sign-up system. We use Jinja2 as our template renderer to render information, such as name and hours worked, directly from the back end on the front end. The local component uses Electron, which is cross-platform and uses Node.js alongside HTML/CSS and JS. We also used many Google Cloud services like App Engine, Cloud SDK, Cloud Firestore, and Firebase Authentication in our hack. For the websites, we used Chart.js to create charts, and for the meme generator, we used Autocode to generate an array of GIFs from Giphy and then used Axios to transfer this data to JavaScript.
Challenges we ran into Tracking processes was one of our main challenges. We had to find a way to get a list of all the processes running on the machine, and then remove the unnecessary ones. Accessing an array generated by an Autocode webhook, along with getting useful data from a massive multidimensional array, was also tough. Technologies Used Python Flask NodeJs Electron HTML/CSS/JavaScript Google Cloud Platform App Engine Cloud SDK Firebase Authentication Cloud Firestore Autocode Chart.js Axios Accomplishments that we're proud of We are proud of creating a database of desired apps, which the employer could customize, allowing the app names to be matched with computer processes. A second accomplishment was combining so many different tools and technologies into one working product. Another feat was successfully using Autocode, a tool we had zero prior experience with. What we learned We learned to make use of the tools provided by sponsors, which made certain components much easier to create. We also realized that planning ahead and dividing up the parts allowed us to work efficiently. Our most important takeaway, though, is to have fun with the project without burning yourself out. Links (because try it out minifies urls) https://itsoveranakin.tech/login https://itsoveranakin.tech/download Meme generator: https://itsoveranakin.tech/break What's next for ItsOverAnakin Professional customization: Obviously our silly Star Wars theme will be removed in favour of a more professional theme, but it will still retain its main goal, to alleviate users' built-up stress. User improvement: Will track the user's usage each day and show how they improved over time. Machine learning: Get some accurate results on users’ productivity and how to increase output whilst also avoiding burnout. Built With autocode electron flask giphy nicepage node.js python Try it out itsoveranakin.tech itsoveranakin.tech
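The process-matching accomplishment described above (an employer-customizable database of desired apps, matched against the processes running on the machine) can be sketched as a simple set intersection. The app names and `.exe` handling here are illustrative assumptions, not the team's actual database contents.

```python
# Hypothetical sketch of matching running processes against the employer's
# database of work apps.
WORK_APPS = {"code", "slack", "chrome"}  # employer-customized allowlist

def classify_processes(running):
    """Split running process names into tracked work apps and everything
    else, normalizing case and a Windows-style '.exe' suffix."""
    names = {p.lower().removesuffix(".exe") for p in running}
    return sorted(names & WORK_APPS), sorted(names - WORK_APPS)

work, other = classify_processes(["Code.exe", "Slack.exe", "Spotify.exe"])
print(work, other)  # ['code', 'slack'] ['spotify']
```

The "remove the unnecessary ones" step from the challenges section corresponds to discarding the second list before logging hours.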
ItsOverAnakin
An app to help employees stop overworking when working remotely.
['Andy Li', 'Daksh Malhotra', 'Aryan Abed-Esfahani']
[]
['autocode', 'electron', 'flask', 'giphy', 'nicepage', 'node.js', 'python']
31
9,982
https://devpost.com/software/say-soup
Inspiration The shortage of yeast caused by COVID-19 is finally over, and you are trying to bake a new confection. Your hands are covered in flour and cream and cheese, you can’t quite remember the next steps, and your phone screen has turned off. Not wanting to wash your hands twice in a row, you try to awkwardly turn on the screen and scroll with your elbows, and fail miserably. “I wish mom was here to just tell me what to do!” Wish no longer, because Say Soup makes cooking hands-free, at least in terms of flipping through the recipe and instructions. Now you can cook from a recipe more easily, but don’t you just miss the experience of hosting cooking parties with your friends? During COVID-19, connection is more important, but also more difficult, than ever. Not to worry though, Say Soup to the rescue with its Add a Friend feature! Overall, what motivated us to build Say Soup is the desire and need to make cooking from a recipe easier, and more interesting, by enabling voice interactions and cooking with a friend remotely. What it does Say Soup is an interactive voice app cooking assistant that makes it easy to connect with and cook with others. You start by launching an app on your phone and submitting a link to an online recipe. The app then processes the recipe and determines what all the steps and ingredients are so that it can begin communicating through a voice assistant. How I built it We created an app with a Node/Express back-end and React front-end for the user to link a recipe and to read it before starting to cook. It used the Spoonacular API to process the recipe data into separate lists of instructions and ingredients. We then used Voiceflow to build the voice app portion, and used its Google Sheets integration to bring in custom data for any online recipe webpage. Challenges I ran into Making Voiceflow dynamic. Since none of us had any experience with Voiceflow, we started off by making the flow for a single hard-coded recipe.
As we learned more about it, we learned that you could use Google Sheets to make “arrays”. We populated the Google Sheet from the app and read the data in Voiceflow, where we “iterated” through the array with a Voiceflow counter using conditionals and sets. We also learned how to make API calls and parse JSON strings to obtain specific column values in Google Sheets. Accomplishments that I'm proud of We’re proud of building something together that works, in such a short amount of time! Each of us had a very different skill set on our team, but we came up with an idea that we all wanted to work on. Most importantly though, we had a lot of fun and got to learn something new. Built With express.js figma google-cloud node.js react spoonacular-api voiceflow Try it out github.com
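The Google Sheets "array" iteration described above, where a Voiceflow counter steps through recipe rows using conditionals and sets, is equivalent to this small sketch. The row schema is an assumption for illustration, not Spoonacular's exact output format.

```python
# Hypothetical flattened recipe rows, one instruction per row, as they might
# land in the Google Sheet that Voiceflow reads.
recipe_rows = [
    {"step": 1, "text": "Preheat the oven to 180C."},
    {"step": 2, "text": "Mix flour, yeast, and water."},
    {"step": 3, "text": "Bake for 25 minutes."},
]

def next_instruction(rows, counter):
    """Return the instruction at the counter, or None when the recipe ends."""
    if counter >= len(rows):
        return None
    return rows[counter]["text"]

counter = 0
while (line := next_instruction(recipe_rows, counter)) is not None:
    print(line)       # the voice assistant would speak this step aloud
    counter += 1      # Voiceflow increments its counter after each step
```

In Voiceflow itself, the `counter >= len(rows)` check is expressed as a conditional block and the increment as a "set" step, since the tool has no native loops.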
Say Soup
A social and interactive voice app cooking assistant.
['jessicazhang236', 'Estelle Chung', 'Selina Hsu']
[]
['express.js', 'figma', 'google-cloud', 'node.js', 'react', 'spoonacular-api', 'voiceflow']
32
9,982
https://devpost.com/software/flock-mhzpfd
(Screenshots: home screen, lobby, room-creation form, and a four-person video call.) Inspiration With the increasing integration of digital communication into our day-to-day lives, there comes an ergonomic risk from factors such as poor posture, eye strain, and poor physical health. As our group was bouncing one idea after another off each other in a video call, we realized how much the pandemic has impacted the world on a digital level. In fact, throughout the hackathon, many of us were guilty of taking little to no breaks, grinding out our vision in every detail. Additionally, prolonged exposure to digital devices may lead to burnout while using video software applications, and it has been one of the negative environmental factors many of us have had to adapt to in light of COVID-19. Our team set out to come up with an innovative solution that prompts users to take breaks while using video software; a solution that involves more than just willpower. Ultimately, this led to the creation of Flock. What it does Flock is a real-world implementation of a video platform inspired by the Pomodoro technique, which is used to build more effective studying and work habits. We have programmed the web application to accommodate each group’s preference for both the work and break duration. For instance, if you set work to 25 minutes and break to 5 minutes, every 25 minutes of work would be met by a 5-minute break, during which the video platform prompts you to do other activities that could involve meditation, exercise, and fun! By setting a mandatory break in relation to the time you work, you are balancing your psychological and physical wellbeing with online work and studying. How we built it Flock was built primarily with React.js as the frontend and Node/Express as the back end. Video streaming was achieved using the Twilio Programmable Video API, with Firebase handling real-time status and emoji updates.
Challenges we ran into In the beginning, there were a lot of errors with npm and getting the files to run through the git terminal. It was quite difficult to set up a Chrome extension due to the involvement of multiple languages and the need to constantly update it manually every time the code is changed. The presence of background and content scripts also adds a layer of complexity, as some functions cannot be executed in certain scripts. Overall, it was a fun journey, and we hope to further develop Flock's features following the hackathon. Built With bootstrap css ern firebase html javascript react twilio Try it out github.com github.com
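The per-group Pomodoro cycle described above (for example, 25 minutes of work met by a 5-minute break) can be expressed as a tiny scheduling function. This is a sketch of the idea under assumed defaults, not Flock's actual implementation.

```python
# Sketch of Flock's work/break cycle: given the room's configured durations,
# decide whether a given minute into the session is work time or break time.
def phase_at(minute, work_len=25, break_len=5):
    """Return 'work' or 'break' for a minute offset into the session."""
    cycle = work_len + break_len
    return "work" if (minute % cycle) < work_len else "break"

print(phase_at(10))  # work
print(phase_at(27))  # break
print(phase_at(30))  # next cycle starts: work
```

Because the function is pure, every client in the room computes the same phase from the shared start time and settings, which is one way the video platform could prompt everyone to break simultaneously.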
Flock
Redefining digital lifestyles one break at a time!
['Star Xie', 'Alvin Li', 'Lavan Sumanan', 'Rahma Gillan']
[]
['bootstrap', 'css', 'ern', 'firebase', 'html', 'javascript', 'react', 'twilio']
33
9,982
https://devpost.com/software/vichat
(Diagram: ML infrastructure.) Inspiration We have all experienced a massive shift in workplace dynamics over the past 4 months, which will likely become the new precedent for the foreseeable future. One of the largest hurdles of working from home is the lack of social interaction with coworkers, and the missed opportunities to meet new people and network with those outside of your direct team. We aim to tackle this issue head on, and make connecting with new people around the office an easy and habitual occurrence. What it does Our project integrates directly with Slack and uses advanced recommender systems hosted in Azure ML services to match similar individuals within the office. Every week, a new match is made within Slack, and employees will have an easy opportunity to schedule a virtual coffee chat and get to know someone new. How I built it The focus of our tech stack for this app has been to experiment with platform-based services. In other words, we wanted to use platform service providers such as Autocode and Azure to maintain our infrastructure and deployment steps, while we focused on making the actual features of each endpoint. As for the actual implementation, our frontend is contained in Slack. We used the Slack API extensively in conjunction with the Autocode platform, using everything from group creation to question answering and user retrieval. The Autocode platform is then linked with Airtable, which acted as a simple database. Autocode performed numerous operations on Airtable, such as fetching and inserting entries into the table. Finally, the brain of our hack is the logic and ML component, all of which is based in Azure. To start with, we hosted a recommendation engine inside Azure Machine Learning Services; we trained the model and tuned its hyperparameters via Azure Machine Learning Studio, which streamlined the process. Then, we served our trained model via Azure web services, making it available as a normal HTTP API.
We also built the backend with scalability in mind, knowing that in the future for large companies/user base, we would need to move off airtable. We also knew that for larger companies there would likely be high traffic. Hence, to ensure scalability we decided to use Azure serverless functions to act as a gateway, to direct incoming traffic, and to also allow easy access to any azure services we may add to our app in the future. Accomplishments that I'm proud of The thing that we were most proud of was the fact that we reached all of our initial expectations, and beyond with regards to the product build. Additionally, our platform is entirely based on API as a service, and contains almost zero infrastructure code allowing for easy implementation and a lightweight build, while also demonstrating the power of PaaS, showcased via Autocode and Azure. At the end of the two days we were left with a deployable product, that had gone through end to end testing and was ready for production. Given the limited time for development, we were very pleased with our performance and the resulting project we built. We were especially proud when we tested the service, and found that the recommender system worked extremely well in matching compatible people together. What I learned Working on this project has helped each one of us gain soft skills and technical skills. Some of us had no prior experience with technologies on our stack and working together helped to share the knowledge like the use of autocode and recommender algorithms. The guidance provided through HackThe6ix gave us all insights to the big and great world of cloud computing with two of the world's largest cloud computing service onsite at the hackathon. Apart from technical skills, leveraging the skill of team work and communication was something we all benefitted from, and something we will definitely need in the future. 
What's next for ViChat We hope to integrate with other workplace messaging platforms in the future, such as Microsoft Teams, to bring our service to as many offices and employees as we can! Built With airtable autocode azure slack Try it out github.com autocode.com
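The weekly matching described above, where a recommender groups similar employees for coffee chats, can be illustrated with a greedy cosine-similarity pairing. The team's real model runs in Azure ML, so the profile vectors and the pairing rule here are purely illustrative assumptions.

```python
import math

# Hypothetical sketch of ViChat's weekly matching: each employee's question
# answers become a vector, and the most similar unmatched pairs are grouped.
def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def weekly_pairs(profiles):
    """Greedily pair the most similar employees. profiles: name -> vector."""
    names = list(profiles)
    pairs = []
    while len(names) >= 2:
        a = names.pop(0)
        best = max(names, key=lambda n: cosine(profiles[a], profiles[n]))
        names.remove(best)
        pairs.append((a, best))
    return pairs

people = {"ana": (1, 0), "ben": (0.9, 0.1), "cam": (0, 1), "dev": (0.1, 1)}
print(weekly_pairs(people))  # [('ana', 'ben'), ('cam', 'dev')]
```

Once the pairs are computed, a bot could create a Slack group for each pair, which matches the group-creation usage of the Slack API mentioned above.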
ViChat
Tackling the lack of socialization in a work from home environment using advanced recommender systems hosted in Azure ML services to match employees together every week.
['Shirley Zhang', 'Hanz Vora', 'Shane Ding', 'Hunnain Atif']
[]
['airtable', 'autocode', 'azure', 'slack']
34
9,982
https://devpost.com/software/l-e-a-r-n-rv91cl
(Screenshots: video frames processed by the Azure Face API; in one, the subject is detected as not paying attention due to off-screen viewing.) Inspiration COVID-19 has drastically transformed education from in-person to online. While being more accessible, e-learning imposes challenges in terms of attention for both educators and students. Attention is key to any learning experience, and it can normally be assessed approximately by the instructor from the physical feedback of students. However, it is not feasible for instructors to assess the attention levels of students in a remote environment. Therefore, we aimed to build a web app that can assess attention based on eye tracking, body gesture, and facial expression using the Microsoft Azure Face API. What it does C.L.A.A.S. takes video recordings of students watching lectures (with explicit consent and ethics approval) and processes them using the Microsoft Azure Face API. Three features (eye tracking, body posture, and facial expression, each with sub-metrics) are extracted from the output of the API and analyzed to determine the attention level of the student during specific periods of time. An average attention score is assigned to each learner at different time intervals based on the evaluation of these three features, and the class's average attention score is calculated and displayed across time on our web app. The results would better inform instructors about which sections of the lecture gain and lose attention, enabling more innovative and engaging curriculum design. How we built it The front end of the web app is developed using Python and the Microsoft Azure Face API. Video streaming decomposes the video into individual frames, from which key features are extracted using the Microsoft Azure Face API. The back end of the web app is also written in Python.
With a literature review, we created an algorithm that assesses attention based on three metrics (blink frequency, head position, leaning) drawn from two of the above-mentioned features (eye tracking and body gesture). Finally, we output the attention scores, averaged across all students, with respect to time on our web app. Challenges we ran into Lack of online datasets and time limitations prevented us from collecting our own data or using machine learning models to classify attention. Insufficient literature to provide quantitative measures for the criteria of each metric. Decomposing a video into frames of images in a web app. Lag during data collection. Accomplishments that we're proud of Relevance of the project for education Successfully extracting features from video data using the Microsoft Azure Face API Web design What we learned Utilizing the Face API to obtain different facial data Computer vision features that can be used to classify attention What's next for C.L.A.A.S. A machine learning model, after collecting accurately labelled baseline data from a larger sample size Addressing the subjectiveness of the classification algorithm by considering more scenarios and doing more literature review Testing the validity of the algorithm with more students Improving web design and functionality Addressing limitations of the program from a UX standpoint, such as lower-resolution cameras and the position of the webcam relative to the face Built With azure flask html python Try it out github.com
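The rule-based scoring described above, which combines blink frequency, head position, and leaning into per-interval scores and averages them across the class, might look like this sketch. All thresholds and weights here are invented for illustration; they are not the values from the team's literature review.

```python
# Hedged sketch of a C.L.A.A.S.-style per-interval attention score built from
# three metrics. Thresholds and weights are illustrative assumptions.
def attention_score(blinks_per_min, head_yaw_deg, lean_deg):
    score = 100
    if blinks_per_min > 25:      # frequent blinking suggests fatigue
        score -= 30
    if abs(head_yaw_deg) > 30:   # head turned away from the screen
        score -= 40
    if abs(lean_deg) > 20:       # slouching or leaning out of frame
        score -= 20
    return max(score, 0)

def class_average(samples):
    """samples: one (blinks, yaw, lean) tuple per student for this interval."""
    scores = [attention_score(*s) for s in samples]
    return sum(scores) / len(scores)

print(class_average([(10, 5, 0), (30, 45, 0)]))  # (100 + 30) / 2 = 65.0
```

Plotting `class_average` per interval over the lecture timeline gives the instructor-facing chart the description mentions.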
C.L.A.A.S.
Calibrated Learning Attention Aware System
['Jack Jiao', 'Celina Shen', 'Brian Lian', 'Roy Lin']
[]
['azure', 'flask', 'html', 'python']
35
9,982
https://devpost.com/software/unmasked-covid-assistant
How our proposed solution works; Marketing Pitch / Idea; Mask Detection; Title Page; Autocode API Usage

Inspiration
The inspiration for our hackathon idea stemmed from an experience of one of our team members, who had recently been to the hospital. They noticed the large number of staff required at every entrance to ensure that patients and visitors had their masks on properly, as well as to ask COVID-19 screening questions and record their time of entry into the hospital. They thought about the potential problems and implications this might have, such as health care workers having a higher chance of getting sick due to more frequent exposure to other individuals, as well as the resources required to complete this task. Another thing discussed was the scalability of this procedure and how it could apply to schools and businesses. Hiring an employee to perform these tasks may be financially unfeasible for small businesses and schools, but the social benefit these services would provide would definitely help toward the containment of COVID-19. Our team decided to see if we could use a combination of machine learning, AI, robotics, and web development to automate this process and create a solution that would be financially feasible and reduce the workload on already hard-working individuals who work every day to keep us safe.

What it does
Our stand-alone solution consists of three main elements: the hardware, the mobile app, and the software to connect everything together.

Camera + Card Reader
The hardware is meant to be placed at an entry point for a business/school. It automatically detects the presence of a person through an ultrasonic sensor. From there, it adjusts the camera to center the view for a better image and takes a screenshot. The screenshot is used to make an API request to the Microsoft Azure Computer Vision Prediction API, which returns a confidence value for a tag.
(Mask / No Mask) Once the person is confirmed to be wearing a mask through AI, the individual is prompted to scan their RFID tag. The hardware checks the owner of the RFID ID and adds a check-in or check-out time to their profile in a cloud database (Firestore).

Mobile Application
The mobile application is intended for the administrator/business owner, who would like to manage the hardware settings and observe analytics. (We did not have enough time to complete that, unfortunately.) Additionally, the mobile app can be used to perform basic contact tracing through a request to a custom-made Autocode API, which checks the database and determines recent potential instances of exposure between employees based on check-in and check-out times. It also determines the employees affected and automatically sends them an email with the dates of the potential exposure instances.

The software
Throughout our application, we had many smaller pieces of software that were used to run our overall prototype. From the Python scripts on our Raspberry Pi that communicate with the database, to the custom API made on Autocode, there were many small pieces we had to put together for this prototype to work.

How we built it
For all of our team members, this was our first hackathon, and we had to think creatively about how we were going to make our idea a reality. Because of this, we used many well-documented, beginner-friendly services to create a "stack" we could manage with our limited expertise. Our team's background is mainly in robotics and hardware, so we definitely wanted to incorporate a hardware element into our project; however, we also wanted to take full advantage of this amazing opportunity at Hack The 6ix and apply the knowledge we learned in the workshops.

The Hardware
To make our hardware, we utilized a Raspberry Pi and various sensors we had on hand.
Our hardware consisted of an RFID reader, ultrasonic sensor, servo motor, and web camera to perform the tasks mentioned in the section above. Additionally, we had access to a 3D printer and were able to print some basic parts to mount our electronics and create our device. (Although our team has a stronger mechanical background, we spent most of our time programming, haha.)

Mobile Application
To program our mobile app, we utilized Flutter, a framework developed by Google that is a very easy way to rapidly prototype a mobile application supported by both Android and iOS. Because Flutter is based on the Dart language, it was very easy to follow along with tutorials and documentation, and some members had previous experience with Flutter. We also decided to go with Firestore as our database, as there was quite a lot of documentation and support between the two.

Software
To put everything together, we had to utilize a variety of skills and get creative with how we were going to connect our backend, considering our limited experience in programming and computer science. To run the mask detector, we first used some Python scripts on a Raspberry Pi to center our camera on the subject and perform very basic face detection to determine whether to take a screenshot to send to the cloud for processing. We did not want to stream our entire camera feed to the cloud, as that could be costly due to a high rate of API requests, as well as impractical due to hardware limitations. Instead, we used lower-end face detection to decide when a screenshot should be taken, and from there we sent it through an API request to the Microsoft Azure Computer Vision Prediction API, where we had trained a model to detect two classifiers (Mask and No Mask).
We were very impressed with how easy it was to set up the Azure Prediction API, and it really helped our team achieve reliable, accurate, and fast mask detection. Since we did not have much experience with back-end work in Flutter, we decided to utilize a very powerful tool, Autocode, which we learned about during a workshop on Saturday. With the ease of use and utility of Autocode, we created a back-end API that our mobile app could call with an HTTP request; through it, our Autocode program could interact with our Firebase database to perform basic calculations and achieve the basic contact tracing we wanted in our project. The Autocode project can be found here! link

Challenges we ran into
The majority of the challenges we ran into were due to our limited experience in back-end development, which left us with a lot of gaps in the functionality of our project. However, the mentors were very friendly and helpful, and assisted us in connecting the different parts of our project. Our creativity also helped us tie our pieces together. Another challenge we ran into was our hardware. Because of quarantine, many of us were at home and did not have access to lab equipment that could have been very helpful in diagnosing most of our hardware problems (multimeters, oscilloscopes, soldering irons). However, we were able to solve these problems, albeit using very precious hackathon time to do so.

What We learned
- Hackathons are very fun; we definitely want to do more!
- Sleep is very important. :)
- Microsoft Azure services are super easy to use
- Autocode is very useful and cool

What's next for Unmasked
The next steps for Unmasked would be to further develop the contact-tracing feature of the app, as knowing who was in the same building at the same time does not provide enough information to determine who may actually be at risk.
One potential solution would be to have employees scan their IDs by location as well, enabling the ability to determine whether any individuals were actually near those with the virus.

Built With 3d-printing autocode azure computer-vision custom-api firebase firestore flutter iot javascript machine-learning opencv python raspberry-pi solidworks ui
Try it out github.com autocode.com github.com
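The contact-tracing check described above (flagging employees whose check-in/check-out windows overlapped) reduces to interval-overlap detection. A minimal sketch, assuming a simplified record shape of (employee, check_in, check_out); the real app reads these records from Firestore via the Autocode API:

```python
# Hypothetical contact-tracing sketch: two visits overlap when each starts
# before the other ends. Flagged pairs would then be emailed the dates of
# the potential exposure. Names and times below are made up.

def overlaps(a_in, a_out, b_in, b_out):
    # Strict overlap: each interval starts before the other ends.
    return a_in < b_out and b_in < a_out

def potential_exposures(records):
    # records: list of (employee, check_in, check_out) with comparable times.
    flagged = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            name_a, in_a, out_a = records[i]
            name_b, in_b, out_b = records[j]
            if overlaps(in_a, out_a, in_b, out_b):
                flagged.append((name_a, name_b))
    return flagged

print(potential_exposures([("ana", 9, 11), ("bo", 10, 12), ("cy", 13, 14)]))
# [('ana', 'bo')]  (ana and bo were in the building at the same time)
```

The quadratic pair scan is fine at small-business scale; sorting records by check-in time would make it near-linear for larger datasets.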
Unmasked COVID Smart Assistant
Automation project to automate COVID-related safety procedures
['Samson Hua', 'Sahil Kale', 'Risat Haque', 'soumav maiti']
[]
['3d-printing', 'autocode', 'azure', 'computer-vision', 'custom-api', 'firebase', 'firestore', 'flutter', 'iot', 'javascript', 'machine-learning', 'opencv', 'python', 'raspberry-pi', 'solidworks', 'ui']
36
9,982
https://devpost.com/software/retroreaders_ht62020
Welcome to our demo video for our hack, "Retro Readers". This is a game created by our two-man team: myself, Shakir Alam, and my friend Jacob Cardoso. We are both heading into our senior year at Dr. Frank J. Hayden Secondary School and enjoyed participating in our first hackathon ever, Hack The 6ix, a tremendous amount. We spent over a week brainstorming ideas for our first hackathon project, and because we are both very comfortable with making, programming, and designing games with pygame, we decided to take it to the next level using modules that work with APIs and complex arrays. Retro Readers was inspired by a social media post about a text font proven to help mitigate reading errors made by dyslexic readers. Jacob found OpenDyslexic, an open-source font that does exactly that. The game consists of two game modes, aimed mainly at children, especially young children with dyslexia, who want to become better readers. We know that reading books is becoming less popular among the younger generation, so we decided to incentivize reading by providing a satisfying retro-style arcade reading game. The first game mode is a read-and-research mode where the reader can press a key on their keyboard, which leads to a Python module calling a database of semi-sorted words from the Wordnik API. The game then displays the word to the reader and reads it aloud using a TTS module. For the second game mode, we incorporated a point system. Using their points, players can purchase unique customizables and visual modifications such as characters and backgrounds. This provides a little dopamine rush for participating in a tougher game mode. The game mode itself is a spelling game where a random word is selected using the same Python modules and API. Then a TTS module reads the selected word out loud for the reader.
The reader must then correctly spell the word, without seeing it, to earn 5 points. The task we found most challenging was working with APIs, as a lot of them were not deemed fit for our game. We had to scratch a few APIs off the list for incompatibility reasons, including the Oxford Dictionary API and WordsAPI. Overall we found the game to be challenging in all the right places, and we are highly satisfied with our final product. As for the future, we'd like to implement more reliable APIs, and for future hackathons (this being our first) we'd like to spend more time researching viable APIs for our project. As far as business practicality goes, we see it as feasible to sell our game at a low price, including ads and/or paid cosmetics. We'd like to give a special shoutout to our friend Simon Orr for allowing us to use two original music pieces for our game. Thank you for your time and thank you for this amazing opportunity.

Built With css html5 javascript pygame python sass
Try it out www.retroreaders.tk
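The spelling mode described above (hear a word, type it blind, earn 5 points) can be sketched as a small round function. The word list and speak() stub below are stand-ins for the Wordnik API and the TTS module, and the 5-point reward is the only detail taken from the write-up:

```python
# Minimal sketch of Retro Readers' spelling mode. In the real game the word
# comes from the Wordnik API and speak() is a TTS call; here both are stubbed.

import random

WORDS = ["rhythm", "island", "answer"]  # would come from the Wordnik API

def speak(word):
    print(f"(TTS says: {word})")  # placeholder for the TTS module

def spelling_round(guess, word=None):
    word = word or random.choice(WORDS)
    speak(word)  # the player only hears the word, never sees it
    return 5 if guess.strip().lower() == word else 0

print(spelling_round("rhythm", word="rhythm"))  # 5
print(spelling_round("rythm", word="rhythm"))   # 0
```

Case-folding and whitespace-stripping the guess keeps the check forgiving for young players without accepting actual misspellings.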
RetroReaders_HT62020
Retro Readers: Arcade Reading Game
['Jacob Cardoso', 'Shakir Alam']
[]
['css', 'html5', 'javascript', 'pygame', 'python', 'sass']
37
9,982
https://devpost.com/software/covistics
Inspiration
We thought doing a COVID-related web app was both relevant and important.

What it does
It allows users to view local COVID pandemic data.

How I built it
Using Flask, Python, HTML, CSS, the Google Places API, and data from the New York Times.

Challenges I ran into
Processing the data was a little tricky because of some inconsistencies.

Accomplishments that I'm proud of
Building the back end by hand.

What I learned
JavaScript! I'd pretty much never used it before.

What's next for Covistics
Our Google Cloud deployment is slow and unable to update our graphs (it worked locally, but when fully deployed it kind of broke down), so doing something about that would be a first priority. Also, expanding outside of the US, and adding more specific data (like geo-locating active cases or the number of tests).

Built With flask github google-cloud google-places new-york-times python
Try it out github.com
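The New York Times publishes its county-level COVID counts as plain CSV (columns: date, county, state, fips, cases, deaths), so the back-end data step reduces to filtering rows for the user's locality. A sketch with an inline sample standing in for the real file; this is our illustration of the shape of the work, not the project's actual code:

```python
# Filter NYT-style county case data down to one locality. The SAMPLE string
# mimics the real us-counties.csv layout; values are made up.

import csv
import io

SAMPLE = """date,county,state,fips,cases,deaths
2020-08-01,Cook,Illinois,17031,1000,50
2020-08-01,Suffolk,Massachusetts,25025,800,40
2020-08-02,Cook,Illinois,17031,1100,52
"""

def cases_for(county, state, text):
    rows = csv.DictReader(io.StringIO(text))
    return [(r["date"], int(r["cases"]))
            for r in rows if r["county"] == county and r["state"] == state]

print(cases_for("Cook", "Illinois", SAMPLE))
# [('2020-08-01', 1000), ('2020-08-02', 1100)]
```

Note the NYT file reports cumulative cases, which is one of the inconsistencies to handle: daily new cases require differencing consecutive rows.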
Covistics
Get a well-presented and sleek dashboard to give you the overview of the pandemic in your area, along with helpful risk assessments.
['Vaskar Nath', 'Lukas Willsie']
[]
['flask', 'github', 'google-cloud', 'google-places', 'new-york-times', 'python']
38
9,982
https://devpost.com/software/teamote-wv7ics
Logo; Title Page; Student and Teacher Options; Student View; Teacher View

Video Demo https://youtu.be/_edJf_7ZcLk
Project GitHub https://github.com/AnselZeng/Teamote

Inspiration
As more and more classes move online due to the pandemic, it is crucial to make sure that students are able to learn properly and teachers are able to teach as they did in classrooms.

What it does
This web app provides real-time analysis of students' expressions and reports the average emotion back to the teacher, in order to provide an accurate representation of understanding and attention.

How I built it
Teamote contains three major views: a home page, a student's view, and an instructor's view.

Home page: The home page has two options: student or instructor. Users must input the classroom code into the text box to enter the classroom.
Student's View: Students are provided with a special classroom code. They can enter this code into the box and click the button to join the classroom. They must turn on their video in order to participate.
Instructor's View: Upon entering the classroom code, the teacher is taken to the video page. The teacher's view displays an average percentage of all the students' emotion data. This helps the instructor recognize whether the students are understanding the concepts properly or whether a concept should be further clarified.

We built this project using React, Django, and Azure Cognitive Services.

Challenges I ran into
One of the challenges was creating the backend of this project and implementing the different APIs and components to make the web app function correctly.

What we learned
As a team, we learned to work together and pitch different ideas and features to each other. We learned or refreshed our knowledge of various programming languages, APIs, and frameworks.

What's next for Teamote
With the increase in online learning due to the pandemic, many online tools that aid remote learning are becoming more popular.
Once our web app becomes more popular, we will introduce subscriptions for educational institutions in order to increase the longevity of our product and business. Built With azure django react Try it out github.com anselzeng.github.io
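The aggregation behind the instructor's view (averaging every student's emotion data into one class-wide reading) can be sketched directly. Azure Cognitive Services returns per-face emotion confidence dicts; the values and emotion names below are made-up examples:

```python
# Average per-student emotion-score dicts (the shape an Azure face/emotion
# call returns) into one class-wide dict for the instructor dashboard.

def class_emotion_average(per_student):
    # per_student: list of {emotion: confidence} dicts, one per student.
    totals = {}
    for scores in per_student:
        for emotion, value in scores.items():
            totals[emotion] = totals.get(emotion, 0.0) + value
    return {e: round(t / len(per_student), 3) for e, t in totals.items()}

avg = class_emotion_average([
    {"happiness": 0.8, "neutral": 0.2},
    {"happiness": 0.4, "neutral": 0.6},
])
print(avg)  # {'happiness': 0.6, 'neutral': 0.4}
```

Run per time slice, these averages give the teacher the "is the class following?" signal the write-up describes.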
Teamote
Bringing emotions back to the classroom!
['Nareshri Babu', 'Kyrel Jerome', 'Keyon Jerome', 'Ansel Zeng']
[]
['azure', 'django', 'react']
39
9,982
https://devpost.com/software/moodify-la2r4i
With months spent at home, many of us have spent many hours curating playlists to reflect our moods. For this hackathon, we decided to streamline this process and create a web app that can offer new playlists for people to listen to. Our project, Moodify, can detect the user's mood through auditory or written cues. Users also have the option to select a mood from a dropdown menu within the app. Moodify will then determine the user's mood before suggesting playlists the user may like. We built Moodify using React and Node.js. We also used a Microsoft Azure API to detect the user's mood, and the Spotify API to allow the app to modify and suggest playlists to the user. The app was styled using CSS and Bootstrap. Several challenges we faced included integrating the Azure and Spotify APIs; given how crucial they are to our app, it was important that they were connected and working. We're proud of how our final product looks. Through this project, we improved our web development skills and gained experience working with various APIs. In the future, we would love to create new playlists for users with unique songs that don't belong in their own playlists. This way, the user would be able to receive a brand new playlist and discover some new songs.

Built With azure bootstrap css html javascript node.js react spotify
Try it out github.com
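The glue between the two APIs is a mood-to-playlist mapping: a sentiment score from the Azure text analysis (roughly 0.0 negative to 1.0 positive) gets bucketed into a mood, which selects a Spotify search term. The buckets and search terms below are our own illustrative assumptions, not Moodify's actual mapping:

```python
# Bucket a sentiment score into a mood, then map the mood to a playlist
# search query for the Spotify API. Thresholds and queries are made up.

def mood_from_sentiment(score):
    if score >= 0.7:
        return "happy"
    if score >= 0.4:
        return "chill"
    return "melancholy"

PLAYLIST_QUERY = {
    "happy": "feel good hits",
    "chill": "lofi chill",
    "melancholy": "sad songs",
}

def playlist_query(sentiment_score):
    return PLAYLIST_QUERY[mood_from_sentiment(sentiment_score)]

print(playlist_query(0.92))  # feel good hits
print(playlist_query(0.15))  # sad songs
```

Keeping the mapping in one dict also makes the dropdown path trivial: a user-selected mood just skips the sentiment step and indexes the same table.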
Moodify
Web app that can detect the user's mood and use it to suggest relevant playlists
['Angel Li', 'Danny Chen', 'Ethan Wang', 'Yanni Wang']
[]
['azure', 'bootstrap', 'css', 'html', 'javascript', 'node.js', 'react', 'spotify']
40
9,982
https://devpost.com/software/wfh-planner
Inspiration
We asked ourselves: how could we make the lives of individuals working from home easier? Our response was to address the fundamental problem of remote work: work-life balance.

What it does
Work From Home (WFH) Planner is a web application that helps those working from home better manage their day by auto-generating a schedule for them to follow. Users put in the tasks they need to finish by the end of the work day, and we do some work behind the scenes to come up with a timeline that balances work and home life, so that users can maximize their productivity and stay mentally healthy.

How we built it
We used Flask in Python to set up our server and PostgreSQL as our database in the back end. On the other side, we used standard CSS with Bootstrap for our front-end tech stack.

Challenges we ran into
We ran into many challenges, such as user authentication, back-end logic issues, and hosting our application on the cloud.

Accomplishments that we're proud of
Despite all of the difficulties, we managed to get some basic functionality into our web application, and we were able to put into practice some healthy development practices, such as planning out our UI in a tool like Figma and mapping out our database.

What we learned
In hindsight, there were many things we had to do manually, since we took such a bare-bones approach to development; these might have been quicker had we used frameworks like React.js and libraries like Authlib.

What's next for WFH Planner
The best path forward would be to re-examine our tech stack and ask ourselves, based on what we've learned, how exactly we are going to implement the features we want. If we can see a clear path forward then we will take it, but if there seems to be an easier way using a certain library or technology not currently in our stack, then perhaps it would be a better idea to factor that in.
Built With bootstrap css3 figma flask git github html postgresql python Try it out github.com
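The auto-generation idea above (lay tasks out across the day and weave in rest) can be sketched as a greedy scheduler. The start hour and break policy below are illustrative assumptions, not WFH Planner's actual logic:

```python
# Sketch of a work-life-balancing scheduler: place tasks back to back from a
# start time and insert a short break after each one. Durations in minutes.

def fmt(minutes):
    return f"{minutes // 60:02d}:{minutes % 60:02d}"

def build_timeline(tasks, start_minute=9 * 60, break_len=15):
    # tasks: list of (name, duration_minutes)
    timeline, clock = [], start_minute
    for name, duration in tasks:
        timeline.append((fmt(clock), fmt(clock + duration), name))
        clock += duration
        timeline.append((fmt(clock), fmt(clock + break_len), "break"))
        clock += break_len
    return timeline[:-1]  # drop the trailing break after the last task

for slot in build_timeline([("write report", 60), ("emails", 30)]):
    print(slot)
# ('09:00', '10:00', 'write report')
# ('10:00', '10:15', 'break')
# ('10:15', '10:45', 'emails')
```

A production version would also cap total work time and push overflow tasks to the next day, which is where the "balance" part earns its keep.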
WFH Planner
WFH Planner is a tool for those working at home that struggle to find a work-life balance. The planner auto-generates a timeline for you to follow throughout the day balancing work with home life.
['Jesse Na', 'Ryan Li', 'Edward Gao']
[]
['bootstrap', 'css3', 'figma', 'flask', 'git', 'github', 'html', 'postgresql', 'python']
41
9,982
https://devpost.com/software/sonic-sensor
Landing screen; When no devices are close by; When at least one device is nearby; Start / Stop the app & Enable / Disable vibrations

Inspiration
In these challenging times, many social-distancing aids have been created to enforce or assist in social distancing. Most of them are based on Bluetooth or Wi-Fi; however, they have their own inherent problems due to the nature of those technologies. Most current social-distancing aids are:
- Inaccurate: most social-distancing apps use Bluetooth, which is unreliable for detecting exact distances due to its wide range (around 10 m).
- Not cross-platform: they do not usually work on both iOS and Android.
- Lacking in privacy: they pose a risk to users' privacy, as they collect sensitive data.
- Passive: they do not prevent; rather, they aim to mitigate.

Our Magic Solution
Our app sends and listens for near-ultrasonic sound to reliably detect any phone in a 2-3 m radius and alert the user. It uses the attenuation of the signal strength, along with other mechanisms, to detect people only in close proximity. Each device acts as a trigger by constantly sending waves that are only picked up by nearby devices, while also listening for waves it can pick up to alert its user.

How it Works
The entire application is characterized by 3 main processes:

Sending (Data to Sound)
- Encode unique characters as near-ultrasonic frequencies. The sound wave is designed to be extremely identifiable in any environment.
- Generate the sound using the oscillator from the Web Audio API.

Receiving (Sound to Data)
- Listen for a specific frequency range using a band-pass filter and identify frequencies with a real-time Fast Fourier Transform.
- Decode messages from the filtered audio, then move on to the decision-making process.

Estimation and Alerting
After the message has been decoded, 3 factors are considered:
- Energy: as one moves away from a device, there is significant attenuation of the signal. This is used to approximate the distance from the source.
- Data loss: the decoded message is compared against the original message to ensure there is at least a 60% match between the two.
- Frequency of occurrence: the number of times the messages are received within a given time frame.

Using the above factors, we determine if another device is within a phone's boundary.

Challenges we ran into
The first challenge we ran into was reliably sending data using sound from the browser. This took a lot of research and experimenting to get working. There were great resources online, but we still needed to make it work in the browser. Secondly, we needed a way to estimate whether another device is within 2-3 m of our device. There weren't any resources available for something like this, especially since we were limited to the APIs provided by the browser. We eventually managed to overcome this after a lot of experimenting.

Accomplishments that we're proud of
- Created a reliable, cross-platform data transfer module built for the web, supporting short-range data transmission using near-ultrasonic audio waves.
- Implemented it to aid in social distancing, with massive improvements over existing solutions in reliability and ease of setup and use, while also addressing some of their major flaws.

Tech Stack
- React for the front end
- Web Audio API
- The p5 audio library

What we learnt
We learnt a lot about using the Web Audio API in very unconventional ways. We spent a significant amount of time learning to encode data into sound waves and decode it back, and learnt different techniques for estimating distances between devices using this method. We also learnt how to make PWAs, which make the entire setup process for any application extremely easy.

What's next for Sonic Sensor
Currently, it is a PWA that was created solely as a proof of concept, for ease of testing and demonstration. As the core logic is language-agnostic, it can easily be ported to native applications for Android and iOS.
This will address the main current drawback which is the inability to run in the background (due to limited access to native API). [To get around it, we implemented a wake lock (in Chrome) which prevents the phone from sleeping] In addition to it, access to native APIs will enable us to add features like use of earphones while making this app use the speaker and use echo cancellation to further increase the accuracy and response time. With that said, even now, the app can run from your pocket however it requires the speaker to be kept facing outward for reliable performance. Built With javascript p5.js webaudioapi Try it out sonicsensor.surge.sh sonic.surge.sh github.com
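Since the write-up stresses that the core logic is language-agnostic, the three-factor decision step (energy, the 60% match requirement, and frequency of occurrence) can be sketched outside the Web Audio context. The energy threshold and minimum hit count below are illustrative assumptions; only the 60% match figure comes from the description:

```python
# Decide whether another device is inside the phone's boundary using the
# three factors described above. Thresholds other than the 60% match are
# made-up placeholders.

def match_ratio(decoded, expected):
    # Fraction of positions where the decoded message matches the payload.
    hits = sum(1 for a, b in zip(decoded, expected) if a == b)
    return hits / len(expected)

def is_nearby(energy, decoded, expected, hits_in_window,
              energy_threshold=0.5, min_hits=3):
    return (energy >= energy_threshold                 # strong enough signal
            and match_ratio(decoded, expected) >= 0.6  # >= 60% data match
            and hits_in_window >= min_hits)            # heard often enough

print(is_nearby(0.8, "HELLO", "HELLO", 4))  # True
print(is_nearby(0.2, "HELLO", "HELLO", 4))  # False (attenuated: too far)
print(is_nearby(0.8, "HXXXO", "HELLO", 4))  # False (only 40% match)
```

Requiring all three factors at once is what keeps the detector from alerting on distant but loud devices or on one-off decoding flukes.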
Sonic Sensor
A new way to detect people nearby. An offline Progressive Web App that uses near-ultrasonic waves to help you stay safe in public.
['AK5123 A', 'Mohana Sundar', 'Nimish Santosh']
[]
['javascript', 'p5.js', 'webaudioapi']
42
9,982
https://devpost.com/software/meeting-insights-nkrx97
Upload a recording of your video to get transcriptions, summaries and more! View all of your meetings and insights in one central location.

Inspiration
Insightful Meetings is inspired by the numerous virtual meetings brought about by current social-distancing measures, and by the lapses of attention of participants during meetings (missing information or forgetting to take notes). With the constant and increasing usage of video communication services like Zoom and Google Meet, our team was inspired to enhance the meeting experience by providing insightful meeting analytics.

What it does
Insightful Meetings is a web application that generates analytics from the audio content of video meetings. These analytics include key phrases, named/linked entity recognition, and sentiment analysis. By providing participants these insightful analytics via a clear and easy-to-use application, they will be able to recall any missed or forgotten information and analyze key points. This contributes to a deeper understanding of meeting topics and information.

How we built it
Insightful Meetings is built with a backend consisting of Python, the Microsoft Azure Text Analytics API, and the Google Cloud Speech-to-Text API. The frontend (web app) is built using React.js, Flask for web services, HTML, and CSS.

Challenges we ran into
The challenges we encountered included making use of automatic text summarizers and not having enough experience with React.js.

Accomplishments that we're proud of
Firstly, we are proud of implementing an application that is functional and serves a useful purpose. We met our goal of creating something useful, especially something that helps alleviate pressure and assists with work in quarantine/remote settings. Secondly, we are proud of the integration of our backend (APIs) with our frontend (React.js).
What we learned Throughout the creation of this project, we learned different technologies like Microsoft Azure, React.js, and gained experience working with different APIs. What's next for Meeting Insights Insightful Meetings will further enhance analytics and provide suggestions based on the analysis of analytics. In the future, implementing a desktop program will automatically process every new meeting on any video/audio communication service and display insights and suggestions on a dashboard. Built With azure css google-cloud html javascript microsoft python react speech-to-text sqlite text-analytics Try it out github.com
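To illustrate the shape of the key-phrase step in the pipeline above: in the real app the transcript comes from Google Speech-to-Text and the key phrases from the Azure Text Analytics API, but a naive frequency count over non-stopwords shows the same input/output contract. This toy stands in for the API calls and is not the project's method:

```python
# Toy key-phrase stand-in: most frequent non-stopword tokens in a transcript.
# A real deployment would call the Azure Text Analytics key-phrase endpoint.

from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "of", "and", "we", "will", "is", "on"}

def key_phrases(transcript, top_n=2):
    words = [w.strip(".,").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

print(key_phrases("We will ship the beta on Friday. The beta demo is Friday."))
# ['beta', 'friday']
```

Swapping this function's body for an HTTP call to the hosted service leaves the rest of the pipeline (transcribe, analyze, display) unchanged, which is what makes the backend easy to integrate with the React front end.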
Insightful Meetings
Leverage remote meeting experience by providing insightful meeting analytics!
['Safeerah Zainab', 'Alex Lu', 'JASON LIU', 'Jennifer Chen']
[]
['azure', 'css', 'google-cloud', 'html', 'javascript', 'microsoft', 'python', 'react', 'speech-to-text', 'sqlite', 'text-analytics']
43
9,982
https://devpost.com/software/farm-to-table-5wmdaj
Inspiration
Introducing Farm to Table: a collaboration platform connecting Canadian farmers and volunteers to food banks.

What's the problem?
Throughout Canada, farmers are dumping large amounts of crops due to lower demand from the food industry. Farms are forced to continue growing crops: if demand goes back to normal, they can't risk getting caught behind a crop cycle. This means that as farmers harvest, they are forced to dump their crops, as they're too expensive to ship and there's no demand for them.

Where we fit in
We know farms whose sales are mainly retail are suffering because of the low demand in the food industry. Crops are very expensive to transport for the already struggling farming sector. And we have many people currently sitting at home with nothing to do!

Our Solution
Participating farmers could give this extra produce to food banks accepting fresh produce, or food banks could work to buy shipments directly from farms, but transporting produce is expensive: the farming industry is already in trouble due to low sales, and food banks are struggling with the current COVID response. This is where we step in, offering a service to connect participating farmers, volunteers, and local food banks.

What it does
Farm to Table has 2 use cases: the volunteer and farmer perspectives. Farmers can sign up through the registration system and enter available times into a calendar. Volunteers can then browse through the available times of various farms, sorted by distance from the volunteer's location. After a time is selected, the farmer is notified, a Google Calendar event is generated for the volunteer, and a Google Maps route is generated and attached to the calendar invite.

How we built it
Backend: Python, Flask
Front End: JavaScript, React
APIs: Google Calendar, Google Maps Geocoding, Google Maps Distance Matrix, Google Maps URL Schemes

Challenges we ran into
So, as with any project, there are always unforeseen challenges.
For us, one of the largest challenges was integrating multiple moving parts into one cohesive system. There are multiple different APIs and services that we leverage, and they all need to work harmoniously together. This was especially an issue when dealing with date-time objects that were formatted differently between JavaScript, Python, and MySQL. Debugging why some SQL commands/queries were failing was also challenging! Finally, while intuitive, the Google Maps Geocoding and Distance Matrix APIs were new to us, so a lot of initial debugging was involved!

Accomplishments that we're proud of
We managed to get a prototype of Farm to Table working end to end! The system is fully functional, and all the features we planned on including in this project have been implemented.

What we learned
- Using Google Maps URL schemes to generate routes is a lightning-fast, lightweight, and cross-compatible way of communicating directions from point A to point B (and in our case, from point A to B to C).
- The Google Calendar API is a fast way to coordinate events across multiple participants.
- Integrating react-big-calendar with custom event objects.

What's next for Farm to Table
A separate interface for food banks! Food bank collaboration could include support for displaying available pick-up times and showing which types of produce each food bank accepts. Finally, adding additional route-tracking information, so that volunteers can give more accurate updates to farmers and food banks.

Built With flask google-cloud google-distance-matrix google-geocoding google-maps mysql python react
Try it out github.com
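The A-to-B-to-C route links mentioned above rely on the Google Maps URL scheme: a plain HTTPS link with origin, destination, and waypoints parameters, needing no API key on the client. A sketch of building such a link for a volunteer trip (home, then the farm pickup, then the food bank); the addresses are made up:

```python
# Build a Google Maps directions link (URL scheme: api=1, origin,
# destination, waypoints) to attach to the volunteer's calendar invite.

from urllib.parse import urlencode

def route_url(origin, waypoint, destination):
    params = urlencode({
        "api": "1",
        "origin": origin,
        "destination": destination,
        "waypoints": waypoint,  # the farm pickup, between A and C
    })
    return f"https://www.google.com/maps/dir/?{params}"

print(route_url("100 Queen St W, Toronto",
                "Willow Farm, Milton ON",
                "Daily Bread Food Bank, Toronto"))
```

Because it is just a URL, the same link opens in a browser, the Maps mobile app, or from inside a Google Calendar event, which is the cross-compatibility the team highlights.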
Farm to Table
Farm to Table - A collaboration platform connecting Canadian farmers and volunteers to food banks
['Julia Paglia', 'Amer Alshoghri', 'Joshua Abraham']
[]
['flask', 'google-cloud', 'google-distance-matrix', 'google-geocoding', 'google-maps', 'mysql', 'python', 'react']
44
9,982
https://devpost.com/software/fooo
Inspiration
Due to recent events, companies are continuing to encourage or require remote work. One of the downsides of working remotely is the loss of connection you feel with your co-workers. From being able to turn around and see that a co-worker has left for a quick break, to now having to guess whether or not they're ignoring you on purpose on Slack, many people are finding remote work challenging as a result. Our project, FOOO, attempts to address this problem.

What it does
Our hack allows co-workers to communicate their statuses to one another without explicitly pinging their team. Unless you're going to be AFK (away from keyboard) for an extended period of time, it's not likely that you're going to update your team about your whereabouts (i.e., going to the bathroom or grabbing a quick snack). This is especially true for new employees at companies, who tend to be more reserved. We wanted to mimic an office setting, where you'd normally be able to see what people are up to around the office.

How we built it
We used React (JavaScript) to outline the components in our project, with Material-UI for styling. We used Konva, a 2D canvas library, to dynamically render the office spaces and avatars. Finally, of course, we made use of the Spotify Web API to add the Spotify information.

Challenges we ran into
One of us hadn't touched React in a while, so it was definitely difficult to pick up again. Additionally, new technologies like Konva and the Spotify API come with a learning curve, especially Konva, which heavily uses HTML canvas elements, something we had never worked with before. Of course, the time constraints of a hackathon were also a challenge for us.

What's next for FOOO
The most important aspect of this project we want to address is making it live, to allow multiple people to use it at the same time!
Some nice-to-haves are to allow for more customization and to have more extensions, such as a Slack integration for messaging! Built With html javascript Try it out github.com danielaraujo.dev
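The core behaviour FOOO describes above, inferring whether a co-worker is around without an explicit ping, can be sketched as follows. This is an illustrative plain-Python model, not FOOO's actual code (the project is built in JavaScript); the class name, idle thresholds, and status labels are all assumptions.

```python
import time
from dataclasses import dataclass

# Thresholds in seconds; these values are assumptions, not FOOO's.
ACTIVE_WINDOW = 60        # input within the last minute: "active"
SHORT_BREAK_WINDOW = 300  # idle up to five minutes: "stepped away"

@dataclass
class Coworker:
    name: str
    last_activity: float  # Unix timestamp of the last input event

    def status(self, now=None):
        """Infer a presence status from idle time instead of explicit pings."""
        now = time.time() if now is None else now
        idle = now - self.last_activity
        if idle <= ACTIVE_WINDOW:
            return "active"
        if idle <= SHORT_BREAK_WINDOW:
            return "stepped away"
        return "AFK"
```

The point of the design is that nobody has to announce a bathroom break; the avatar's status simply decays with idle time.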
FOOO
To help with the loss of connection you feel working remotely, FOOO allows co-workers to communicate their statuses to one another without explicitly pinging their team on a virtual office setting.
['Jeanie Ng', 'Daniel Araujo']
[]
['html', 'javascript']
45
9,982
https://devpost.com/software/yeetsaber
Inspiration We were looking for web multiplayer games for working remotely, some friends like to play Beat Saber for exercise, and some wanted to play Beat Saber but didn't have a VR headset. What it does Yeet Saber is a web based multiplayer game that uses phone orientation/rotation data and websockets to enable playing Beat Saber with your phone as the controller for hitting blocks. Also, you can create and join rooms with a room ID to play the same Beat Saber song as your friends and see their scores, and upload official Beat Saber maps to play. This would be beneficial for de-stressing and taking breaks when working remotely, and getting some stretching and exercise. How we built it We obtained the device orientation using the deviceorientation event. Then, we forwarded this to our backend server. Our backend was written in Node.js. It acts as a proxy between people's device controllers and their desktop browsers. It also allows people in rooms to see each other's scores and states. It sends controller orientation data to the desktop it was paired with. The desktop page then uses Three.js to display 3D blocks. It also allows room hosts to upload Beat Saber maps so that everyone can play on the same map at the same time. Challenges we ran into We have never used DeviceOrientationEvents before and don't have that much experience with quaternion 3D rotations, so we spent a long time figuring out what the quaternions from the phone's orientation represent in terms of the x/y/z rotation of the blocks in game. We could only get the relative orientation of the phone and not the absolute position in 3D space, so we couldn't create all Beat Saber features like obstacles. We had to account for the fact that, for example, when you point the top of your phone to the left, there are multiple orientations which should all count as "rotating left" (screen facing up, down, toward you, away from you). 
Additionally, it turns out that the compass heading returned by DeviceOrientation events is not consistent and drifts. Therefore, we had to make our calculations insensitive to the phone's compass heading. Hit detection was also fairly difficult, but we solved it by computing the cross product of the current rotation vector (where the top of the phone is pointing) and the previous animation frame's rotation vector to obtain the angular velocity. We then took this velocity and computed the dot product with each block's expected angular velocity. If these two vectors line up, it represents a hit. Another challenge was recording the demo, since streaming and video call platforms have a lot of latency but we wanted to show the multiplayer features. Accomplishments that we're proud of We got more experience with 3D rendering in Three.js and networking with websockets. Also, we actually got some code done despite having to work remotely and with only two people. What we learned Too many console.logs will crash Chrome. Chrome and modern browsers do a lot of things to prevent spam (such as preventing autoplay of media). What's next for YeetSaber More lights and colours of blocks, just like the light effects of Beat Saber. Improved rotation detection, possibly with position tracking with the help of AR systems. Built With javascript node.js three.js webgl websockets Try it out github.com
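The hit-detection math described above can be sketched in a few lines. This is an illustrative plain-Python version of the cross-product/dot-product idea (Yeet Saber itself does this in JavaScript each animation frame); vectors are (x, y, z) tuples and the threshold value is an assumption.

```python
def cross(a, b):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two 3D vectors."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_hit(prev_dir, curr_dir, expected_angular_velocity, threshold=0.5):
    """Approximate the swing's angular velocity as cross(prev, curr); the
    swing counts as a hit when it lines up with the block's expected
    angular velocity (positive dot product above a threshold)."""
    angular_velocity = cross(prev_dir, curr_dir)
    return dot(angular_velocity, expected_angular_velocity) > threshold
```

If the two vectors point the same way the dot product is large and positive; an opposite swing gives a negative dot product and is rejected.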
Yeet Saber
Web multiplayer Beat Saber using your phone
['Jacky Liao', 'Cindy Wang']
[]
['javascript', 'node.js', 'three.js', 'webgl', 'websockets']
46
9,982
https://devpost.com/software/phisherman
PhisherMan Logo Inspiration Akash expressed interest in creating a hack that integrated machine learning to assist in catching fraudulent bank transactions. Inspired by this proposal, Alex suggested a new idea that would assist in combating fraud and identity theft directly by tackling the widespread problem of phishing. Having worked as a phishing mitigation specialist for a company that managed phishing reports for large institutions, Alex was more familiar with the technical requirements and the workflow of phishing threat management. Identifying phishing emails and filing incident reports is costly, both in terms of time and money. A solution was needed to keep people safe from phishing threats in a timely and inexpensive manner! And thus, PhisherMan was born. What it does PhisherMan is a phishing threat management tool that receives reports of potential phishing threats, quickly filters through spam and automatically files malicious pages in its internal database for further action. When PhisherMan receives a suspicious email, it uses a machine learning algorithm to determine whether the email is indeed a phishing threat or just spam. If the system classifies the email as a phishing threat, it creates an incident and stores it in a database to be handled and mitigated. This saves a lot of manual labor so that serious threats can be identified and addressed as soon as possible. How we built it Akash was responsible for creating and training the deep learning model as well as stitching the application together and deploying it to the cloud. Alex was responsible for creating the basic backend REST API in Django for management of incidents. Tina and Aariana were responsible for the front-end and graphics. 
Tech Stack we used: Tensorflow for deep learning model Django + REST Framework for backend React and Material-UI for frontend Conda for package management Docker for containerization Photoshop for logo creation & mockups AWS for deployment Git (Github) for version control Audacity for voice recording Sony Vegas Pro for video editing and production OBS Studio for demo screen capture SQLite for database Postman Challenges we ran into Having come from various backgrounds and experience levels, the biggest challenge shared by all team members was learning new tech stacks in an accelerated manner to tackle different aspects of the hack. Aside from that, Akash faced more specific challenges - for example, finding a phishing dataset was quite challenging (after all, they aren’t readily available), as well as ensuring the code was platform agnostic and resolving version conflicts in conda and deployment. Accomplishments that we're proud of Managing to complete a project with a remote team of people with vastly different experiences, using our talents in creative ways and learning new technologies at an accelerated pace. What we learned Each one of us learned something new in regards to the tech stack. Alex has never worked with Django before and had done web development quite some time ago, making this experience a nice refresher and a dive into a new framework. For Akash, practice with machine learning and usage of new DevOps technologies was a big takeaway from this hack. Tina was able to use her creative side to design mockups and graphics for the front end of the application as well as take a dive into new front end technologies. Aariana built upon her front-end development skills, using React for the first time to create the front-end of the application. 
What's next for PhisherMan In the future, PhisherMan can be expanded to grab the contents of potential phishing threats and identify if they are specifically targeting a protected client, or even a specific individual in case of spear-phishing. If that is the case, PhisherMan will be able to cross-reference previous incidents and classify similar pages as part of the same attack. In addition, PhisherMan’s functionality can be extended to find and classify links as redirects, malicious domains, or hijacked websites. These, in turn, can be automatically managed and addressed through an internal reporting system working closely with internet service providers, registrars and international internet security agencies. Built With amazon-web-services audacity conda django docker git github material-ui obs photoshop react sony-vegas-pro tensorflow Try it out github.com github.com
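The triage flow described above (score a reported email, then either file an incident or discard it as spam) can be illustrated with a toy scorer. To be clear, PhisherMan's real classifier is a TensorFlow deep learning model; the phrase list, weights, and threshold below are made-up stand-ins that only show the report-to-incident pipeline shape.

```python
# Illustrative triage sketch only: the real system uses a trained
# TensorFlow model, not a keyword list. All weights here are invented.
SUSPICIOUS_TERMS = {
    "verify your account": 3.0,
    "urgent": 1.5,
    "password": 2.0,
    "click here": 2.0,
}

def phishing_score(email_body):
    """Sum the weights of suspicious phrases found in the email body."""
    body = email_body.lower()
    return sum(w for term, w in SUSPICIOUS_TERMS.items() if term in body)

def triage(email_body, threshold=3.0):
    """File an incident when the score crosses the threshold, else mark spam."""
    return "incident" if phishing_score(email_body) >= threshold else "spam"
```

In the real system, the "incident" branch is where the Django REST backend stores the report for mitigation.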
PhisherMan
PhisherMan is a phishing threat management tool that receives reports of potential phishing threats, quickly filters through spam and files malicious pages in its internal database.
['Alexander Mastryukov', 'Aariana Singh', 'Akash Singh', 'Tina Zhang']
[]
['amazon-web-services', 'audacity', 'conda', 'django', 'docker', 'git', 'github', 'material-ui', 'obs', 'photoshop', 'react', 'sony-vegas-pro', 'tensorflow']
47
9,982
https://devpost.com/software/fire-insurance-finder-fif
FIF Inspiration Fire Insurance Finder, or FIF, was inspired by the large number of wildfires in Canada, which burn approximately 2.5 million ha of land each year according to the Canadian Wildlife Federation. We wanted to design FIF to identify areas of crisis, as well as help deal with insurance claims efficiently by showing a map of the damaged areas. What it does FIF has a 97% true-positive rate on its confusion matrix and automates the arduous structure-damage inspection process, which is currently done manually. Through features such as real-time damage inspection, it can improve post-fire recovery and help homeowners as well as insurance policy makers. FIF allows users to view a real-time map of the area affected by fires and shows which houses were affected by the fire and which were not as the user moves a slider across the screen. The houses affected by the fire show up in red. How we built it Fire Insurance Finder does the following when a new user uploads an aerial survey: FIF takes in an aerial survey through a dedicated Google Drive and stores it in the format required for running the model. Partitioning: In this step, FIF uses a pre-trained model to segment all structures in a landscape. Clipping: Once the structures are segmented, then based on the average building size in the scene, square scenes centered on each structure are cropped from the landscape. Classification: The cropped images are then classified as “damaged building” or “not damaged building”. For our classification model we performed transfer learning using PyTorch. More specifically, we used a ResNet18 network architecture pre-trained on ImageNet. Rebuilding: After it has classified each cropped image, FIF moves the images from pixel space to geospace by remapping the cropped images onto the original landscape scene. Visualization: The web app is updated with the new aerial survey, with damage shown in red. 
Challenges we ran into It was initially hard to pick an idea because we wanted to make something that could be used for insurance as well as for social good. We also had trouble initially with image processing and learning Google's API, but we overcame that. Accomplishments that we're proud of We are proud of being able to provide a real-time aerial view that can provide a clear big picture in the aftermath of a devastating fire. We are also proud of coming together as a team in a short amount of time to build FIF. What we learned We learned how to train models, as well as use Google's API. Some of our teammates also learned React programming for the first time. What's next for Fire Insurance Finder (FIF) As a team, we would like to continue developing FIF as an open-source platform, since this project can truly help multiple stakeholders affected by devastating wildfires. Some of our plans include the following: reach out to policy makers as well as first responders to test and improve FIF; use data-driven approaches to augment training examples to improve detection accuracy; improve image resolutions. Built With colab cuda google-maps pytoch react Try it out github.com
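The "Clipping" step of the pipeline above can be sketched as a small geometry helper: given a structure's centroid and the average building size in the scene, compute a square crop window clamped to the landscape bounds. This is a hedged illustration; the function name and the clamping policy are assumptions, not FIF's actual code.

```python
def crop_window(cx, cy, avg_size, img_w, img_h):
    """Return (left, top, right, bottom) of a square crop of side avg_size
    centered on the structure centroid (cx, cy), shifted inward as needed
    so the window stays inside the img_w x img_h landscape."""
    half = avg_size // 2
    left = min(max(cx - half, 0), img_w - avg_size)
    top = min(max(cy - half, 0), img_h - avg_size)
    return (left, top, left + avg_size, top + avg_size)
```

Each such window is what the ResNet18 classifier would see; keeping the crops square and uniformly sized is what lets them be remapped back onto the landscape in the "Rebuilding" step.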
Fire Insurance Finder (FIF)
Fire Insurance Finder helps deal with fire insurance claims efficiently
['Manik Chaudhery', 'Iris Guo', 'Bruce He', 'Dea Chaudhery']
[]
['colab', 'cuda', 'google-maps', 'pytoch', 'react']
48
9,982
https://devpost.com/software/a-brief-introduction-to-quantum-mechanics
Animation 1 - Non-Colliding Nuclei Animation 2 - Quantum Tunneling Inspiration We were recently introduced to the world of Explorable Explanations (by Nicky Case!) and were immediately enthralled by its educating yet engaging nature. Inspired, we decided to present the concept of quantum mechanics to better engage the youth who have not yet encountered this phenomenon in class. What it does The code that we built is a web page for educational purposes. It talks about the applications and different phenomena of quantum physics in a very simple way so that even high school students can understand it clearly. The webpage contains simple animations for demonstrations of the phenomena. How I built it We built the webpage using HTML and CSS and created small animations using JavaScript and an open-source library (EaselJS and TweenJS from create.js). Challenges I ran into Jennifer: Working through the eye strain! I had some experience with HTML and CSS prior to this hackathon, but having to relearn the basics and take on a completely new language (JavaScript) was an incredible challenge. From the numerous crash courses consumed to the constant research and debugging I’ve had to do in the past two days, I realized just how different a language JavaScript is compared to my experience with Python. Karen: As a beginner programmer, trying to learn coding was a huge challenge. I had limited knowledge of HTML, and no experience with CSS or JavaScript; the latter being the most difficult to understand! This hackathon was also my first concrete experience with programming and an introduction to software development as a whole. Debugging was also a pain. Yudi: I have never dealt with a front-end language before. In my past experience, I have only worked with C and Python, and software like MATLAB. Learning three languages in a short amount of time was the biggest challenge that I had. 
Our team decided on the topic and language the night before the hack, and all three of us binge-watched loads of Youtube crash courses. This was also my very first hackathon experience! Accomplishments that I'm proud of Jennifer: Having actually learned and implemented a new programming language in such a short time span. There were many things on our page that I thought we wouldn’t be able to accomplish. It was an absolute pain trying to create our small animations and there were many times when I felt like giving up. We also played around with user input (and created an unrelated drawing game on the side)! But in the end, I’m proud of all the work we’ve done and for continuing to persevere through our many hurdles. As simple as our web page may be, it was the result of a lot of hard work and it was a great learning experience for all of us! Karen: In preparation for this hackathon, I attempted to learn three languages over the course of a few days. Software development was always an intimidating field to me, but I am proud to be able to understand the basics of coding - even though I’m only familiar with three languages thus far. As a non-STEM student, coding is something I’m glad to have pursued! Yudi: I was kind of frustrated that I didn’t understand some of the things that were going on, but in the latter half of the hack, I was able to overcome some of the confusion. Although I feel I haven’t contributed that much to the team, I did learn a lot; not just from the crash courses, but also from my teammates. We had many things that we had to clarify and google, but in the end, we gritted our teeth and pulled through. What I learned Jennifer: I hate semicolons and have a new appreciation for JavaScript. I also learned that centering divs is unnecessarily painful and have fallen in love with documentation. As well, I learned about my dire need for caffeinated chocolate and miss it dearly. On a more serious note, I learned a lot about web development! 
JavaScript is a more flexible language than I had anticipated and its capabilities astounded me. As I struggled to stay awake, I tried to deploy our page to the web and it somehow worked (thanks Surge). Karen: Beyond understanding the basics of coding, I also familiarized myself with the work culture surrounding software development. The community is incredibly talented and ambitious, which instills fear within me. Yudi: I observed that humans work incredibly fast and efficiently under pressure :) I really appreciate how the three languages come together so smoothly. But then again, I need to learn more about all three languages to use them fluently. Through this experience, I realized how little I know about coding, and how vast and versatile this field actually is. What's next for A Brief Introduction to Quantum Mechanics As we work on our JavaScript, HTML, and CSS knowledge, we can further develop this website to be more interactive and pretty. In addition, we can add other topics to the web page. Built With create.js css3 html javascript Try it out qm-introduction.surge.sh
A Brief Introduction to Quantum Mechanics
Inspired by Explorable Explanations by Nicky Case, this webpage aims to introduce the basics of Quantum Mechanics to youth.
['Jennifer Tram Su', 'Karen Kan', 'yudi7863 Wang']
[]
['create.js', 'css3', 'html', 'javascript']
49
9,982
https://devpost.com/software/veil
After clicking "Veil", reveals true mission and purpose About Page, animation logo below Shop section Phoneline Resources hidden under "Knitted Scarf" product After clicking, shop section reveals actual titles of resources After clicking "Find Stores", reveals "Find Shelters", markers can be clicked for more info about each shelter Google Maps API with markers Home page, scrolled down Home page Inspiration As everyone around the world is affected by 2020's global pandemic, one overlooked demographic is domestic abuse victims. According to the New York Times, with families in lockdown worldwide, hotlines are lighting up with abuse reports, leaving governments trying to address a crisis that experts say they should have seen coming. In Ontario, resources and physical centres have been limited due to little budget, risk of COVID, and lack of attention and priority from staff. As we should all be supporting the women in our community, our team decided to address this issue in the form of Veil. What it does Veil is a web application disguised as a clothing store site. This form of concealment allows any user to access the website without fear of their abuser finding out, either in person by watching them or through internet search histories. The “Shop” section of the site contains three types of resources, each disguised under a product. These resources are phone lines, treatment centres and treatment services. Each resource is under a fake product name, which, when clicked, reveals the real title. The “About” page features a short description about the clothing store, but once clicked it reveals the real description of Veil, for users who are curious to know what the site is and its purpose. Under the “Find a Store” tab, there is an interactive Google Maps that has various markers hidden as “Store Locations”, but in reality are women’s shelters in the GTA. 
Once clicked, they show an info box with more information about the shelter, including phone number, website and address. How we built it The site was built using HTML, CSS, and JavaScript. We also used the Google Maps JavaScript API to display locations and hosted the website through GitHub Pages. We used Visual Studio to code simultaneously, as well as Git and GitHub for source version control. Challenges we ran into We were planning to implement a web scraping tool that uses the Puppeteer Node library but had issues connecting the back end, front end and TypeORM. This would have made the location-based resource finding more accurate and plentiful. A challenge throughout this project was finding methods to make this web app as intuitive as possible for the user, while also concealing it from abusers. We were able to successfully do this by creating on-click event listeners to change the text on several pages. Veil has an easy-to-use UI that allows for completely discreet viewing. Accomplishments that we're proud of This hackathon has been a huge learning experience for all of us, but what we are most proud of is having a functional model of an idea that we came up with less than 36 hours ago! This web app is something we know has the capacity to help a very vulnerable demographic in Ontario, and even beyond. We're so proud to be using our tech skills to help make a change in the world. What we learned We were able to learn the basics of web scraping, how to use the Google Maps API, hosting a site, as well as using Git and GitHub for version control! What's next for Veil Next for Veil, we hope that we can scale it by using effective marketing to spread the word to domestic abuse victims without abusers having knowledge of this. We would also want to add our web scraping feature so the location-based resource finding is more accurate. Built With git github google-maps html/css javascript visual-studio Try it out shopveil.tech
Veil
Secure resource space for domestic abuse victims
['Tara Rafi', 'Jennifer Zhang', 'takore05 Kore']
[]
['git', 'github', 'google-maps', 'html/css', 'javascript', 'visual-studio']
50
9,982
https://devpost.com/software/time-s-up
landing page create meeting page join meeting page timeline page about page mockups mockup close-up wireframes Airtable What it does These days, everyone is working remotely. We know from personal experience how easy it is for meetings to go off on tangents, or go on far longer than we intended. We want to help teams better organize their meetings by visualizing the amount of time spent per topic. Each topic is represented as a block on a timeline, like a weekly calendar, but on a more granular time scale. The block that represents each topic is sized proportionally to the time allotted to that topic, which makes it intuitive for the user to prioritize the attention spent on certain topics. Plans included allowing users to record notes within the app, as well as seeing a red line representing the current time on the timeline. Plus, users are meant to be notified via desktop notification about when to switch topics, to avoid spending too much time on a single topic. However, we weren't able to implement these features due to time constraints. Regardless, we came pretty far and are excited for you to see it! How we built it First, we designed a wireframe in Figma. After that, we polished the layout and designed the mockup. This allowed us to get an idea of the features we wanted before we started coding, consequently helping us split the work amongst ourselves. The actual site was built using the React front-end framework with the Chakra-UI component library: https://next.chakra-ui.com . The site is connected to our Airtable backend using Autocode, which maintained a database of all meetings taking place. The React project was able to send requests to Airtable's API via Autocode to populate the database with records of the meetings, including the name and duration. The notifications were implemented using a variety of open-source and individually-written code, combined to allow for time-based responses depending on user input. 
This also allows users to block or allow notifications, giving the user autonomy. Challenges we ran into Due to the difficulties of remote working, communication was naturally slower because we weren't beside each other. With several members using different operating systems, the differences in development environments also threw a wrench in our planning. Our variety ranged from a Mac to a Chromebook, making for lots of difficulty with runtime errors. Additionally, towards the end, we experienced issues with merge conflicts which took time to resolve. While these posed significant hurdles in our hacking experience, we were able to overcome them to bring you our application. Accomplishments that we're proud of We successfully connected our website to Airtable via Autocode. Additionally, the website looks pretty similar to the mockup, and we were glad to offer an experience similar to the one we envisioned. Creating something with so little time is always difficult, and we're proud to have kept pushing forward when at any time we could've just given up. What we learned This was our first time using Autocode and Airtable! None of us are React experts either, so this was a big learning experience that expanded our skills. What's next for Time's Up! Ideally: Add the note-taking feature Implement the red line feature Add user accounts, to let users keep a history of meetings and their notes, and help them track the amount of time spent on a topic over time Add tags to each meeting topic, for filtering/sorting of meetings and notes Built With airtable autocode javascript react Try it out github.com github.com
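The timeline idea described above, where each topic's block is sized proportionally to its allotted time, can be sketched as a small layout function. This is an illustration only (the real site lays this out in React with Chakra-UI); the function name and the pixels-per-minute scale are assumptions.

```python
def layout_blocks(topics, pixels_per_minute=2):
    """topics: list of (name, minutes) pairs in meeting order.
    Returns (name, top_px, height_px) tuples stacked one after another,
    like events in a single calendar column: height is proportional to
    the minutes allotted, so longer topics visibly dominate the timeline."""
    blocks, top = [], 0
    for name, minutes in topics:
        height = minutes * pixels_per_minute
        blocks.append((name, top, height))
        top += height
    return blocks
```

A notification scheduler could then fire at each block's boundary (the `top` of the next block) to tell the team it is time to switch topics.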
Time's Up!
Get ahead of time and master your time management skills.
['Kashfia Mahmood', 'Sabita Tasnim', 'Kathleen Wang', 'Davendra Seunarine Maharaj']
[]
['airtable', 'autocode', 'javascript', 'react']
51
9,982
https://devpost.com/software/e-charts-otfbeh
Unoccupied Bed Add Patient Page Confirm Delete Patient Page Modify Patient Information Home Page with Top Down Layout Patient Information Page Inspiration We can all agree that nurses are at the heart of our healthcare system, especially during this pandemic. Inspired by their dedication, we decided to make a web app and notification system that will help nurses in their everyday work. What it does We present to you E-Charts, a tool that nurses can use to keep track of patients’ information and remind them of patient treatments via text messages. How I built it We make use of Azure function apps and Twilio for text message notification purposes, MongoDB for the backend, and Flask for development. Challenges I ran into None of us had used Azure before. It was a challenge to set it up and learn about their function apps throughout the course of this project. Additionally, most of us did not have prior web development experience. Learning HTML, CSS, and JavaScript in addition to familiarizing ourselves with Azure proved to be a steep learning curve during this hackathon. Accomplishments that I'm proud of Our team is proud of overcoming the challenges we faced and deploying a website that works and is a representative prototype of our idea. What I learned We learned how to use HTML, CSS, and JavaScript for front- and back-end web development. We also learned how to use triggers and bindings with Twilio's API to create an Azure function app that sends text message notifications. What's next for E-Charts There are several features we can add to our web app and SMS notifications. For the web app, we can work on improving the user interface to make it more aesthetically pleasing. With the SMS notifications, we currently send texts to one phone number. In the future, we can implement a feature that allows multiple nurses to receive notifications for their own corresponding patients. 
Built With azure css flask html javascript mongodb python twilio Try it out e-charts.herokuapp.com
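The reminder half of E-Charts needs to decide *when* a treatment text is due. A minimal sketch of that scheduling math is below, using only the standard library; the function name and the "interval since start time" model are assumptions (the actual delivery goes through an Azure function app with a Twilio binding, which is not shown here).

```python
from datetime import datetime, timedelta

def next_due(start, interval, now):
    """Return the first scheduled treatment time at or after `now`, given a
    treatment that starts at `start` and repeats every `interval`. A
    timer-triggered function could compare this against the current time
    to decide when to send the nurse an SMS."""
    if now <= start:
        return start
    elapsed = now - start
    periods = -(-elapsed // interval)  # ceiling division on timedeltas
    return start + periods * interval
```

For example, a treatment every four hours starting at 08:00, checked at 09:30, is next due at 12:00.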
E-Charts
Nurses are at the heart of our healthcare system. Inspired by their dedication, we made E-Charts, a web app and notification system that will help nurses in their everyday work.
['Tiffany Yau', 'Calvin Ma', 'zhu jiadai', 'Zoie Hou']
[]
['azure', 'css', 'flask', 'html', 'javascript', 'mongodb', 'python', 'twilio']
52
9,982
https://devpost.com/software/ufood
ufood logo Inspiration Food is being wasted at an alarming rate. According to the United States Environmental Protection Agency, around 94 percent of the food we discard - from uneaten leftovers to spoiled produce - winds up in landfills or combustion facilities. This problem is usually caused by the ignorance of individuals due to their busy schedules. We purchase food we don't have time to cook. We forget about leftovers in the back of the fridge. And we toss out food that is already expired, assuming it must be dangerous to eat. To alleviate this problem, we created uFood so that people can track their food inventories at home in order to reduce food waste. What it does uFood is a personal inventory tracker that keeps track of the food in your kitchen. uFood will help you determine the amount of food waste in your home and send reminders for food that will soon expire. uFood will categorize your foods into their respective categories (i.e. fresh produce, fruits and vegetables, dairy products, beverages, general frozen, and medicines) and show their respective expiration dates. How we built it We built this site on the Wix website builder. Challenges we ran into As this was our first-ever hackathon, there were a lot of challenges that we went through as a team. We all have very limited coding experience, so creating something without code seemed difficult and impossible. However, we managed a way through Wix.com. Another challenge we faced was connecting and updating the information on each page as the user submits new entries of food. A special challenge our group faced was a timezone difference between the members of our group, as one lived in Canada, one lived in Sweden, and one lived in the Philippines! Accomplishments that we're proud of We are proud of being able to research online and find answers to many questions that we had regarding databases and submission forms on Wix. 
Furthermore, we are proud that we created something without using code, and with basically no experience in hackathons. What we learned We learned how to create a website on Wix, and how to make it user-friendly. What's next for uFood We will add reminders/notifications functionality, and user accounts. Built With wix Try it out hannahgao01.wixsite.com
uFood
uFood, food for you!
['Christina Chen', 'gaohannah Gao', 'Andre Villanueva']
[]
['wix']
53
9,982
https://devpost.com/software/gardener-6ux7bk
Sample garden template Inspiration Because of the current quarantine, many people are discovering new passions and hobbies. We wanted to make an app that would help beginner gardeners arrange a virtual garden. What it does Gardener lets the user arrange plants on a grid of soil. When the user searches for plants, they can also see helpful information about them. How we built it We used the Django stack, so a Python backend and a standard HTML5, CSS3 and ES6 frontend. We implemented the fabric.js library to generate images with interactive properties. Challenges we ran into We were searching for an API that contained the growth time of plants and how much water they require, but we couldn't find one, so we couldn't implement our idea for scheduled reminders. We were against web scraping because it would be difficult to maintain over a long period of time. Accomplishments that we're proud of We are proud that we were able to keep the UI simple and intuitive to use. What we learned We learned that imagining an idea is straightforward compared to finding the data resources to realize that idea. Although realizing the idea may have been difficult, the sheer amount of information we took in and learned is amazing. As individuals, it's difficult to learn based on tutorials on Youtube and blogs, but once you're thrown into the thick of it, you begin to understand the program in a way you never did before. What's next for Gardener Ideally we'll want to package it via Docker and then run it on a serverless service such as Google Cloud. Scaling up would be nice after we incorporate the front end with the backend of the API. Built With css3 django es6 fabric.js html5 https://trefle.io/ visual-studio Try it out github.com
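The "plants on a grid of soil" idea above can be sketched as a tiny occupancy model on the Python side (the interactive canvas itself is fabric.js). The class name and the one-plant-per-cell rule are assumptions for illustration, not Gardener's actual backend code.

```python
class Garden:
    """A rows x cols soil grid where each cell holds at most one plant."""

    def __init__(self, rows, cols):
        self.grid = [[None] * cols for _ in range(rows)]

    def place(self, row, col, plant):
        """Place a plant if the cell is empty; return True on success."""
        if self.grid[row][col] is None:
            self.grid[row][col] = plant
            return True
        return False
```

A drag-and-drop handler on the canvas would call `place` for the target cell and reject the drop when the cell is already occupied.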
Gardener
Plan your garden
['Manjot Hunjan', 'brandon-cmd Ye', 'Daven Boparai', 'Marco Lai']
[]
['css3', 'django', 'es6', 'fabric.js', 'html5', 'https://trefle.io/', 'visual-studio']
54
9,982
https://devpost.com/software/dot-neohxv
Inspiration Intact Insurance has been the leader of the insurance market in Canada. Their promise of a 30-minute claim process is very good for customers and hard for the company to keep up with. But with new technologies arriving every day, this task will become easier for them to fulfill, and they can gain more customers with cutting-edge, technology-based services. What it does Dot is a website created to demonstrate the new technologies that can be incorporated with Intact Insurance for a better user experience and faster claim processing. This project can be divided into two parts. Image recognition for faster assessment of damage and a clearer picture. A customer who claims on, say, car insurance uploads pictures of the damage for the insurance claim. But there is no guarantee that they upload exactly the pictures needed to assess the damage, and the insurance company can't just reject the claim because the photos are poor while still keeping to the 30-minute time limit. So we used OpenCV with Python to make blurred images clearer. With this, Intact can correctly measure the damage and process the claim in time, and maybe even faster! Voice-based assistant for user experience Users might have an app for Intact Insurance where they can track their claim status and all the details regarding their insurance. But it lacks the human touch of an agent who clarifies doubts during this lengthy process. So we added a voice assistant which can describe the latest plans, talk about the company, and even purchase insurance. This tool really brings a lot of ease to the user experience.
How we built it We used Python along with OpenCV for the image recognition and correction part. We used a voice assistant and frontend technologies for the website part. It has the following tools: Image recognition Python OpenCV Makes blurred images more understandable Even helps in identifying number plates and other details Voice Assistant Bootstrap HTML/CSS/JS Bootstrap framework Use of jQuery Use of SmartForm for Contact Frontend framework GitHub file management Hosting Node.js Challenges we ran into There were many challenges we ran into, but that's what programming's all about. One of the difficult challenges was making sure the image recognition technology works and makes images easier to read. We also struggled a lot with integrating the image recognizer into the website and couldn't complete it. Accomplishments that we're proud of We are proud of so many things. We made use of this project to the best of our abilities in these 36 hours. We got to use OpenCV, which is a first for all of us; we had never used OpenCV before, and now we will continue to use this platform. Additionally, we combined all of our skills to create a website that uses multiple frameworks, and we are proud of this website. We love the UI/UX and we love the backend; it was our first time using these frameworks as well. Finally, we are proud of the amount of work we pulled off in 36 hours. We would have never thought we could accomplish this much in such a small amount of time. What we learned Creating realtime databases OpenCV User authentication Voice assistant for website manipulation and data transfer What's next for Dot Make a good website Add all kinds of backend features for database storage Integrate the image recognition tech into the website Built With javascript opencv python voiceflow Try it out github.com
Dot
Project for ht62020
['Abhijith Gunturu', 'Mohinish Teja', 'vibhav chirravuri']
[]
['javascript', 'opencv', 'python', 'voiceflow']
55
9,982
https://devpost.com/software/weathery
Inspiration I wanted to create a simple web application working with Rust and WebAssembly for the first time, and I had never created a weather web application before, so I decided to make Weathery. What it does Weathery is a weather web application created with Rust, Node.js, and WebAssembly. Just type in the city name (Example: "Toronto" or "London, CA") in the search bar and press the "Get Weather" button on the screen or press the Enter key on your keyboard to get the current weather information for that city. Weathery will display the weather in both Celsius and Fahrenheit. How I built it I built the static frontend with HTML 5, CSS, JavaScript, and Bootstrap. I also used the particles.js library to create a cool particle effect on the screen as well. For the backend, I mainly used Node.js to handle all the weather query requests and processing, and I used Rust to create a function to convert between Celsius and Fahrenheit, which I compiled into WebAssembly and then to JavaScript for the Node.js services to use. I implemented the OpenWeatherMap API to get the weather information, and I used Docker with the Second State Virtual Machine (SSVM) engine to easily compile the Rust code to JavaScript. For version control, I used Git from the command line and GitHub. Challenges I ran into The biggest challenge I had was installing Docker to run the Node.js backend and SSVM engine for the application. I was not able to install the latest version of Docker Desktop for Windows due to it requiring the latest version of Windows 10 updates, so I had to install Docker Toolbox instead, setting up port forwarding in Oracle VM VirtualBox to run the web application on localhost. Accomplishments that I'm proud of I am proud of working with Rust and WebAssembly, both of which are new and exciting technologies that are starting to be used seriously due to their major gains in performance for web applications. 
What I learned I learned how to implement Rust and WebAssembly with Node.js and how to compile and run the code with the Second State WebAssembly engine and Docker. What's next for Weathery I want to display more weather information in Weathery, with options such as a 7-day and hourly forecast. Built With bootstrap css docker git github html node.js openweathermap particle rust ssvm webassembly
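Weathery's Rust function converts between Celsius and Fahrenheit before display. For reference, the same logic in Python, along with building the OpenWeatherMap request URL; the `q`/`appid`/`units` parameters follow the public API, but treat the exact query shape as an assumption rather than Weathery's code:

```python
# Reference version of the Celsius/Fahrenheit conversion Weathery compiles
# from Rust to WebAssembly, plus an OpenWeatherMap current-weather URL.
import urllib.parse

def c_to_f(celsius: float) -> float:
    """Convert Celsius to Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

def f_to_c(fahrenheit: float) -> float:
    """Convert Fahrenheit to Celsius."""
    return (fahrenheit - 32.0) * 5.0 / 9.0

def weather_url(city: str, api_key: str) -> str:
    """Build the current-weather request URL for a city query like 'Toronto'."""
    params = urllib.parse.urlencode(
        {"q": city, "appid": api_key, "units": "metric"})
    return "https://api.openweathermap.org/data/2.5/weather?" + params
```

The backend can then show both units from a single API response, as Weathery does.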
Weathery
A weather web application created with Rust, Node.js, and WebAssembly.
[]
[]
['bootstrap', 'css', 'docker', 'git', 'github', 'html', 'node.js', 'openweathermap', 'particle', 'rust', 'ssvm', 'webassembly']
56
9,982
https://devpost.com/software/scopenote
ScopeNote Home screen for ScopeNote Sample keywords list Add your own keywords or summary points! ScopeNote is a Chrome extension ScopeNote helps format key words into flashcard-like templates ScopeNote also creates a summary document for future reference Inspiration Friday, March 13th, 2020. This was the final day that many students in Ontario were in a classroom face to face with their teachers. At this point, we, along with the rest of the world, were forced to transition to remote learning environments in an effort to fight the Covid-19 pandemic. Nobody was prepared to react to this drastic change, and it was by far the students who suffered the most from it. We quickly realized just how ineffective remote learning could be: with teachers too busy to handle individual queries, most of our education revolved around doing individual work in front of a screen, often reading or typing for hours for research, or watching countless videos on end. Most times, it was the most engaging experiences at school, where we were able to physically grapple with the information we were learning, that were taken away. The motivation was lacking too: with little peer interaction, we often struggled to keep working and meeting deadlines. This was further compounded by the aforementioned monotony of our tasks, which made many of us fall asleep from boredom. Altogether, learning became extremely difficult. With ScopeNote, we wanted to eliminate the sense of repetitive monotony that comes with remote learning in favour of a more engaging learning process. With teacher impact minimized, the goal was to decrease the time students need to spend in front of a screen doing low-engagement, low-effort tasks, such as reading research papers, in favour of ones that actually benefit memory retention, such as annotating and using flashcards. What it Does ScopeNote is a Chrome extension that provides three main features that supplement a student’s learning.
The first is a keyword breakdown of a given article in the form of a PDF file or a website. This component analyzes the text within an article and identifies around fifteen of the most prevalent, critical keywords in a piece. In practice, this facilitates a student’s understanding of a paper by reducing the need to consistently search up most of the specific keywords in favour of having the important ones displayed on screen and only one click away. Secondly, ScopeNote pulls some of the most important sentences from the piece that are meant to act as a summary for students to use both to ensure the content they are looking through is relevant and as a review section in their notes. While we noticed that this function was not entirely perfect, it did a fair job of capturing the tone and the content of the piece, which are also both relevant in determining how useful a piece may be for a research project or as a study resource. Again, this helps reduce the amount of time a student spends on low-effort tasks such as skimming through sources in favour of active studying or analysis. In both of the aforementioned features, students are also able to add their own commentary to supplement the software. Finally, ScopeNote takes the key words and summary that it pulls from a text and exports them such that they can be printed as a .pdf file. The format in which this is done is then conducive to making flashcards, while also acting as a reference sheet on the topic that a student can always look back to. While the process of making flashcards is monotonous, the actual use of flashcards can be extremely beneficial to memory retention. As a result, by automating the prior step, we hope to encourage more students to use flashcards and thus engage with information more actively. How We Built It The Chrome extension is coded in Python and React for the back-end and front-end, respectively.
Specifically, we used the diffbot API to pull text from a website and the PyMuPDF library to read text from a .PDF file. Once the text was pulled out, we used the Azure Text Analytics API to pull key phrases that were most important to the piece. Following this, we ran a basic algorithm to determine which keywords or phrases were most prevalent in the text, and linked them to WordsAPI to provide students with a definition. Phrases with no definition were then appended to the default Wikipedia URL so that students would be forwarded directly to the Wikipedia page on the topic. To process text and identify keywords, we explored other possibilities such as the RAKE algorithm and TF-IDF. While neither was as good as Azure’s API, we realized that by modifying the RAKE algorithm to include longer phrases and by using a database of the most common English words as stopwords, we could generate decently appropriate summary sentences for an article. We leveraged this to build the summary feature. These components were then attached to the React front-end with axios, which performed HTTP requests between React and Flask. More specifically, the URL is sent from the front end to the backend, where the text is pulled and processed into JSON objects, which are then sent back to the front end. In React, an emphasis was made on state changes and mapping to update the information in the application, using React lifecycle functionalities to do so. The actual user interface and functionality of the Chrome extension was made in React, where CRUD actions were used so that users can edit and add notes, definitions, and so on to the automated ones. Azure Specifically in terms of the usage of Microsoft Azure, we leveraged Azure’s Text Analytics API to pull keywords from a piece of text. These keywords, which were ordered as they were presented in text, were then compiled into a list of tuples containing each key word/phrase and the quantity of appearances.
This was compiled alongside a list of words specifically from key phrases and how many times they showed up. Our mentality was that the most important keywords for a student to know would be the ones that showed up most frequently; thus, we used these compiled lists to determine which key phrases were most important to define. Challenges We Ran Into Our team had little experience transferring information between a React front-end and Python back-end in both directions, making the implementation of axios one of the biggest challenges we faced. Specifically, mapping and working in states was difficult in terms of syntax, and saving JSON values into the React component states was extremely challenging. However, we worked with mentors and worked through a lot of trial and error to solve this challenge. Another issue that we confronted was the limitations of the Azure API. While the API itself worked well, there was a character limit of around 5000 per call, and a total of 5000 calls available. Unfortunately, we did not handle this well and tested on long pieces of text from websites. This meant that for a single website, a single run through the program resulted in around 25 calls of the Azure API. Unfortunately, we did not realize this until we hit the limit on our free trial. It was only after our second setup of an account by a different team member that we recognized the importance of efficiently using the limited calls on APIs. What We Learned Our team was composed of both inexperienced and more experienced hackers. For the beginners, both the provided workshops and immediate hands-on applicability were helpful for learning technologies such as CSS and React. The novices were also pushed towards technologies they had never used before, such as Figma, and learned to leverage them by doing as opposed to by reading a textbook. This does not mean that the more experienced coders were comfortable during the entire hackathon, though.
We learned to work more comfortably with React and Flask, specifically with states in the former framework. We also explored more in terms of connecting the front and back-end of applications through axios, and learned to save information from .json files into the states of React components. Accomplishments that We're Proud Of We are all proud of the new things we learned as part of this hackathon, whether it was an introduction to React or Figma, or learning to save .json information in React states. Furthermore, we are proud of both the functionality and the design of the final product—together, we think we made a well-functioning Chrome extension that does not sacrifice anything visually. What's Next for ScopeNote Moving forward, we hope ScopeNote can be a tool we use in a crunch if we need to synthesize a document or information from a website. As first year students, we’re venturing into a new level of challenge in university, and thus may leverage this program in a pinch. Aside from our personal use, there is a lot of potential for growth with ScopeNote. The most immediate steps that can be taken are to develop a platform for PDFs to be uploadable through the extension itself, as opposed to having them downloaded and opened on a browser. The development of a local database, where students can store past data would also be helpful. In the future, there may also be more work done in terms of machine learning or text-processing algorithms, which in turn opens up a wide range of possibilities. For example, we could explore using ML to recognize graphical representations of data such as bar, pie, and line graphs, and convert them into meaningful, text-based data for students in their notes. Hopefully with improved algorithms, we can also improve upon the keyword and summary sentence selections to better represent the articles from which they’re pulled. 
With these additions, we believe ScopeNote could feasibly go from a proof-of-concept idea to genuinely applicable, if we chose to continue development in the future. Built With axios azure css diffbot figma flask html python react wordsapi Try it out github.com
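ScopeNote's keyword step ranks Azure's key phrases by how often they appear in the text and keeps roughly the top fifteen. A plain-Python sketch of that counting step; the helper name, tokenizer, and TOP_N cutoff are illustrative choices, not the project's actual code:

```python
# Rank key phrases by how often they occur in the article text, producing the
# (phrase, count) tuples described above. Names and cutoff are illustrative.
import re
from collections import Counter

TOP_N = 15

def rank_key_phrases(text: str, phrases: list) -> list:
    """Return (phrase, count) tuples for the most frequent key phrases."""
    lowered = text.lower()
    counts = Counter()
    for phrase in phrases:
        hits = len(re.findall(re.escape(phrase.lower()), lowered))
        if hits:
            counts[phrase] = hits
    return counts.most_common(TOP_N)
```

The most frequent phrases would then be the ones sent to WordsAPI for definitions, with undefined phrases falling back to a Wikipedia URL.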
ScopeNote
Remote learning got you down? ScopeNote solves just that! It breaks down articles into keywords and summaries, then makes flashcards and a reference sheet so remote learning is better for everyone!
['Jim Wu', 'Amy Li', 'Justin Ye', 'tomjml Liu']
[]
['axios', 'azure', 'css', 'diffbot', 'figma', 'flask', 'html', 'python', 'react', 'wordsapi']
57
9,982
https://devpost.com/software/what-eat
Landing page Cuisines Preferences Result What made you want to build this hack? Problem Statement No matter who you are or where you are, the question “What should we eat?” can be a frustrating one for couples, families, individuals, and groups of friends looking to satisfy their appetite. The question is asked almost every day, and according to a 2017 New York Post survey, respondents on average spend 2 hours and 32 minutes a week negotiating what type of meal to eat. In fact, 87% of couples surveyed said it was a problem. Furthermore, great new discoveries are not always easy to come by, as 61% find it difficult to discover new restaurants. As the economy across Canada reopens following the lockdown measures, more than ever consumers will be looking to rediscover their own backyards. How it works (high level) Enter WhatEat. Based on your eating occasion and preferences for cuisine, price, and distance, WhatEat randomly generates a restaurant for you to go to, eliminating the ‘choice overload’ associated with selecting a restaurant. How it works from a technical perspective: After only 4 clicks, WhatEat will query search results and filter for preferences such as cuisine type, rating, and price through a Yelp API integration. WhatEat will also identify the location of the user to help find a restaurant within the preferred radius, and will provide the address link through Google Maps. What is your tech stack? The front-end was built using Vue.js with custom styling to represent our brand image. For the backend, the initial plan was to use a Node.js service deployed to an Amazon EC2 instance. However, we decided to simplify the app to the point where a database would not be required. “Serverless” options such as AWS Lambda were considered. In the end, Autocode seemed to fit our use case well because it was easy to learn, stored our secrets securely, and was overall an incredibly fast way to make application logic available wherever we wanted.
What was the most challenging part of the hack? No one likes a project that only works in your local environment. We wanted to build something that we could share with friends and come back to, while always being available for use. Therefore, the main challenge was to have production-ready versions of parts of our software stack, while fully leveraging the resources provided to us in the hackathon. There is also the obvious time constraint and multiple points of failure between a user coming to our website and finally getting a recommendation. Most importantly, we needed a user interface we would be willing to use ourselves, so a large amount of time was invested in testing the full flow of the application and being nitpicky about the smallest things. By doing this, we were able to build a quick, convenient tool that we can share with everyone. Built With autocode node.js vue Try it out whateat.space
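The core flow described above, querying Yelp with the user's filters and randomly picking one result, might look roughly like this in Python; the header and parameter names follow the Yelp Fusion API, while the rating cutoff and helper names are assumptions:

```python
# Sketch of WhatEat's flow: build a filtered Yelp Fusion search, then pick
# one qualifying result at random to beat 'choice overload'.
import random
import urllib.parse

SEARCH_URL = "https://api.yelp.com/v3/businesses/search"

def build_search(lat, lon, categories, price, radius_m, api_key):
    """Return the request URL and auth headers for a filtered search."""
    params = urllib.parse.urlencode({
        "latitude": lat, "longitude": lon,
        "categories": categories,   # e.g. "ramen,sushi"
        "price": price,             # "1,2" means $ and $$
        "radius": radius_m,         # preferred distance in meters
    })
    headers = {"Authorization": f"Bearer {api_key}"}
    return f"{SEARCH_URL}?{params}", headers

def pick_restaurant(businesses, min_rating=4.0):
    """Randomly choose one business at or above the rating cutoff."""
    good = [b for b in businesses if b.get("rating", 0) >= min_rating]
    return random.choice(good) if good else None
```

Random choice over the filtered list is what removes the negotiation: the user states preferences once and gets a single answer.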
What Eat
A food decision maker to eliminate the 'choice overload' with selecting a place to eat using your personalized preference of cuisine, price, distance, and occasion.
['Kevin Xiao', 'Harry Huang', 'Sharon Wu']
[]
['autocode', 'node.js', 'vue']
58
9,982
https://devpost.com/software/lorax-luring-others-to-retain-our-abode-extensively
Hello and thank you for judging my project. I am listing below two different links and an explanation of what the two different videos are. Due to the time constraints of some hackathons, I have a shorter video for those with a lower time limit. By default I will be placing the shorter video up above, but if you have time or your hackathon allows it, please go ahead and watch the full video at the link below. Thanks! 3 Minute Video Demo 5 Minute Demo & Presentation For any questions or concerns, please email me at [email protected] Inspiration Resource extraction has tripled since 1970. That leaves us on track to run out of non-renewable resources by 2060. To fight this extremely dangerous issue, I used my app development skills to help everyone support the environment. As a person residing in this environment, I felt that I needed to use my technological development skills to help us take care of the environment better, especially in industrial countries such as the United States. In order to do my part in the movement to help sustain the environment, I used the symbolism of the Lorax to name the LORAX app, inspired to help the environment. (Side note: when referencing Firebase, I mean Firebase as a whole, since two different databases were used: one to upload images and the other to upload data (e.g. form data) in realtime. Firestore is the specific realtime database for user data, versus Firebase Storage for image uploading.) Main Features of the App To start out we are prompted with the authentication panel, where we are able to either sign in with an existing email or sign up with a new account. Since we are new, we will go ahead and create a new account. Here I will type in my name, email and password and register. After registering we are signed in and are now at the home page of the app.
Now if we go back to Firebase Authentication, we see a new user pop up over here, and a new user is added to Firestore with their associated data such as their points, their user ID, name and email. Now let's go back to the main app. Here at the home page we can see the various things we can do. Let's start with the Rewards tab, where we can choose rewards depending on the amount of points we have. If we press redeem rewards, it takes us to the rewards tab, where we can choose various coupons from companies and redeem them with the points we have. Since we start out with zero points, we can't redeem any rewards right now. Let's go back to the home page. The first three pages I will introduce are a part of the point incentive system for purchasing items that help the environment. If we press the view requests button, we are navigated to a page where we are able to view the requests we have made in the past. These requests are used in order to redeem points from items you have purchased that help support the environment. Here we would be able to view some details and the status of the requests, but since we haven't submitted any yet, we see there are none upon refreshing. Let's come back to this page after submitting a request. If we go back, we can now press the request rewards button. By pressing it, we are navigated to a form where we are able to submit details regarding our purchase and an image as proof to ensure the user truly did purchase the item. After pressing submit, this data and image are pushed to Firebase's realtime storage (for the picture) and Firestore (other data), which I will show in a moment. Here if we go to Firebase, we see a document with the details of the request we submitted, and if we go to Storage we are able to view the image that we submitted. Here we can review the details, approve the status and assign points to the user based on their request. Now let's go back to the app itself.
Now let's go to the view requests tab again, now that we have submitted our request. There we see our request, the status of the request and other details, such as how many points you received if the request was approved, the time, the date and so on. Now to the Footprint Calculator tab, where you are able to input some details and see the global footprint you have on the environment and its resources based on your house, food and overall lifestyle. Here I will type in some data and see the results. It says I would take up 8 Earths if everyone used the same amount of resources as me. The goal is to be able to reach only one Earth, since then the Earth and its resources would be able to sustain themselves for a much longer time. We can also share it with our friends to encourage them to do the same. The last tab is the savings tab. Here we are able to find daily tasks we can simply do to not only save thousands and thousands of dollars but also heavily help sustain the environment. Here we have some things we can do to save in terms of transportation, and by clicking on a saving, we are navigated to a website where we are able to view what we can do to achieve these savings ourselves. This has been the demonstration of the LORAX app, and thank you for listening. How I built it For the navigation, I used React Navigation in order to create the authentication navigator and the tab and stack navigators in each of the respective tabs. For the incentive system I used Google Firebase's Firestore in order to view, add and upload details and images to the cloud for review and data transfer. For authentication, I also used Google Firebase's Authentication, which allowed me to create custom user data such as the points and requests associated with each user ID. Overall, Firebase made it EXTREMELY easy to create a high level application.
For this entire application, I used Google Firebase for the backend. For the UI of tabs such as the Request Submitter and Request Viewer, I used the react-native-base library to create modern looking components, which allowed me to create a modern looking application. For the Prize Redemption and Savings sections I created the UI from scratch, trialing and erroring with different designs and shadow effects to make it look cool. I used react-native-deeplinking to navigate to the specific websites for the savings tab. For the Footprint Calculator I embedded the Global Footprint Network's Footprint Calculator within this tab of my application, for the reference of the user of this app. The website is shown in the app tab and is functional in that UI, similar to the website itself. I used Expo for wireless application testing, allowing me to develop the app without any wires over the wifi network. For the request submission tab, I used react-native-base components to create the form UI elements and Firebase to upload the data. For the Request Viewer, I used Firebase to retrieve and view the data as seen. Challenges I ran into Some last-second challenges I ran into involved the manipulation of the database on Google Firebase. In fact, while creating the video, I realized that some of the parameters were missing and were not being updated properly. I eventually realized that the naming conventions for some of the parameters being updated both in the state and in Firebase got mixed up. Another issue I encountered was being able to retrieve the image from Firebase. I was able to log the URL; however, due to some issues with the state, I wasn't able to get the URI to the image component, and due to lack of time I left that off. Firebase made it very easy to push, read and upload files after installing their dependencies. Thanks to all the great documentation and other tutorials I was able to effectively implement the rest. What I learned I learned a lot.
Prior to this, I had not had experience with data modelling and creating custom user data points. However, due to my previous experience with Firebase and some documentation referencing, I was able to use Firebase's built-in commands, allowing me to query and add specific user IDs to the database and search for data based on their UIDs. Overall, it was a great experience learning how to model data, use authentication, and create and modify custom user data using Google Firebase. Theme and How This Helps The Environment Overall, this application uses incentives and educates the user about their impact on the environment to better help the environment. Design I created a comprehensive and simple UI to make it easy for users to navigate and understand the purposes of the application. Additionally, I used the previously mentioned utilities in order to create a modern look. What's next for LORAX (Luring Others to Retain our Abode Extensively) I hope to create my own backend in the future, using ML and AI to classify these images and details to automate the submission process, and to create my own footprint calculator rather than using the one provided by the Global Footprint Network. Built With apis data-modelling expo-permissions expo.io footprint-calculator google-firebase google-firebase-authentication google-firestore google-storage react-native react-native-base the-global-footprint-network Try it out github.com
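The request documents LORAX pushes to Firestore carry the purchase details, a status, and points assigned on approval. A plain-Python sketch of that document shape and the reviewer's approval step; all field names here are assumptions about the app's schema, not taken from its code:

```python
# Sketch of the request-document lifecycle LORAX describes: a pending request
# with purchase details, then an admin approval that assigns points.
# Field names are assumptions.
from datetime import datetime, timezone

def new_request(uid: str, item: str, image_url: str) -> dict:
    """Document pushed to Firestore when a user submits a purchase request."""
    return {
        "uid": uid,
        "item": item,
        "imageUrl": image_url,       # proof photo stored in Firebase Storage
        "status": "pending",
        "points": 0,
        "submittedAt": datetime.now(timezone.utc).isoformat(),
    }

def approve(request: dict, points: int) -> dict:
    """Reviewer approves a request and assigns points (the admin step)."""
    return dict(request, status="approved", points=points)
```

In the real app these dicts would be written and updated through the Firestore SDK keyed by the user's UID; the pure functions above just show the state transition.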
LORAX (Luring Others to Retain our Abode Extensively)
Gamifying and rewarding those who help the environment through their actions and lifestyle
['Om Joshi']
['3rd Place', 'Best Design', 'Wolfram Award for Top 30 Hacks']
['apis', 'data-modelling', 'expo-permissions', 'expo.io', 'footprint-calculator', 'google-firebase', 'google-firebase-authentication', 'google-firestore', 'google-storage', 'react-native', 'react-native-base', 'the-global-footprint-network']
59
9,982
https://devpost.com/software/notate
User Homepage Channel Homepage: Users can join live channels Text is transcribed in real time using Google's Speech-to-Text API The API is served using Node.js and Express.js, making transcription available. Users can save the transcript from live videos to their personal notes. Channel Creation: Users can create their own channels Users have their own note dashboard, where they can view notes from videos as well as upload new notes. Notes: Users can arrange their notes by subjects Note Creation: users can manually create notes in addition to the video transcriptions Inspiration As a group of university students from across North America, COVID-19 has put into perspective the uncertainty and instability that comes with online education. To ease this transition, we were inspired to create Notate — an unparalleled speech-to-text transcription platform backed by the power of Google Cloud’s Machine Learning algorithms. Although our team has come from different walks of life, we easily related to each others’ values of accessibility, equality, and education. What it does Notate is a multi-user web conferencing app which allows students to create and access various study rooms to virtually interact with others worldwide and revolutionize the way notes are taken. It has the capacity to host up to 50 unique channels, each with 100+ attendees, so students can get help and advice from a multitude of sources. With the use of ML techniques, it allows for real time speech-to-text transcription, so lectures and conversations are stored and categorized in different ways. Our smart hub system makes studying more effective and efficient, as we are able to sort and decipher conversations and extract relevant data which other students can use to learn, all in real time. How we built it For the front end, we found an open source Gatsby dashboard template to embed our content and features into quickly and efficiently.
We used the daily.co video APIs to embed real-time video conferencing into our application, allowing users to actually create and join rooms. For the real time speech-to-text note taker, we made use of Google’s Cloud Speech-to-Text API, and decided to use Express and Node.js to be able to access the API from our Gatsby and React front end. To improve the UI and UX, we used Bootstrap throughout our app. Challenges we ran into In the span of 36 hours, perfecting the functionality of a multi-faceted application was a challenge in and of itself. Our obstacles ranged from API complexities to unexpected bugs from various features. Integrating a video streaming API which suited our needs and leveraging Machine Learning techniques to transcribe speech was a new challenge for us all. Accomplishments that we're proud of As a team we successfully integrated all the features we planned out with good functionality, while allowing room to scale for the future. Having thought extensively about our goal and purpose, it was clear we had to immerse ourselves in new technologies in order to be successful. Creating a video streaming platform was new to all of us, and it taught us a lot about APIs and integrating them into modern frontend technologies. At the end we were able to deliver an application which we believe would be an essential tool for all students experiencing remote learning due to COVID-19. What we learned Having access to mentors who were just a click away opened up many doors for us. We were effectively able to learn to integrate a variety of APIs (Google Cloud Speech-to-Text and Daily.co) into Notate. Furthermore, we were exposed to a myriad of new frameworks and libraries, such as Bootstrap, Express, and Gatsby. As university students, we shared the frustration of a lack of web development tools provided in computer science courses. Hack the 6ix helped further our knowledge in front-end programming.
We also learned a lot from one another because we all brought different skill sets to the team. Resources like informative workshops, other hackers, and online tutorials truly gave us long-lasting and impactful skills beyond this hackathon. What's next for Notate? As society enters the digital era, it is evident Notate has the potential to grow and expand worldwide for various demographics. The market for education-based video conferencing applications has grown immensely over the past few months, and will likely continue to do so. It is crucial that students can adapt to this sudden change in routine, and Notate will be able to ease this change and help students continue to prosper. We’d like to add more functionality and integration into Notate. In particular, we’d like to embed the Google Vision API to detect and extract text from users' physical notes and add them to the user’s database of notes. This would further one of our primary goals of being the hub for student notes. We also see Notate expanding across platforms and becoming a mobile app. Additionally, as the need for Notate grows, we plan to create a Notate browser extension - if the Notate extension is turned on, a user can have their Zoom call transcribed in real time and added to their hub of notes on Notate. Built With css daily.co express.js gatsby google-cloud google-web-speech-api html5 javascript machine-learning node.js react Try it out github.com f1378179ca39.ngrok.io
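The notes hub Notate describes, where transcribed video notes and manually uploaded notes are arranged by subject, boils down to a simple grouping step. A minimal sketch in Python (the `subject` and `title` fields are assumptions for illustration, not Notate's actual schema):

```python
from collections import defaultdict

def group_notes_by_subject(notes):
    """Group a flat list of note dicts into a subject -> [titles] hub.

    Transcribed video notes and manually created notes are assumed to
    share the same shape, each carrying a 'subject' and a 'title'.
    """
    hub = defaultdict(list)
    for note in notes:
        hub[note["subject"]].append(note["title"])
    return dict(hub)

notes = [
    {"subject": "Calculus", "title": "Lecture 3 transcript"},
    {"subject": "Physics", "title": "Kinematics summary"},
    {"subject": "Calculus", "title": "Limits cheat sheet"},
]
print(group_notes_by_subject(notes))
```

In a real deployment this grouping would happen server-side over the user's saved transcripts rather than over an in-memory list.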
Notate
Notate is a multi-user video calling app which leverages ML algorithms and Google APIs in realtime to enhance remote learning by connecting students worldwide and creating a smart hub for notes.
['Kesojan Premakumar', 'Anusha Dey', 'Kelly Liu']
[]
['css', 'daily.co', 'express.js', 'gatsby', 'google-cloud', 'google-web-speech-api', 'html5', 'javascript', 'machine-learning', 'node.js', 'react']
60
9,982
https://devpost.com/software/neat-mentality
Neat Mentality Home Page Task Manager Study Timer Our Team! Inspiration As recent high school graduates, we experienced the wonders of online school and remote learning. It’s no secret that many students have struggled with this transition and experienced a lack of focus when they're able to do anything in their house. We wanted to create something that could help students with their remote learning, so we created Neat Mentality, a web application with a task manager and study timer. From first-hand experience, we know that students open many tabs and windows when working on projects online, so the task manager is there to save and organize all your links. The additional study timer helps with focus and time management. What it does Neat Mentality is a site featuring a task manager and study timer. The task manager organizes and saves your tasks in the order that you want, so you can reference the list while working on large projects. The study timer is modelled after the Pomodoro Technique, a time management method that guides you through a 25-minute work block followed by a 5-minute break. The shortcut buttons set the times for you, but if you don’t want to follow those times, you can input your own time and study away. How we built it We created Neat Mentality with Glitch, an online platform used to create web applications. We used HTML, CSS, and JavaScript to construct and customize the site features, and connected it to its own domain. Challenges we ran into Although everyone on our team had prior coding experience, we weren't very familiar with web development, as most of our previous projects have been games. For this hackathon, we wanted to step out of our comfort zones and create something useful rather than another arcade game. Our biggest challenge was simply learning how to use HTML and CSS. We bumped into many issues, from not understanding how to put together each component of the project to errors with the website domain. 
Accomplishments that we're proud of We’re really proud we managed to finish a whole site in a day! Considering our lack of experience, we’re really glad that we learned so many new things. This hackathon was a push for us to learn about web development and we overcame many obstacles to complete our website. What we learned We learned almost everything we did today and it was a wild journey but it was worth it! To be specific, we learned how to properly use HTML and CSS for the first time. In addition, it was our first time using our own domain and one of our members spent hours figuring it out. What's next for Neat Mentality We know that our design is just the tip of the iceberg. With more features and a sleeker design, Neat Mentality will become better than ever! We had many ideas that we wanted to implement into our project but didn’t have the time and capability to do so. One thing we initially wanted to create was a tab manager, a web app that helped organize tabs specifically, but that was a bit too complicated for us to get done in a day. Additionally, we were interested in custom colours, such as colour coding tasks or having a dark mode. Built With css glitch html javascript Try it out www.neatmentality.tech github.com
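The Pomodoro cycle behind the study timer is easy to express as a function of elapsed time. A small sketch of just the timing logic (the app itself is written in JavaScript; this Python version is illustrative only):

```python
WORK_MIN, BREAK_MIN = 25, 5  # Pomodoro defaults; the app also accepts custom times

def pomodoro_phase(elapsed_min, work_min=WORK_MIN, break_min=BREAK_MIN):
    """Return the current phase ('work' or 'break') and minutes left in it.

    A cycle is work_min minutes of work followed by break_min minutes of
    break, repeating indefinitely.
    """
    cycle = work_min + break_min
    t = elapsed_min % cycle
    if t < work_min:
        return ("work", work_min - t)
    return ("break", cycle - t)

print(pomodoro_phase(0))   # start of a work block: ('work', 25)
print(pomodoro_phase(27))  # two minutes into the first break: ('break', 3)
```

Custom times, like the app's manual input field, just come in through the `work_min` and `break_min` parameters.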
Neat Mentality
Neat Mentality is a web application that helps students work remotely. This app features a task manager and study timer to help you organize your time and work.
['Jasmine Xiong', 'Tina Ge', 'Vyshnavi Rajeevan']
[]
['css', 'glitch', 'html', 'javascript']
61
9,982
https://devpost.com/software/covid-19-v0zilm
Front page user_input form Inspiration Testing is essential to detect and contain the COVID-19 pandemic. However, the need for testing is enormous and equipment is in short supply around the world. Understanding the situation, we provide a solution for this rising issue. What it does COVID-19 detector tentatively predicts the probability that a patient is infected based on chest CT scan images (Computed Tomography scanning is a popular service at most hospitals). We aim to use this project to detect people with a high likelihood of having COVID-19. Thus, health care providers can approach, test, and deliver support to these patients faster. The detector has a high precision rate of 91% and gives a prediction in only seconds. How we built it The backend was built in Python with Flask The front end was built with HTML, CSS, JavaScript and Bootstrap The ML model was trained using Custom Vision AI from Microsoft Azure with a dataset consisting of chest CT scan images from 329 positive COVID-19 patients and 387 negative cases The web app was deployed using Microsoft Azure Web Service and containerized with Docker Challenges we ran into We had a difficult time finding a good, reliable dataset of positive COVID-19 chest CT scan images. We had problems deploying to the Azure Web server at first because we could not set up the pipeline with GitHub. We also had problems with some styling in CSS. Accomplishments that we're proud of The model that we trained has a precision rate of 91% and a recall rate of 85.6%. We've successfully deployed our web app to Azure Web Service. The website has every functionality we planned to implement. What we learned How Flask serves 2 static folders How to utilize Microsoft Azure Custom Vision to quickly train an ML model How to deploy a web app to Azure Web Service with Docker What's next for COVID-19 detector We look forward to improving the dataset to get better predictions. 
Built With azure bootstrap css docker flask html javascript jinja jquery machine-learning pandas python Try it out covid19-detector.azurewebsites.net github.com
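The precision and recall rates the team reports follow the standard confusion-matrix definitions. A quick sketch with hypothetical counts, chosen only to land near the reported 91% / 85.6% figures (these are not the project's actual validation numbers):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN).

    tp: true positives, fp: false positives, fn: false negatives.
    """
    return tp / (tp + fp), tp / (tp + fn)

# Made-up counts purely to illustrate the formulas.
p, r = precision_recall(tp=91, fp=9, fn=15)
print(round(p, 3), round(r, 3))  # precision 0.91, recall ~0.858
```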
COVID-19 detector
A COVID-19 detector based on Chest CT scan images
['Duc Nguyen', 'Minh Tran Nhat']
[]
['azure', 'bootstrap', 'css', 'docker', 'flask', 'html', 'javascript', 'jinja', 'jquery', 'machine-learning', 'pandas', 'python']
62
9,982
https://devpost.com/software/air_pollution_hotspots-qvc4og
Analysis Of Two Concurrent Months Visualization Of Two Concurrent Months Inspiration During recent years an increasing focus has been directed towards the adverse health effects associated with ambient air pollution. Elderly people appear to be particularly susceptible to the adverse effects involving the respiratory and cardiovascular systems, resulting in symptoms, exacerbations of disease and even mortality. No matter where you live, you can be exposed to air pollution. The type and amount of exposure varies depending on your location, the time of day, and even the weather. Exposure to air pollution is higher near pollution sources like busy roadways or wood-burning equipment. Many of our daily activities expose us to higher levels of air pollution. Idling cars, gas-fueled yard equipment, and chemicals we use in our homes all contribute to overall air pollution and expose us to harmful air pollutants. This motivates us to be more responsible and work towards air pollution control. What it does It identifies air pollution hotspots and analyzes their source trajectories. For instance, air pollution in India is a serious health issue, and despite being a large country by land, India has only 231 autonomous pollution monitoring stations on the ground, and those are concentrated in urban areas. But that doesn't mean air pollution only concerns urban areas. Since we are not getting enough data from the ground, we look up into space, as multiple satellites have been deployed in Earth's orbit for monitoring the atmosphere. So, using the satellite data, I am analyzing and visualizing air pollution hotspots in a more generalized manner. 
How I built it I used NASA's raw satellite data, available at earthdata.nasa.gov, which can only be used by academics and professionals, and extracted the data for the major pollutants such as NO2, SO2 and CO (by longitude and latitude). I then visualized the data, used clustering to get the major hotspot clusters, analyzed the source trajectories of the hotspots, and plotted them on the map using geopandas and matplotlib. Challenges I ran into The main challenge was to find suitable data that is updated very frequently. Accomplishments that I'm proud of I am proud of the fact that I could develop this product that contributes to society and the environment, helping us lead healthier lives. Also, I could detect major air pollution hotspots in a country in a more economical and generalized manner than the current system. And securing a good position in this hackathon will motivate me to take this project to production and pitch the idea to the respective government bodies and NGOs. What I learned I learnt some better techniques for data processing. This was the first time I worked with demographic shapefiles to plot maps in Python. And I learnt that we can solve some major problems in our surroundings using technology and programming practices in a cheaper and universally functional manner. What's next for Air_Pollution_Hotspots I would deploy this project as a web app maintaining a database of country-wise contacts of air pollution management and control bodies and NGOs, and send them updated reports of the major air pollution hotspots of that particular country every week in an automated way. Built With bootstrap css data-visualization html javascript machine-learning python Try it out github.com prezi.com
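The hotspot-clustering idea can be illustrated with a crude grid-binning pass over (latitude, longitude, concentration) samples. This is a simplified stand-in for the clustering actually applied to the satellite data, and the readings below are made up:

```python
from collections import defaultdict

def hotspot_cells(samples, cell_deg=1.0, threshold=50.0):
    """Bin pollutant samples into cell_deg x cell_deg lat/lon cells and
    flag cells whose mean concentration exceeds the threshold as hotspots."""
    cells = defaultdict(list)
    for lat, lon, value in samples:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells[key].append(value)
    return {k: sum(v) / len(v) for k, v in cells.items()
            if sum(v) / len(v) > threshold}

samples = [
    (28.6, 77.2, 95.0),  # Delhi-area NO2 reading (invented)
    (28.7, 77.1, 88.0),
    (12.9, 77.6, 30.0),  # Bengaluru-area reading (invented)
]
print(hotspot_cells(samples))
```

A proper pipeline would use a real clustering algorithm over the extracted satellite grid and then hand the hotspot centroids to geopandas/matplotlib for mapping.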
Air_Pollution_Hotspots
Analysis and identification of air pollution hotspots using satellite data in a cheaper and more universally functional manner, unlike local devices set up only in urban areas costing $300 per device.
['Kiranjot Kaur']
[]
['bootstrap', 'css', 'data-visualization', 'html', 'javascript', 'machine-learning', 'python']
63
9,982
https://devpost.com/software/digiclass-b7m0gk
Chat Room Video Join/Leave Room Discussion Board Video Call Class Dashboard Inspiration While we were back in school, before the COVID-19 pandemic, there was one prominent problem in our classes. While many platforms existed for teacher-to-student and student-to-teacher communication, there was nowhere for students to communicate with each other and help each other out. In fact, some of our classmates tried to solve this problem by creating group chats for classes on apps like Instagram and Snapchat. However, a large concern with such a solution is that these social media apps are inherently distracting; we didn't join these group chats ourselves because we were concerned that they would be detrimental to our education, as chats often went off-topic and inappropriate content was sometimes posted. A platform encouraging classmates to help each other out would greatly enrich the learning experience for everyone. As schools began to close down due to COVID-19, this problem became even more pertinent, as it prevented struggling students from getting the help they needed through after-school study sessions, peer tutoring sessions, and the like. Even though these meet-ups might still be possible, they are a lot harder to organize and execute, leaving many struggling students with insufficient help and few ways to get it. What it does To solve these issues, we created DigiClass, a platform that encourages student-to-student interaction in an optimal and incentivizing manner. Currently, there are three main parts to DigiClass: a discussion page, a question page, and a room page. On the discussion page, students can discuss with one another in real time for short and simple things, such as the textbook pages for homework, or whether there is a test the next day. For more detailed questions relating to the current subject or lesson, students would use the questions panel. 
It's based on a similar system to StackExchange, where students can ask questions and the most upvoted ones bubble to the top. This way, teachers and other students are able to get a good overview of the most common questions about a certain subject or lesson, and instead of the inefficient method of having the teacher answer every student's question separately through email, every student can benefit from this question page and teachers also don't need to repeatedly answer individual questions. In addition, we understand that teachers don't always have enough time or resources to help every student and answer every question. That's why we made it possible for students to also answer one another's questions. Answers to questions can also be upvoted, and each question can also be approved by the teacher for official confirmation. This way, instead of teachers spending their time writing out answers to every question, their workload is now reduced to simply reviewing the answers and correcting them as necessary. By answering other students' questions, students can gain "reputation", which can be used to give them higher priorities in other features for the app. For example, when students send teachers private questions or messages, instead of being sorted by the time sent, they are instead sorted based on the reputation of the students. This way, students will have an incentive to gain reputation as they will be able to receive quicker and more prioritized responses from teachers to their questions and emails. The last page is a room page, where students can connect with one another through video chat. Instead of forcing students to download other distracting applications such as Instagram or Discord for video calling, Digiclass comes with a video call feature for students to directly connect to one another. 
This video calling feature can be used in a variety of ways, ranging from one-on-one help to a quiet virtual study session to a place to communicate with one another about their results on a quiz or test. This page also provides more than just a place to video chat, as it also provides integrations with other educational apps. Here, you can see that we've integrated with Tabulo, a digital whiteboard that greatly facilitates visual teaching, as you can draw from your phone onto the computer instead of struggling with a mouse. We made it so that when you upload a picture through DigiClass and open it via Tabulo, it will automatically create a room for you with the picture you used in the background, so that you can immediately share your room with others and start helping or receiving help with minimal friction. This could be helpful in a variety of situations—not just when you have a worksheet that you need help on. By using this integration, you can allow others to write on the same worksheet as you, so they can correct your steps and also write out the correct steps in real time. DigiClass also features a mobile application so that you won't miss a message, and can chat with friends on the go. How we built it We used a lot of different technologies to turn DigiClass into reality. The front end of the application is built with Vue.js and the Quasar Framework, which gives the app a standard and beautiful layout with Material Design. The backend server is a Node.js server using Express, and uses PostgreSQL for accounts, and Redis and Socket.io for real-time publishing and subscriptions to the client. The video calling feature is built with WebRTC, PeerJS and vanilla JavaScript. Challenges we ran into As we were using somewhat uncommon technologies, we had to install PostgreSQL and Redis, which wasn't as easy as we expected, especially on Windows. We spent a lot of time debugging and tracking down errors related to the installation of these tools. 
In addition, a lot of these technologies were new, such as WebRTC and the combination of Redis + Socket.io, and so we spent a lot more time debugging these unfamiliar and uncommon bugs we were encountering. Accomplishments that we're proud of As we started approaching the submission deadline, we began to pick up the pace and development was quickly underway. However, as we sped up the process, many more bugs came up and cross-OS compatibility was becoming more and more of an issue. Thankfully, we were able to solve them very quickly due to our experience of working with previous bugs and being able to resolve them quickly. We are also proud of how we were able to troubleshoot, integrate and learn so many different tools in such a short amount of time. What's next for DigiClass Of course, as all of this was built in only 36 hours, we believe that DigiClass has a ton of potential in the future to be a standard app in all virtual and physical classrooms, as it can unlock the hidden value in providing an easily accessible platform for students to assist one another. Built With javascript peerjs postgresql quasar redis socket.io vue.js webrtc Try it out github.com github.com a-student-to-student.space
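The two incentive mechanics described above, upvoted questions bubbling to the top and teacher inboxes sorted by student reputation, are at their core just sort orders. A minimal sketch with hypothetical field names (not DigiClass's actual data model, which lives in PostgreSQL behind a Node.js server):

```python
def sort_questions(questions):
    """Question page: the most upvoted questions bubble to the top."""
    return sorted(questions, key=lambda q: q["upvotes"], reverse=True)

def sort_inbox(messages):
    """Teacher inbox: private messages ordered by the sender's reputation
    (earned by answering classmates' questions), not by time sent."""
    return sorted(messages, key=lambda m: m["reputation"], reverse=True)

inbox = [
    {"sender": "amy", "reputation": 10},
    {"sender": "ben", "reputation": 42},
]
print([m["sender"] for m in sort_inbox(inbox)])

questions = [
    {"title": "How do limits work?", "upvotes": 3},
    {"title": "When is the test?", "upvotes": 12},
]
print([q["title"] for q in sort_questions(questions)])
```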
DigiClass
A classroom application with many essential features integrated
['Kevin Qu', 'Leon Si', 'Kevin Xu']
[]
['javascript', 'peerjs', 'postgresql', 'quasar', 'redis', 'socket.io', 'vue.js', 'webrtc']
64
9,982
https://devpost.com/software/graphui
GraphUI - Get your idea working in seconds. Builder - Start your journey with this simple but powerful UI creation tool. Project - View your entire app's structure at a glance. Editor - Fine tune each component to your liking. Preview - See exactly what your app is going to look like- updated live as you edit. Queries - Easily build your queries by typing exactly what you want to get via intelligent suggestions. Run and Debug - Build and run your apps, see them in action! Inspiration As a beginner, entering the scary world of app development can be a frighteningly difficult task. Each and every tool you use has its own quirks and complications that can throw you for a loop. Rarely do you have a visual representation of what is going on, and when you do, it's coupled with complicated concepts like "constraints." Your final project is often broken into "logic" and "visual" sections, and it's often left unclear how these two interact. When I was a beginner, I struggled to figure out simple things in frameworks like UIKit, Android XML and Windows Forms. It was a real uphill battle. I had a secret word for these tools, a "comprimisolution" - a tool that could solve your problem, but you'd have to give up something nice, like intuitiveness, ease of use, no code, etc. Recently, I was teaching a friend a tool called Vue.js, and he ran into the same issues I had when I was a beginner. After that session, I began thinking about ways we could streamline and simplify the process of building an app. What it does GraphUI is the solution to this problem. It's a strong "all-in-one" app development tool that helps your app get out of your mind and onto your device. It was designed with intuitiveness and ease of use in mind to help beginners get comfortable with the tool. To achieve this, GraphUI sports a streamlined but powerful UI builder that reflects how your app will look live. 
Usually, when you want to bring functionality to your app (via the internet), most frameworks will just give up and force you to go back to your programming language. GraphUI brings life to your app in a unique way: super easy to use queries. Thanks to technologies like GraphQL, it's as simple as typing in exactly what you want to get ("I would like the user's email please") and just dragging it wherever you want in your app. GraphUI will even suggest helpful queries as you are building your app to help you get your work done that much faster. Using GraphUI, you can create your app in minutes, completely working, without writing a single line of code. How I built it I used Vue.js and Tailwind to create most of the builder interface. It was a lot of fun! Tailwind makes designing everything so fast and easy, and you end up with this pretty clean look at the end. GraphQL has an interesting property that made lots of the helpful parts of GraphUI possible: introspection. This means, theoretically, a user could enter some website like papascooking.com/graphql and any program could instantly recognize all the possible requests it can make to that website (ex. makePizza, serveDinner). Through introspection, I can make conclusions about what the user is trying to get, and build an actual GraphQL query representing exactly what they want. It also lets me give them some cool suggestions and information about what they're querying. Challenges I ran into One of the hardest problems I had to tackle was allowing the user to create lists of items from queries. I wanted the user to be able to ask for a list of "books," and have the app create a different element for each book, each containing their title, picture, author, etc. This turned out to be much more difficult than I initially thought it would be... I had to take about half an hour planning on paper, and I eventually came up with an infallible plan to make it work! Now it works! 
It was a real joy to see that get crossed off my list. Accomplishments that I'm proud of I had no clue I could make a whole UI editor and query builder within 36 hours. However, as time went by, I realized I got a lot more done than I expected. I'm proud of the number of different configuration options, and how easy they are to change. I'm really proud of how polished the whole experience feels; I am almost confident this could go straight into production! What I learned I pushed out of my comfort zone for this project and forced myself to use tools like Vuex and GraphQL. I also got more comfortable with the inner workings of Vue.js, a framework I wasn't completely confident with going into the project. What's next for GraphUI I'm planning to add an option to export your program to some text you can share with your friends (I hope I finish this before the hackathon is over). Creating and sharing something cool with my friends is one of the core reasons I keep programming, and I want anyone who happens to visit the site to have that experience too. I think it would be interesting to see what people would create with a creation like this. I absolutely dislike how most modern app frameworks force you into this 50% storyboard, 50% code relationship with your project. If you want to develop for the web, your project will be split into at least 3 different languages (HTML, CSS, JS, maybe PHP, maybe SQL, and a bunch of frameworks if you want to meet your deadline), and this is completely unfair to any beginner. Many of my recent projects have been working towards addressing this "split project" issue, and I'm really happy to add GraphUI to that league. Built With graphql tailwindcss vuejs Try it out desgroup.me github.com
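The query-building idea, turning a selection like "the user's email" into a real GraphQL query, amounts to nesting a field path. A toy sketch of just that string construction (a hypothetical helper, not GraphUI's actual code, and ignoring arguments and introspection):

```python
def build_query(path):
    """Turn a field path like ["user", "email"] into a GraphQL query string
    by wrapping each field around the one selected beneath it."""
    body = path[-1]
    for field in reversed(path[:-1]):
        body = f"{field} {{ {body} }}"
    return f"query {{ {body} }}"

print(build_query(["user", "email"]))
print(build_query(["user", "profile", "avatar"]))
```

The real builder additionally consults the introspected schema to validate the path and offer suggestions, but the emitted text has this nested-selection shape.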
GraphUI
The creation tool that takes the complicated parts out of your dream project.
['Taylor Whatley']
[]
['graphql', 'tailwindcss', 'vuejs']
65
9,982
https://devpost.com/software/spaceduck
Inspiration I took inspiration from an MLH rubber ducky sticker that I found on my desk. It's better to make something rather than nothing, so why not make a cool looking website? There's no user authentication or statistical analysis to it, it's just five rubber duckies vibing out in the never-ending vacuum of space. Wouldn't you want to do the same? What it does Sits there and looks pretty (cute). How I built it + challenges + accomplishments I drew everything on my own and used some nifty HTML/CSS to put it all together in the end. It was a pretty simple idea, so the execution reflected it. There were next to no challenges/obstacles save for the obligatory CSS frustration when something doesn't work exactly like you want it to. In the end, however, I'm proud that I was able to handle everything from the design to the execution with little to no hassle and finish up within only a few hours. All in all, it was pretty cool. Built With css3 html5 Try it out rubberduckiesin.space
space(duck)x
Astronaut duckies are stuck in space when their space station breaks down from unknown causes - witness their plight as they each try their (personal) best to get things back on track!
['Yuxi Qin']
[]
['css3', 'html5']
66
9,982
https://devpost.com/software/hello-xac6dl
Home Page Browse Categories Available Products Custom Grocery Lists Routing Recommendations Map Inspiration The Canadian economy has taken a tumble since the outbreak of COVID-19 in early March of this year. While these circumstances have hindered all businesses from operating and staying open, local businesses have been the most affected. Roughly four in five Canadians are worried that their favourite local businesses will shut down as a result of the COVID-19 pandemic, as found in a poll conducted by the Canadian Federation of Independent Business (CFIB) in mid-August 2020. The poll also concluded that roughly 158,000 small businesses will close by the end of the pandemic as the business owners face lower sales, regulations on location capacity, and increased spending on personal protective equipment. Statistics Canada found that small businesses were more likely to report that revenues from their first quarter in 2020 were down by 20% from their first quarter of 2019 when compared to businesses with more than 100 employees. Closer to home, Charlie Lin, owner of the Sky Dragon Restaurant in Toronto, stated, “This virus really hurt our business. It cut down our business almost 50% to 60%.” It is evident that local businesses need support from their community in order to make it through these circumstances. 95% of Canadians feel that supporting local businesses is the key to keeping the economy healthy, as per the poll from the CFIB, and we decided to make it easier by creating a mobile application to benefit the local businesses we depend on. What it does Local+ is a mobile application that can be used by Canadians to support our local businesses. Our expected features included the ability for users to search for common grocery items using the catalogue feature and add them to a grocery list. 
Then, users can select which of their created lists they wish to shop for and choose between the quickest route, the route with the cheapest products, or the route with most products from their selected list in one store. From here, users are given an optimized route through a Google Maps API to take to get to these store locations and shop! Unfortunately, we were unable to fully accomplish the feature to create a list or map out the routes desired using Google Maps. How we built it Local+ is built in the Flutter Framework using Dart. The backend utilizes AutoCode for Airtable APIs, Google Maps API, and a Geolocator API for features such as searching available products, creating your grocery list, and mapping a route to pick up all the items on your list. Challenges we ran into One of the major challenges we faced was implementing the Google Maps API. It was difficult because we had a hard time incorporating aspects of the open-source Google Maps Widget into our application in a way that would work seamlessly within our application. Additionally, we only had a little knowledge of Dart prior to the hackathon, which led to a lot of syntax-related learning to resolve several bugs. Lastly, we were not able to fully execute all of our initial planned features so we had to work around that to find alternatives. Accomplishments that we are proud of Looking at our challenges is a perfect lead into our most notable accomplishments, which starts with the completion of our application. There were several bugs that we faced while creating the app, so we were very proud once almost everything came together at the end. Additionally, the Airtable databases worked exactly as we needed, which further enhanced the capabilities of our application. Local+ is the first project we have completed that makes use of these features and capabilities, and we are truly proud of the final product. 
What we learned Without a doubt, we learned a tremendous amount about app development in Flutter and its API integration. This allowed us to create a functional application with references to backend Airtable databases and Google Maps, both APIs we had no prior experience with in any language. We also learned the importance of planning after we brainstormed many ideas at the beginning but did not know how to move forward. Taking the time to make a rough schedule helped everyone stay on track towards our set goals and deadlines. What's next for Local+ Local+ currently does not have access to the inventory of nearby local businesses, including information about what products are available and their prices. The next steps for Local+ include establishing a method to fetch data from each store’s inventory and link it securely to the app. This way, users have real-time data about the businesses near them. We also want to allow users to create their own unique lists and add to them as necessary, then calculate the route options. Lastly, we would like to further develop the Google Maps integration for a more user-friendly experience in the application. Built With airtable autocode dart flutter framework geolocator google-maps Try it out github.com
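The "quickest route" option could be approximated with a greedy nearest-neighbour hop between stores. A toy sketch using squared Euclidean distance on flat coordinates (the real app, written in Dart, defers routing to the Google Maps API; the store names and coordinates here are made up):

```python
def greedy_route(start, stores):
    """Order store visits by repeatedly hopping to the nearest remaining
    store. A simple heuristic, not an optimal travelling-salesman solve."""
    route, pos, remaining = [], start, dict(stores)
    while remaining:
        name = min(remaining,
                   key=lambda n: (remaining[n][0] - pos[0]) ** 2
                               + (remaining[n][1] - pos[1]) ** 2)
        pos = remaining.pop(name)
        route.append(name)
    return route

stores = {"Bakery": (0.0, 2.0), "Grocer": (5.0, 5.0), "Butcher": (1.0, 1.0)}
print(greedy_route((0.0, 0.0), stores))
```

The "cheapest products" and "most items in one store" options described above would swap the distance key for a price or coverage score over the selected grocery list.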
Local+
With 82% of Canadians worried that their favourite local businesses will close down as a result of the pandemic, Local+ is an app to plan grocery spending while supporting local businesses.
['Shaili Kadakia', 'Adeit Dalal', 'Kush Kansara', 'Joshua Johnson']
[]
['airtable', 'autocode', 'dart', 'flutter', 'framework', 'geolocator', 'google-maps']
67
9,982
https://devpost.com/software/brain-controlled-cs-go
Built what is to my knowledge the most complex brain-controlled video game to be created. You can play a first-person shooter just using brain signals. You can move around, aim and shoot. Built With openbci python steam
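One plausible shape for such a control loop is a mapping from classified brain-signal events to game inputs. The event labels and action mapping below are pure assumptions for illustration, not the project's actual OpenBCI pipeline:

```python
# Hypothetical mapping from classified EEG events to game inputs.
ACTIONS = {
    "jaw_clench": "shoot",
    "left_motor_imagery": "move_left",
    "right_motor_imagery": "move_right",
    "blink": "jump",
}

def to_actions(events):
    """Translate a stream of classifier labels into game actions,
    dropping anything the classifier could not recognize."""
    return [ACTIONS[e] for e in events if e in ACTIONS]

print(to_actions(["blink", "noise", "jaw_clench"]))
```

In a real setup the resulting actions would be fed to the game as synthetic key and mouse events.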
Brain Controlled CS:GO
Playing a First Person Shooter Game (like Call of Duty) only using signals from your brain
['Mayank Jain', 'Samyak Jain']
[]
['openbci', 'python', 'steam']
68
9,982
https://devpost.com/software/insur-al
Cover Image Our Focus Tech Stack Sneak Peek Our Team Inspiration After conducting extensive internal and external market research, our team discovered that customer experience is one of the biggest challenges the insurance industry faces. With the rapid increase in digitalization, customers are seeking faster and higher-quality services where they can find answers, personalize their products and manage their policies instantly online. What it does Insur-AI is a fully functional chatbot that mimics the role of an insurance broker through human-like conversation and provides an accurate insurance quote within minutes! You can check out a working version of our website at: insur-AI.tech How we built it We used ReactJS and Bootstrap, along with some basic HTML & CSS, for our project! Some of the design elements were created using Photoshop and Canva. Accomplishments that we're proud of Creation of a fully personalized report of an Intact insurance premium estimate, including graphical analysis of price and ways to reduce insurance premium costs, in a matter of minutes! What's next for Insur-AI One of the things we could work on is the integration of Insur-AI into https://www.intact.ca/ , so prospective customers can have a quick and easy way to get a home insurance quote! Moreover, the idea of a chatbot can be expanded into other kinds of insurance as well, allowing insurance companies to reach a broader customer base. NOTE: There have been some domain issues due to configuration errors. If insur-AI.tech does not work, please try a (slightly) older copy here: aryamans.me/insur-AI https://www.youtube.com/watch?v=YEU5eBp_Um4&feature=youtu.be Built With bootstrap css html react Try it out insur-AI.tech github.com
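A quote chatbot like this ultimately feeds the answers it collects into some pricing rule. A toy home-insurance premium estimator, with every coefficient and parameter name invented for illustration (Intact's actual pricing model is not public and is certainly more involved):

```python
def estimate_premium(home_value, postal_risk, deductible):
    """Toy annual premium: a base rate on home value, scaled by a
    location risk factor, discounted for higher deductibles.
    All numbers here are made up for illustration."""
    base = home_value * 0.003            # invented base rate
    discount = min(deductible / 10000, 0.2)  # capped deductible discount
    return round(base * postal_risk * (1 - discount), 2)

print(estimate_premium(home_value=500000, postal_risk=1.1, deductible=1000))
```

The "ways to reduce insurance premium costs" section of the report could then be generated by re-running such a rule with, say, a higher deductible and reporting the difference.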
Insur-Al
A chatbot that provides Intact insurance quotes in minutes!
['Aryaman Singh', 'Sarah Chun', 'Szwina Yip', 'Aadar Gupta']
[]
['bootstrap', 'css', 'html', 'react']
69
9,982
https://devpost.com/software/my-journal-cbt
Inspiration
Due to COVID, many employees have been left to work online. Being home 24/7 can come with struggles that are unique to each employee, such as:
- struggles with relationships at home
- financial struggles
- mental health
- and many more
As a company/business/team that cares for the well-being of all of its employees, it is difficult to tackle such a wide range of problems.
What it does
Since it may not be feasible to provide help for each source of the problem, the first step that a company/business/team can take is to help relieve the resulting negative perspectives. By helping them, each employee will be more efficient and healthy. My Journal is a Slack app that allows businesses to provide positive perspectives for their employees. Cognitive Behavioral Therapy (CBT) has been well researched and is known for its effectiveness in resolving negative perceptions about one's situation or self. CBT is well known for its thought-journal exercises, a series of prompts that lead one to understand their feelings better. By following the questions, users are able to recognize their negative thoughts and take action to replace them with positive ones. My Journal provides an easy and simple format for employees to start a simplified version of CBT. Since the simple act of answering the questions can itself provide therapy, and to allow a safe environment for honesty, the answers are not stored. With only 3 questions, employees are able to summarize both happy and stressful situations while working remotely:
1. Describe the situation
2. How I felt at that time
3. What positive characteristics did you learn about yourself through this situation?
If the user is able to identify positive characteristics about themselves, the positive characteristic will be stored in a private chat so that they can be reminded of it in the future. If they list a negative characteristic, My Journal sends a quote of encouragement or self-love to the private chat. The determination of a positive or negative response was done through Microsoft Azure Cognitive Services Text Analytics sentiment analysis. By continuously journaling their situations, both good and bad, users will be able to keep a log of all of their positive characteristics and words of encouragement.
How I built it
This is built using Autocode. It uses the Slack API to gather and send information, and uses Microsoft Azure Cognitive Services, specifically the sentiment capability from Text Analytics, to determine the positive and negative comments that a user makes in their answer.
Challenges I ran into
The challenge was to create an easy and simple questionnaire that could be completed quickly. Since the original CBT worksheet includes at least 8 questions, it was difficult to narrow down and combine the questions while keeping them effective.
Accomplishments that I'm proud of
- First time using Autocode
- First time using Microsoft Azure
- First time incorporating machine learning (Text Analytics)
- Used the Slack API, and Figma for icon design
- Short and easy questionnaire so that users won't be intimidated and can use it daily
- Made the app completely anonymous with no storage of specific incidents (as the information is personal and private)
What I learned
I learned how to use Autocode and Microsoft Azure for machine learning. I also educated myself about Cognitive Behavioral Therapy, how it is effective and how it is beneficial for everyone.
What's next for My Journal CBT
- machine learning to identify when the user may need to be notified of nearby help centers or help phone lines
- create a larger, more personalized set of quotes
- allow for integrations with other platforms
- extend to mobile platforms, for individual use
Built With autocode azure figma javascript microsoft slack-api
Try it out autocode.com github.com
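A minimal sketch of the third-question routing described above, assuming the sentiment label has already been returned by Azure Text Analytics (the call itself is omitted, and the quote list and function names are hypothetical):

```python
import random

# Hypothetical stand-ins for the app's encouragement/self-love quotes.
ENCOURAGEMENT_QUOTES = [
    "Be kind to yourself: you are doing the best you can.",
    "Every situation you get through adds to your strength.",
]

def route_answer(sentiment: str, answer: str, private_log: list) -> str:
    """Route the user's answer based on its sentiment label.

    `sentiment` is the label ("positive", "neutral", "negative") that a
    text-analytics service such as Azure Text Analytics would return.
    Positive characteristics are stored so the user can revisit them;
    other answers trigger a quote of encouragement instead. Note that
    only the characteristic or quote is logged, never the incident.
    """
    if sentiment == "positive":
        private_log.append(answer)
        return answer
    quote = random.choice(ENCOURAGEMENT_QUOTES)
    private_log.append(quote)
    return quote
```

In the real app the private log would be a Slack private chat rather than a list.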
My Journal CBT
My Journal is a Cognitive Behavioral Therapy derived Slack Application that businesses can provide remotely for their employees.
['Haeim Lee']
[]
['autocode', 'azure', 'figma', 'javascript', 'microsoft', 'slack-api']
70
9,982
https://devpost.com/software/ifridge
[Screenshots: welcome page; register and sign-in pages connected to Firebase Authentication; home page; adding food to the fridge via Cloud Firestore; list of the user's food items from Firestore; shopping cart (to be finished)]
Inspiration
Our inspiration came from our first year in university, as we all lived without our parents for the first time. We had to cook and buy groceries for ourselves while trying to manage school on top of that. More often than not, we found that the food in our fridges or pantries would go bad while we spent money on fast food. In the end, we were all eating unhealthy food while spending a ton of money and wasting way too much food.
What it does
iFridge helps you keep track of all the food you have at home. It has a built-in database of expiration dates for certain foods. Each food has a "type of food", "days to expire", "quantity", and "name" attribute. This helps the user sort their fridge items based on what they are looking for. It is also able to find recipes that match your ingredients or the foods that will expire first. It has a shopping list where you can see the food you have in a horizontal scroll. The vertical scroll on the page shows what you need to buy in a checklist format. The shopping list feature helps the user while they are shopping for groceries. No more wondering whether or not you have the ingredients for a recipe you just searched up: everything is all in one place. When the user checks a food off the list, it will ask for the quantity of the food and input it automatically into your fridge. Lastly, our input has a camera feature that allows the user to scan their food into their fridge. The user can manually input it as well; however, we thought that providing a scanning function would be better.
How we built it
We built our project using Flutter (Dart). We built in login authentication with Firebase Authentication and connected each user's food to the Cloud Firestore database. We used the Google Photos API to take pictures for our input and scan the items in the photo into the app.
Challenges we ran into
A challenge we ran into was working with Dart streams, specifically so that the stream only read the current user's data and added only to the current user's database. Also, learning about the different widgets, event loops, Futures and async patterns that are unique to Flutter was challenging, but lots of fun! Another challenge we ran into was keeping track of whether the user was logged in or not. Depending on whether there is an active account, the app must display different widgets to accommodate the needs of the user. This required the use of streams to track the activity of the user. We weren't familiar with Git either, so in the beginning a lot of work was lost because of merging problems.
Accomplishments that we're proud of
We are so proud to have a working app that allows users to create accounts and input data. This was our first time using databases (we had never heard of Firebase before today) and our first time using Flutter. We'd never even used GitHub before to push and pull files. The Google Photos API was an enormous challenge, as this was also a first for us.
What we learned
We learned a lot about Flutter and Dart and how they work, how to implement the Google Photos API, and how to access and rewrite information in a database.
What's next for iFridge
There are many features that we want to implement. This includes a healthy-eating tracker that helps the user analyze which food categories they need more of. Eventually, the recipes can also cater to the likes and dislikes of the user. We also want to implement a feature that allows the user to add all the ingredients they need (ones that aren't already in their fridge) to their shopping cart. Overall, we want to make our app user-friendly. We don't want to over-complicate the interface; however, we want our design to be efficient and accomplish the needs of the user.
Built With android-studio dart firebase flutter google-cloud-sql google-photos-api visualstudiocode
Try it out github.com
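The food attributes listed above ("type of food", "days to expire", "quantity", "name") suggest a simple data model. Here is a hedged sketch of that model and of sorting by soonest expiry; iFridge itself is written in Dart/Flutter, and these Python names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FoodItem:
    name: str
    food_type: str       # e.g. "fruit", "dairy", "vegetable"
    days_to_expire: int  # days remaining before the item goes bad
    quantity: int

def soonest_expiring(fridge, n=3):
    """Return the n items that will expire first, for recipe suggestions."""
    return sorted(fridge, key=lambda item: item.days_to_expire)[:n]
```

The same ordering could drive the "recipes that match the foods that will expire first" feature.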
iFridge
A modern solution for food waste, money and time saving, and delicious meals.
['Tony Huang', 'Mer Zhang', 'George Zhang', 'YiJie (Jackie) Zhu']
[]
['android-studio', 'dart', 'firebase', 'flutter', 'google-cloud-sql', 'google-photos-api', 'visualstudiocode']
71
9,982
https://devpost.com/software/discussai-q2bnzr
[Screenshots: home page; video conferencing (face share, screen share); searchbox queries and results retrieved from Microsoft Azure Storage; Azure OCR analyzing text images and storing keywords/info; Django API framework; Postman; Heroku; Azure Storage for edited and non-edited images]
What inspired us:
The pandemic has changed the university norm to primarily online courses, increasing our usage of and dependency on textbooks and course notes. Since we are all computer science students, we have many math courses with several definitions and theorems to memorize. When listening to a professor's lecture, we often forget certain theorems that are being referred to. With discussAI, we are easily able to query the PostgreSQL database with a command and receive an image from the textbook explaining what the definition/theorem is. Thus, we decided to use our knowledge of machine learning libraries to filter out these pieces of information. We believe that our program's concept can be applied to other fields outside of education. For instance, business meetings or training sessions can utilize these tools to effectively summarize long manuals and to search for keywords.
What we learned:
We had a lot of fun building this application, since we were new to Microsoft Azure. We learned how to integrate machine learning tools such as OCR and sklearn for processing our information, and we deepened our knowledge in frontend (Angular.js) and backend (Django and Postgres) development.
How we built it:
We built our web application's frontend using Angular.js for our components and Agora.io for video conferencing. On our backend, we used Django and PostgreSQL for handling API requests from our frontend. We also used several Python libraries to convert the PDF file to PNG images, utilized Azure OCR to analyze these text images, applied the sklearn library to analyze the individual text, and finally cropped the images to return specific snippets of definitions/theorems.
Challenges we faced:
The most challenging part was deciding on the ML algorithm to derive specific image snippets from lengthy textbooks. Other challenges we faced varied from importing images from Azure Storage to positioning CSS components. Nevertheless, the learning experience was amazing with the help of mentors, and we hope to participate again in the future!
Built With agora.io angular.js azure django postgresql python sklearn
Try it out github.com github.com
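The writeup applies sklearn to the OCR text, but doesn't specify the model. As a minimal, library-free illustration of the retrieval step only (all names are hypothetical), a keyword inverted index over extracted snippets might look like:

```python
def build_index(snippets):
    """Map each keyword to the snippet ids that mention it.

    `snippets` maps a snippet id (e.g. an Azure Storage image link)
    to the OCR text extracted from that snippet.
    """
    index = {}
    for snippet_id, text in snippets.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(snippet_id)
    return index

def query(index, term):
    """Return the snippet ids whose OCR text mentions the term."""
    return index.get(term.lower(), set())
```

A TF-IDF model (as sklearn provides) would additionally rank the matching snippets rather than just retrieving them.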
discussAI
DiscussAI is a Video Conferencing Tool with ML that allows users to retrieve definitions and theorems based off of course textbooks or course notes.
['Felix He', 'Rolf Li', 'Richard Chen', 'Dennis Bae']
[]
['agora.io', 'angular.js', 'azure', 'django', 'postgresql', 'python', 'sklearn']
72
9,982
https://devpost.com/software/don-t-sit-be-fit
[Screenshots: Don't Sit Be Fit home page; add new workouts; explore user profiles]
Inspiration
With gyms closing down due to Covid-19, it's been increasingly hard for everyone to stay fit and maintain a healthy lifestyle, impacting both productivity and personal well-being. We wanted to build an easily accessible web application that allows users to view, track, and share their workout routines. An online platform where users can share and try new workouts, Don't Sit Be Fit brings the fun into training. 30 minutes of activity per day has never been easier.
What it does
Users can compete to see who has the most popular workout routines as well as who completed the most exercises. In addition, they can discover other users and find their virtual workout buddy. Whether it's through competition or camaraderie, motivation is just around the corner.
How we built it
We used Firebase to host our application and a React.js, HTML, and CSS tech stack to power our web application.
Challenges we ran into and what we learned from them
Through this hackathon, we took the opportunity to explore new tools and technologies, including React.js to power our web application. As we were all relatively new to React, there was a learning curve we had to overcome in order to successfully build Don't Sit Be Fit. While it was challenging to self-learn React and build an end-to-end application all within 36 hours, it was really rewarding to see it all come together in our finished product. Covid-19 also presented us with a unique situation: collaborating on the hackathon remotely. This posed some challenges in coordinating our project, but also allowed us to be more flexible and innovative in the ways we worked together to design and build it.
What's next for Don't Sit Be Fit
We would love to set up more functionality for Don't Sit Be Fit, including allowing users to meet with each other virtually and work out together.
Built With css firebase html javascript react Try it out dontsitbefit.web.app
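The competition feature, ranking users by completed exercises and routine popularity, could be sketched as follows (a minimal illustration; the field names and tie-breaking rule are hypothetical):

```python
def leaderboard(users):
    """Rank users by exercises completed, breaking ties by routine likes."""
    return sorted(users, key=lambda u: (u["completed"], u["likes"]), reverse=True)
```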
Don't Sit Be Fit
Your Personal Fitness Assistant
['Amy Chen', 'Jennifer Liang', 'Henry Tang']
[]
['css', 'firebase', 'html', 'javascript', 'react']
73
9,982
https://devpost.com/software/pool-sampler
[Images: logo; California prediction; neural network; pool sampling]
Check out our site at: https://rapid-processor.herokuapp.com/
Our slideshow at: https://www.beautiful.ai/player/-MFQZr8ue7Jp0jHtY16q
Inspiration
COVID-19 has significantly impacted everyone's lives and we are all eager to return to the pre-pandemic lifestyle. One of the most effective ways of stopping the spread is mass testing: identifying and isolating those who are infected. However, in some of the hardest-hit regions around the world, the bottleneck of COVID-19 testing is often not gathering samples, but processing them. In certain areas of the US, for example, samples can take, on average, up to 2 weeks to process.[1] On a personal level, that means individuals with mild or no symptoms could be going to public spaces like beaches and parks, and spreading the virus for two full weeks unknowingly. And those with a weak immune system could face hospitalization or even death. We hope to reduce the processing time, lower the spread of this deadly virus, and prevent deaths. A technique commonly used to tackle a problem of this kind is pool sampling. In essence, instead of testing one sample at a time, samples are tested in batches. Individual samples in a batch are only retested if the batch returns positive (at least one positive COVID-19 case is present in the batch). This technique has the potential to be very effective, but only at the most optimized batch size.
What it does
Our application predicts the number of active COVID-19 cases and tested cases, giving us an active rate (active cases divided by tested cases), and calculates the optimal batch sizes for pool sampling.
How we built it
Our application fetches data from Johns Hopkins University's (JHU) COVID-19 database, which hosts real-time data from around the world. With the data provided by JHU, our application is applicable everywhere. With the provided data, our trained recurrent neural network predicts the active rate of the coming days based on past data. We then determine the optimized batch size for the near future in each region to achieve the fastest processing time. The frontend is built on React in JavaScript and the backend is built with Flask in Python. Our predicted COVID-19 cases, tested cases, and active rates, which we use to produce the optimal batch sizes, were produced using machine learning in Keras (TensorFlow). We used a 14-day window to predict the next week's results and iterated the process for subsequent weeks. We use mean/max preprocessing on our variables. Our network was a recurrent neural network with two LSTM layers of 16 recurrent units each and one dense layer with 8 units, followed by a dense linear-regression output. We used tanh and ReLU activations. Training was done stochastically on the given data with early stopping, using an Adam optimizer and a learning rate of 5e-4. Each day, we take newly uploaded data from Johns Hopkins University and perform online learning, updating our model automatically. We then obtain the daily active rate from our algorithm and output the optimal batch sizes.
Challenges we ran into
We faced two major challenges during this hackathon. Initially, we planned to build our app on Azure. We were able to deploy it successfully with a continuous-integration pipeline but were unable to display the user interface. The second challenge we faced was with training neural networks. We had to rely heavily on local computers to train complex networks, and this was a time-consuming process during our development.
Accomplishments that we're proud of
We are proud that we were able to make a fully functional web application with a sophisticated machine-learning backend with merely three participants. Furthermore, our app could reach hundreds of millions of people and has the potential to make a significant impact on our fight against COVID-19.
What we learned
We learned how to develop and integrate a frontend and backend with a complex machine-learning module in an application. We were able to share our existing computer science knowledge among us. Each of us brought to the table a unique set of skills that made this project possible.
What's next for Pool Sampler
We will integrate more cloud services and host our application on platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud to fully harness the power of cloud computing. We also look forward to collaborating with laboratories and local governments to reduce their testing time and save lives.
[1] https://www.mercurynews.com/2020/07/29/coronavirus-why-your-covid-19-test-results-are-so-delayed/
Built With apexcharts git gitpython javascript keras node.js numpy pandas python react tensorflow
Try it out github.com
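The writeup doesn't state its batching formula, but the classic Dorfman pooling model illustrates why an optimal batch size exists. A sketch, assuming one pooled test per batch plus individual retests of every sample in a positive batch (the project's actual optimizer may differ):

```python
def expected_tests_per_person(p, b):
    """Dorfman pooling: expected tests per sample at prevalence p, batch size b.

    One pooled test shared by b people (1/b each), plus b individual
    retests whenever the batch is positive, which happens with
    probability 1 - (1 - p)**b.
    """
    return 1 / b + 1 - (1 - p) ** b

def optimal_batch_size(p, max_b=100):
    """Batch size minimizing expected tests per person."""
    return min(range(2, max_b + 1), key=lambda b: expected_tests_per_person(p, b))
```

For an active rate of 1%, this model gives an optimal batch size of 11 and roughly 0.2 tests per person, consistent with the claimed >70% reduction in processing load.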
Intellibatch_HT62020
Mass testing is critical in stopping the spread of COVID 19, but the current testing system is heavily bottlenecked by its processing times. With AI and pool sampling, we are able to reduce it by >70%
['Steven Feng', 'Chuyun Shen', 'Tao Zhang']
[]
['apexcharts', 'git', 'gitpython', 'javascript', 'keras', 'node.js', 'numpy', 'pandas', 'python', 'react', 'tensorflow']
74
9,982
https://devpost.com/software/euphoric-lyric
Inspiration
Music! Have you ever heard a tune and thought, "this would be the perfect melody for a song about ____"? We wanted to create a unique AI model that would write the words to our music, testing the creativity of artificial intelligence.
What it does
Euphoric Lyric takes your music and, using its vast knowledge of songs written and sung by humans, generates lyrics that match the overall vibe, rhythm, and essence of your music.
How we built it
We combined pre-existing popular AI models such as speech recognition and enhanced them with our own layers to create our custom model.
Challenges we ran into
One of the main challenges was cleaning and processing the data available to us online. This was crucial to training our model, and was one of the first big hurdles we had to overcome.
Built With audio2numpy dali glove keras numpy python tensorflow
Try it out github.com
Euphoric Lyric
Euphoric Lyric will be the Bernie Taupin to your Elton John - let it write the perfect lyrics for your melody.
['Selena Liu', 'Stephen Yang', 'Himanish Jindal', 'Isha Sharma']
[]
['audio2numpy', 'dali', 'glove', 'keras', 'numpy', 'python', 'tensorflow']
75
9,982
https://devpost.com/software/localszn
[Images: the problem; LocalSzn, an introduction; our solution; web app main page; web app query search function; web app about page; our data stack; what's next?]
Inspiration
A lot of us aspire to eat healthy, find great deals, and stay eco-friendly at the same time. Unfortunately, it can be difficult to find a balance of the three, especially living in modern cities where our food can travel thousands of miles just to get to our grocery stores. We often don't have time to think about where our food is actually coming from, much less spend time scouting the grocery aisles trying to figure out what's in season, or find the best option for tomatoes or peaches. LocalSzn is here to help you make more eco-friendly decisions when choosing your produce, helping you save money, support your local economy, and be more mindful of what you eat along the way. We believe it's important to support our local food growers, local economy, and our environment. LocalSzn brings produce sourced nearby to your attention, helping reduce your ecological footprint by making it easier for you to avoid buying produce sourced from far away whenever possible. Produce from nearby can be tastier and fresher, and is ultimately more ecologically and economically sustainable, since you're helping local farmers and your local economy. We want to help you be more mindful of where your food is coming from. While it might be easy to pick up the first bag of apples you see on the shelf, we believe it's important to help you find more sustainable alternatives. If the same kind of apple (or another type of apple) you're looking for is in season from a nearby province instead of all the way from California or Spain, we want to keep you in the loop. Helping keep you aware of what local food is in season helps you save money along the way. Keeping track of growing seasons and trying to spot a great deal can be almost impossible when you're busy with everything else life has to offer.
With LocalSzn, we also keep track of produce prices throughout the year and help you keep an eye out for what is most likely to be cheaper, locally sourced, and in season. What it does LocalSzn's three main features: Identifies locally sourced produce items in season Identifies produce items most likely to be relatively cheaper than usual Easy user queries to help you find locally sourced, in-season, and wallet-friendly alternatives for your go-to produce item. Identifies locally sourced produce items in season By specifying the province where you do the bulk of your produce shopping, we tailor your feed to help you find produce that's both in season, and sourced from your province or nearby provinces - complete with estimated price information to make your budget-making easier. Identifies produce items most likely to be relatively cheaper than usual By analyzing trends from historical price data, we give you a top-down picture of what to expect price-wise when you walk into a supermarket and look for what we've suggested. We'll tell you what's most likely to be cheapest right now based on previous and current price data, and spot great produce deals for you - even if you don't have a clue what you'd usually have to pay for that mango! Easy user queries to help you find locally sourced, in-season, and wallet-friendly alternatives for your go-to produce item. If you want to look for that one vegetable that you need for that one recipe you really want to make this weekend, but don't know if it's in season or not, don't fret! Type the name of the item into our search bar, and we'll give you the predicted prices, seasonality status, and availability from a local source for any variants we can find of that item - helping you worry less about what you want to buy, and more about how you're actually going to go about cooking that recipe. 
How we built it
Data sources: LocalSzn uses publicly available data covering weekly wholesale market prices across Canada, collected by Agriculture and Agri-Food Canada. This dataset is updated weekly, and entries are updated in our local database through the Open.canada.ca API. Additionally, we used other Open.canada.ca datasets, such as the Annual Crop Inventory and Forecast Yield of Major Crops, as inspiration for exploring our primary dataset for implementation in LocalSzn.
Front End: HTML5 and CSS3 were used to design the layout of the website. The search bar and the list-of-produce web page were implemented using JavaScript.
Back End: Our backend code is a combination of Python, Firebase, and the Firestore library. In Python, we made extensive use of the Pandas and Pycountry libraries to process the data from the original dataset and manipulate it to extract the information we needed.
Puttin' it all together: We brought the whole project together through a mix of Firebase and GitHub. Google Firebase is an incredibly powerful platform for quickly developing and deploying applications. We utilized Cloud Firestore as our NoSQL database to efficiently store and query the weekly wholesale market prices for all commodities. Our app retrieves only the relevant results to offer a smooth end-user experience. Firebase also provides cloud hosting, so deploying the latest version of the React app is only a command away. We also utilized Google Cloud Functions to serve a supplementary Python script for determining the commodities that are currently in season.
Challenges we ran into
Front End: One of the difficulties was navigating the vast number of tools that HTML5 and CSS3 provide for optimizing the web-design process. As such, knowing which functionality to use to efficiently organize and structure the web app was challenging, especially for beginner-to-intermediate programmers.
Back End: One of our main challenges was figuring out how to use Firebase to create a web app, as well as working on compatibility between our original dataset, user input, and other encoding idiosyncrasies. Using Python scripts in tandem with Firebase was also a challenge, as it was tough to figure out how to get them to work with the existing Firebase database. Additionally, we were very ambitious with our original dataset, and found it difficult to make sure everything we set out to do would be completed by submission.
Accomplishments that we're proud of
Designing a web app and putting our programming skills into practice was definitely a huge accomplishment, as for some of us this was the first 'traditional' hackathon we had ever attended. Getting the website to work from the back and front ends was also super rewarding, and we're really proud of that. Fleshing out a viable product targeting real-world needs in such a short amount of time was ambitious, but we got it to work by staying organized, working hard, and communicating effectively.
What we learned
For beginners on the team, it was a strong learning experience when it came to GitHub and how it can be used to collaborate with other programmers. On the front end, we learned how to use more complex features of CSS3, such as responsive-design aspects. We also learned how JavaScript can be integrated to provide functionality to a web page. On the back end, we learned how to use Firebase to effectively analyze and filter our data to provide the infrastructure of our project. Additionally, in wrangling this giant dataset with Pandas, we learned how to deal with conflicting data structures and work easily between dataframes and JSON formats, both through Firebase and standalone scripts.
This connection portion through Firebase was something we all stepped into without any experience, so being able to have a functional project on a platform none of us had used before was a huge learning experience.
What's next for LocalSzn
There is still a wide variety of extensions we would like to implement, including a geolocation API for more precise location tracking and price information, and web scraping to let LocalSzn recommend nearby farmers' markets and tell users whether their desired produce can be found in nearby grocery stores. We'd also like to compare price information between stores to let the user know if they are getting the best deal locally. A "shopping list" feature, where users could list whatever produce they'd like to buy as it comes up, would be useful for keeping users active on the app, and notifications when produce they buy often is on sale would also be nifty, especially in tandem with this "shopping list" feature. On a more data-related note, having graphs for visualizing price trends for produce and providing more in-depth and accurate price tracking/estimation methods would be a priority in this context.
Built With css3 firebase google-cloud html5 javascript nosql python react
Try it out localszn.web.app github.com
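The "most likely to be cheaper than usual" idea can be illustrated with a simple baseline that compares this week's price against the historical average; the 0.9 threshold and all names here are hypothetical, and LocalSzn's actual trend analysis may differ:

```python
def good_deals(price_history, current_prices, threshold=0.9):
    """Flag produce whose current price is below threshold x historical average.

    `price_history` maps an item name to a list of past weekly prices;
    `current_prices` maps an item name to this week's price.
    """
    deals = []
    for item, history in price_history.items():
        avg = sum(history) / len(history)
        if item in current_prices and current_prices[item] < threshold * avg:
            deals.append(item)
    return deals
```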
LocalSzn
An interactive web app created to help you make eco-friendly decisions when choosing produce; helping you save money, support local growers, and reduce your carbon footprint.
['Ziniu Chen', 'Richard Yang', 'Anaïs Rojas', 'Scott Jiang']
[]
['css3', 'firebase', 'google-cloud', 'html5', 'javascript', 'nosql', 'python', 'react']
76
9,982
https://devpost.com/software/peersurance
Inspiration
Natural disasters, accidents, negligence and other such occurrences are part of daily life now, and insuring our belongings provides some relief. However, the process of filing a claim, getting it properly evaluated and finally getting a payment can be long and cumbersome, especially in the present circumstances. We seek to address the pain points of filing a claim, getting it evaluated and getting fair compensation with PeerSurance.
What it does
At its core, PeerSurance is a smart claims-filing and compensation platform. The concept works similarly to modern auto insurance at rental companies.
Step 1: You take a picture of the item you want to insure and file the amount you want it insured for; once the policy is accepted and active, you are insured.
Step 2: A disaster happens, and your insured item is damaged. Oh no!
Step 3: You take a picture of the damaged item and file a claim.
Step 4: Multiple assessors evaluate the claim (the claim is shared) and vote on whether the claim is valid or not.
Step 5: Once a threshold (determined by the insurance policy) of positive votes is reached, the claim is accepted and the payout is processed.
How we built it
Azure Cloud Storage - images and media
Azure Serverless Functions - backend logic, CRUD operations, vote and claim processing
React Native - frontend mobile app
ReactJS
Python
MongoDB - database hosted on Azure
Challenges we ran into
Expo problems on the frontend
Credentials and authentication on Azure
Accomplishments that we're proud of
We overcame our problems and made a working system!
What we learned
Insurance formulas are not trivial!
React Native Expo is very crash-prone
Quality of code produced is inversely proportional to sleep deprivation
What's next for PeerSurance
Incorporate machine learning models to evaluate claims based on uploaded before-and-after images
Domain.com domain: shouldhaveinsuredmy.space
Try it out github.com
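The voting flow in Steps 4-5 can be sketched as follows; the 60% default threshold and all names are hypothetical, since the policy's actual formula isn't given:

```python
def claim_status(votes, total_assessors, threshold=0.6):
    """Decide a claim from assessor votes (True = claim looks valid).

    The claim is accepted once positive votes reach the policy's
    threshold fraction of all assessors, and rejected as soon as
    reaching that fraction becomes impossible; otherwise it is
    still pending.
    """
    required = threshold * total_assessors
    positives = sum(votes)
    if positives >= required:
        return "accepted"
    negatives = len(votes) - positives
    if total_assessors - negatives < required:
        return "rejected"
    return "pending"
```

In the real system this check would run in an Azure Serverless Function each time a vote is recorded, triggering the payout on acceptance.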
Peersurance
Peersurance
['Ebtesam Haque', 'Muntaser Syed']
[]
[]
77
9,982
https://devpost.com/software/pool-v40qim
[Screenshots/GIFs: What is Pool?; view thoughts; add thought by typing; add by voice; the benefits of daily reflection; our tech stack]
Inspiration
We live in a stress-filled society where the average person has over 70,000 thoughts racing through their head every day, but rarely do we ever stop and truly explore those thoughts in a constructive way. Studies show that just 15 minutes of expressive writing a day can lower blood pressure, improve liver functionality, and even increase productivity by up to 23%. Social media has become the default outlet for self-expression, but it's clear that social media has become a toxic space for many, which is why we created Pool.
What it does
Pool encourages you to track your daily thoughts in a safe personal space, almost like a personal, private Twitter with a strong focus on self-reflection. Daily thoughts can be added by typing or talking to the app. You can continue adding or modifying your thoughts until midnight; after that, they'll be preserved as uneditable thoughts. To view your thoughts, you have the option to sort them by most recent, or through a feature called on-this-day, which shows you thoughts from the same date across years, months, or weeks. For example, if I pick Monday, August 17, then I can see thoughts I had every Monday, every 17th of each month, or every August 17th of each year. The app is also able to identify whether a thought's sentiment is positive, neutral, or negative (along with a confidence percentage), allowing you to easily identify and reflect on the different emotions in past thoughts.
How we built it
Pool's frontend was built on React Native and Expo using JavaScript. We designed our prototypes and assets in Figma and Photoshop, then used libraries like React Navigation and Axios (for API calls) to bring those designs to life. Finally, we deployed our app using Snack for everyone to use. Our REST API endpoints for submitting and fetching thoughts were built using Autocode, which allowed us to bootstrap quickly and deploy painlessly. We used a relational database in the form of a Postgres instance hosted in the Azure Cloud for data storage. We made heavy use of Azure Cognitive Services: first, we connected our backend to the Text Analytics APIs for sentiment analysis and key-phrase extraction; then, we connected our mobile app to the Speech-to-Text service to allow users to transcribe their thoughts in real time by speaking, a more natural form of expression.
Challenges we ran into
The biggest challenge we had this weekend was scoping our project and deciding which features to build. We made a lot of designs in Figma and the team overestimated how much we could accomplish in this short time frame, but we addressed this problem by reprioritizing with all team members and narrowing our design down to the most impactful features.
Accomplishments that we're proud of
Working together as a team! We were mostly strangers coming into this hackathon, so being able to combine everyone's strengths in design, frontend, backend, and product direction to create a holistic project is something that we are all extremely proud of. Also, the most important accomplishment: not pulling an all-nighter!
What we learned
We learned how to use React Native and Expo, design using Figma, create an API using Autocode, and dig extensively into Azure Cognitive Services. We all experimented with new technologies this weekend and had lots of fun doing it.
What's next for Pool
We have so many unrealized designs! Here are some features we would like to implement next:
- Stream View and Overview
- Keywords View - word clouds
- Analytics - to track trends
- Dark mode!
Thank you to everyone who helped us this weekend and for making this virtual HackThe6ix an unforgettable event!
-- Alex, Emily, Sally, Shazz
Built With autocode azure-cloud azure-cognitive-services expo.io figma javascript photoshop postgresql react-native snack
Try it out snack.expo.io
Pool
Pool is a personal thought taking app aimed at transforming the mental health space by building habits of daily reflection.
['Alex Hu', 'Emily Hu', 'Sally Zhou', 'Shazz Amin']
[]
['autocode', 'azure-cloud', 'azure-cognitive-services', 'expo.io', 'figma', 'javascript', 'photoshop', 'postgresql', 'react-native', 'snack']
78
9,982
https://devpost.com/software/cirrus-systems
Optimal Locations for Cloud Kitchens Inspiration While Covid-19 has decimated the brick-and-mortar restaurant industry, the online food delivery sector is projected to double in size in the next five years. Passionate about geography and intrigued by the recent rise of ghost kitchens (or cloud kitchens), our team sought to leverage cartographic data to help restaurants expand their operations through ghost kitchens. What it does Cirrus Systems combines Geotab’s extensive databases with other publicly available data to determine the optimal locations for entrepreneurs and companies to set up cloud kitchens. We took into consideration regional population density, rental prices, and traffic congestion, and developed recommendations for cloud kitchen locations with low rent, low congestion, and high customer concentration. How we built it We built our demonstration based on Geotab’s Ignition tool, which aggregates data from fleet vehicles into geography-based statistics. We leveraged the FuelStationMetrics and IntersectionMetrics datasets to determine the best locations for cloud kitchens and optimize for delivery efficiency. Using SQL, we joined data from the two datasets to find the optimal locations for cloud kitchens to reach the most customers and minimize delivery time. Challenges we ran into The large volumes of data needed to form meaningful conclusions often resulted in us having to analyze tens of millions of data points, which dramatically hindered software performance. The difficult part was paring down the data: choosing which data mattered for our analysis and filtering it effectively, preserving the validity of our results while bringing performance up to an acceptable level. Accomplishments that we're proud of None of us have a coding background, and it was a proud moment when we pieced together the datasets that we needed with help from the mentors at Geotab.
It was also a proud moment when the SQL code worked as expected (Toronto-only data points) instead of returning data that we knew was incorrect. At the end, we had just a few simple lines of SQL code that took us hours to get to. But we suppose that’s the fun in coding :) What we learned Our team was fascinated by the power of big data and how combining multiple sources of databases can yield highly useful insights. Coming from Nanotech, Health Sciences, and International Business backgrounds, we certainly learned a lot from our first Hackathon experience. What's next for Cirrus Systems We look forward to engaging with Geotab and using their feedback to improve on this technology. Moreover, we would be interested to further investigate the commercial feasibility of this project. Built With sql
Cirrus Systems
We support hyperscaling cloud kitchens in finding optimized locations for their centralized operations and move the online food delivery industry forward.
['Zhenle Cao', 'Chrs Zhou', 'Wee Nie Tham']
[]
['sql']
79
9,982
https://devpost.com/software/songsmith
🎶 SongSmith | New remixes at your fingertips ! 🎶 SongSmith is your one-stop-shop to create new remixes from your favourite songs! 🎼 State-of-the-art AI Machine Learning Neural Networks are used to generate remixes with similar styles🎨 to your best-loved songs🎶. 📚Discover! 👨‍🎤Inspire! 🎧Listen! SongSmith! Inspiration 💭⚡ Ever listen to your favourite artists🧑‍🎤 and songs⏭️ and think "Damn, I wish there was similar music to this?" We have this exact feeling, which is why we've developed SongSmith for the Music loving people!🧑‍🤝‍🧑 How we built it 🏗️ SongSmith⚒️ was built using the latest and greatest technology!! 💻 Our music generative neural network was developed using a state-of-the-art architecture🏢 called Attention Mechanism Networks . Tech Stack 🔨 AI Model: Tensorflow , Keras, Google Colab BackEnd: Express.js, Flask to run our microservices , and inference servers. FrontEnd: Developed using React.js , and Bootstrap; to allow for a quick MVP development cycle🔃 . Storage: MongoDB, Firebase , Firestore Moral Support: Coffee☕, Bubble Tea🥤 , Pizza🍕 and Passion💜 for AI <3 Challenges we ran into 🧱🤔 Converting the output from the neural network to a playable format in the browser Allowing for CORS interaction between the frontend and our microservices What we learned🏫 Bleeding💉 Edge Generative Neural Network for Music🎙️Production Different Python🐍 and Node.js Music Production Related Libraries📚 What's next for SongSmith ➡️ Offering wider genres of music to attract a wider range of music-loving users !🧑‍🎤 Refining our Neural Network to generate more high-quality REMIXES ! *See Github ReadMe for Video Link Built With bootstrap express.js flask javascript keras node.js python react tensorflow Try it out github.com songsmith.herokuapp.com
SongSmith
🎵 Create new remixes from your favourite songs using new ground-breaking AI!
['alyssaazhang', 'Aditya Mehrotra', 'Jason Hou']
[]
['bootstrap', 'express.js', 'flask', 'javascript', 'keras', 'node.js', 'python', 'react', 'tensorflow']
80
9,982
https://devpost.com/software/prepsci
Inspiration Amidst the global pandemic, it is getting increasingly difficult for students to find study groups, educational support, and more specifically, tutors. We knew that now more than ever, a service that allows students and other users to utilize common resources around them was needed. PrepSci is a platform based on the fundamentals of the sharing economy, which also allows any person to participate in the community, generating jobs and opportunities for all involved. This is something that we thought could be very beneficial to many who suffer from unemployment and lack of educational support due to COVID-19. What it does PrepSci is a mobile/desktop platform that matches tutees with tutors based on subject specificity, price range, location, and other factors. Based on the sharing economy, PrepSci is a space where anyone with a smartphone can get involved, provide educational support, and earn a bit of cash. The algorithm is designed to find a tutor based on the interests selected by the tutee and provide them with their top matches. All tutors that sign up with this app will be required to show proof of experience and/or any certifications, which will be publicly disclosed on the app to help tutees find the right tutor. Tutees are able to find the perfect tutor who is credible and tailored to their needs and schedule. How we built it We started PrepSci with an app prototype, to visualize how we wanted the app to look and which features we wanted to showcase. This allowed us to see the goal we had in mind and note other ideas as next steps. After visualizing, we began to create the backend code in Python. We made classes for the tutors and tutees and began hard-coding instances to see how the database would look in the future.
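A minimal sketch of what tutor/tutee classes plus a matching score could look like. Class names, fields, and weights here are our own guesses for illustration, not PrepSci's actual code:

```python
# Illustrative tutor/tutee classes and a naive matching score: hard filters on
# subject and price, then a preference for local and cheaper tutors.
class Tutor:
    def __init__(self, name, subjects, rate, city):
        self.name, self.subjects, self.rate, self.city = name, subjects, rate, city

class Tutee:
    def __init__(self, subject, max_rate, city):
        self.subject, self.max_rate, self.city = subject, max_rate, city

def match_score(tutee, tutor):
    if tutee.subject not in tutor.subjects or tutor.rate > tutee.max_rate:
        return 0                                              # hard requirements
    score = 1
    if tutor.city == tutee.city:
        score += 1                                            # prefer local tutors
    score += (tutee.max_rate - tutor.rate) / tutee.max_rate   # cheaper is better
    return score

tutors = [Tutor("Aisha", {"math", "physics"}, 30, "Toronto"),
          Tutor("Ben", {"math"}, 20, "Ottawa"),
          Tutor("Cam", {"chemistry"}, 15, "Toronto")]
tutee = Tutee("math", 35, "Toronto")
best = max(tutors, key=lambda t: match_score(tutee, t))
print(best.name)
```

A real version would fold in more of the factors they mention (schedules, certifications, online/offline mode), but the shape stays the same: filter, score, rank.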
Our next steps are to integrate all of this code using Google Cloud's App Engine. Challenges we ran into Given the vast potential of our project in areas such as optimal matching criteria, group study sessions, location optimization, and subject intensity, our team had an extensive discussion at the start of the design process to determine the best approach. This resulted in a slower start to the design process; however, it helped us lay out a tangible pathway to follow and efficiently overcome many hurdles before they grew into major issues. Accomplishments that we're proud of PrepSci is something all of our teammates felt deeply about, as it connected with issues that we were facing personally. Everyone on the PrepSci team will attend university in the fall, where we will be exposed to new learning styles and changes in educational support due to COVID-19. Having an academic community is vital to the success of a student, and so are ample job opportunities in such devastating times of unemployment. PrepSci offers a chance for students to connect with those around them and creates paid opportunities for those looking for work. We are really proud of PrepSci as we truly believe it could make a difference to so many people amidst the pandemic. What we learned Through the process of developing PrepSci, we have gotten yet another chance to appreciate the power of our skills and apply them to the real world to get a sense of their true impact. The opportunity to build both the back-end engine and the business strategy for the project gave us a chance to appreciate the business model and shape it around user interest.
Furthermore, the team experience where every individual brings in a unique skill set formed a dynamic team that resembles project implementation in the real world where individuals from all different fields come together to find the optimal solution through exceptional communication and teamwork. What's next for PrepSci After its launch, PrepSci is intended to increase its scope by incorporating several different features. PrepSci intends to include additional options for tutees to choose from to help tailor their needs more effectively. These options are designed to find a tutor that perfectly incorporates the student’s values, interests, and needs in their teachings. The PrepSci team is also currently conducting research in order to incorporate cryptocurrency to aid with payments and transfers as well as chatbots to help users and expedite the process. We hope to partner with LinkedIn in the near future for user login and credibility as well. Built With proto.io prototype-geoip python Try it out github.com
PrepSci
PrepSci is a mobile/desktop platform that matches tutees with tutors based on subject specificity, price range, location and other factors. Within the sharing economy, PrepSci uses common resources.
['Fatima Jangda', 'Ali Meshkat', 'Yawar Ashraf', 'Eeman Salman']
[]
['proto.io', 'prototype-geoip', 'python']
81
9,982
https://devpost.com/software/dunelist-w20m1v
Get stuff done. Inspiration We looked at ourselves these past couple months during quarantine and thought about what we had been struggling with, and the biggest thing was finding and keeping the motivation to get things done. We all found it super encouraging to be able to check things off and see a record of what we had done during the day, because some days you get a lot done, but other days the little victories are all you can manage. We found that a lot of to-do list and tracking apps on the market right now require a ton of setup from the user to even begin using them. We wanted to change that. What it does Dunelist is a lightweight web-based task manager that helps keep you focused without the heavy setup many typical productivity apps require. It lets you add and remove tasks and sort them via various tags, while also showing you a progress bar to help you see how much of your work you've managed to complete for that day. How we built it We planned the features by talking through our own experiences with existing to-do tools and what we felt was lacking from them. When we were ready to put it into code, we used a React framework along with MongoDB. The team used GitHub to collaborate on the code. The graphics were done with Sketchpad. Challenges we ran into Getting the backend to work took some learning. We started with a dummy database and then moved toward linking the frontend to an actual database to preserve checked and unchecked properties each time the interface reloaded. Accomplishments that we're proud of Linking the backend database, and creating the progress bar. We're also proud of the fact that we managed to create a hack for our first remote hackathon event! What we learned It was a fun challenge learning how to use React and MongoDB to develop a web app from scratch, since none of us have had much experience working in that space. We also learned how many names for to-do list tools are already registered out there!
What's next for Dunelist We want to keep expanding on the functionality of the web app, particularly the categorization of tasks. We want users to be able to sort tasks by category and track a progress bar for each one separately. We'd also like to implement a summary of the week that highlights some of the key things a user completed, introduce an optional way of adding weightings/significance to tasks to alter the speed at which the progress bar fills up, and some general UI updates to make the entire web app a little more enticing to use and interesting to look at. Beyond that, we were playing around with the idea of developing a web extension of the app that would put a little icon in the corner of your screen that you can click to add tasks from anywhere, rather than needing to go to the web app's tab/window. Built With css html javascript mongodb react Try it out github.com
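The two core views described above, tag filtering and the daily progress bar, boil down to a filter and a completion ratio. A minimal sketch (field and function names are ours, not Dunelist's actual code):

```python
# Illustrative task records with the tag and done/undone state Dunelist tracks.
tasks = [
    {"text": "finish demo video", "tag": "hackathon", "done": True},
    {"text": "write devpost page", "tag": "hackathon", "done": False},
    {"text": "water the plants", "tag": "home", "done": True},
]

def by_tag(tasks, tag):
    """Sort/filter view: only the tasks carrying a given tag."""
    return [t for t in tasks if t["tag"] == tag]

def progress(tasks):
    """Fraction of tasks checked off -- what the progress bar displays."""
    return sum(t["done"] for t in tasks) / len(tasks) if tasks else 0.0

print(f"{progress(tasks):.0%} of today done")
print(progress(by_tag(tasks, "hackathon")))
```

Per-category progress bars, on their roadmap, are then just `progress(by_tag(...))` for each tag.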
Dunelist
Get stuff done. A straightforward and low-commitment way to keep yourself focused during these tough times.
['Alicia Pan', 'Michael Hu', 'Ajit Rakhra']
[]
['css', 'html', 'javascript', 'mongodb', 'react']
82
9,982
https://devpost.com/software/womenshealthiswealth-mnlabr
https://docs.google.com/presentation/d/1B5oH27ELnDIZVyHST2fWJnGV1jWtXC_997bXXXNJw-w/edit?usp=sharing womenshealthiswealth Women account for 85 percent of all purchases and drive 70-80 percent of all consumer spending. -Jana Matthews, Professor and director of the Australian Centre for Business Growth, University of South Australia. Yet according to a Global News article posted this February ( https://globalnews.ca/news/6281350/financial-abuse/ ) between "94 and 99 per cent of women in abusive relationships have been subjected to some form of financial abuse", unable to handle their own money, forced to accept terrible credit scores and assume debts they aren't responsible for. Abusers take total control of the household finances and keep women "in the dark about the family’s assets and liabilities and unable to access or open bank accounts". The site uses a single username paired with two different passwords. The shorter password takes you to a webpage with regular posts on women's health (information about menstruation, menopause, breast cancer, bacterial vaginosis, breastfeeding etc.). There's a section on recipes, and home maintenance tips. However, signing in with the same username but a longer/different password reveals a hidden layer of the website where women can access a support network of other women experiencing financial and other forms of abuse, easily connect with resources to help them sort out their finances & budget successfully without their abuser knowing, as well as access help from trained professionals to help them leave their abusive relationship. The YouTube video attached demos our web app.
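The dual-layer login described above amounts to one username mapped to two password hashes, each routing to a different view. A minimal sketch of the idea (the hashing choice, usernames, passwords, and routing names are all our illustration, not the project's actual auth code):

```python
import hashlib

def h(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

# One username, two stored hashes: the shorter password opens the everyday
# health/recipes site, the longer one opens the hidden support layer.
USERS = {"jane": {h("roses1"): "public", h("roses1-safe-harbour"): "hidden"}}

def route(username, password):
    return USERS.get(username, {}).get(h(password), "invalid")

print(route("jane", "roses1"))               # everyday layer
print(route("jane", "roses1-safe-harbour"))  # hidden support layer
```

Because both logins look identical from the outside, an abuser watching the screen sees only the innocuous site; a production version would also need salted hashing and rate limiting.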
To learn more about the questions that drove us & what challenges we participated in check our other link, for a presentation (ppt) here: https://docs.google.com/presentation/d/1B5oH27ELnDIZVyHST2fWJnGV1jWtXC_997bXXXNJw-w/edit?usp=sharing Built With css express.js formik html javascript mysql node.js ramda react semantic-ui-react sequelize Try it out github.com drive.google.com docs.google.com
womenshealthiswealth
A dual-layer website allowing women to regain control of their finances & lives without their abusers knowing.
['Anjali5122 Parikh', 'Dianna McAllister', 'Sarrah Merchant', 'micaela consens']
[]
['css', 'express.js', 'formik', 'html', 'javascript', 'mysql', 'node.js', 'ramda', 'react', 'semantic-ui-react', 'sequelize']
83
9,982
https://devpost.com/software/safety-buddy-l1phnt
Safety Buddy Logo Inspiration As women ourselves, we have always been aware that there are unfortunately additional measures we have to take in order to stay safe in public. Recently, we have seen videos emerge online for individuals to play in these situations, prompting users to engage in conversation with a “friend” on the other side. We saw that the idea was extremely helpful to so many people around the world, and wanted to use the features of voice assistants to add more convenience and versatility to the concept. What it does Safety Buddy is an Alexa Skill that simulates a conversation with the user, creating the illusion that there is somebody on the other line aware of the user’s situation. It intentionally states that the user has their location shared and continues to converse with the user until they are in a safe location and can stop the skill. How I built it We built the Safety Buddy on the Alexa Developer Console, while hosting the audio files on AWS S3 and used a Twilio messaging API to send a text message to the user. On the front-end, we created intents to capture what the user said and connected those to the backend where we used JavaScript to handle each intent. Challenges I ran into While trying to add additional features to the skill, we had Alexa send a text message to the user, which then interrupted the audio that was playing. With the help of a mentor, we were able to handle the asynchronous events. Accomplishments that I'm proud of We are proud of building an application that can help prevent dangerous situations. Our Alexa skill will keep people out of uncomfortable situations when they are alone and cannot contact anyone on their phone. We hope to see our creation being used for the greater good! What I learned We were exploring different ways we could improve our skill in the future, and learned about the differences between deploying on AWS Lambda versus Microsoft Azure Functions. 
We used AWS Lambda for our development, but tested out Azure Functions briefly. In the future, we would further consider which platform to continue with. What's next for Safety Buddy We wish to expand the skill by developing more intents to allow the user to engage in various conversation flows. We can monetize these additional conversation options through in-skill purchases in order to continue improving Safety Buddy and bring awareness to more individuals. Additionally, we can adapt the skill to be used for various languages users speak. Built With alexa amazon-alexa amazon-web-services javascript twilio
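The skill maps each captured intent to a handler that returns the next line of the simulated conversation. Their real handlers are JavaScript on the Alexa Developer Console; this Python sketch, with made-up intent names and lines, only illustrates the dispatch pattern:

```python
# Each intent name maps to a handler; slots carry whatever the user said.
def launch(_):
    return "Hey! I'm here with you, and I can see your location. How's the walk going?"

def keep_talking(slots):
    return f"Got it, you're near {slots.get('place', 'there')}. Stay on the line with me."

def stop(_):
    return "Glad you're safe. Talk soon!"

HANDLERS = {"LaunchRequest": launch, "KeepTalkingIntent": keep_talking, "StopIntent": stop}

def handle(intent, slots=None):
    """Route a captured intent to its handler (fallback for anything unknown)."""
    return HANDLERS.get(intent, lambda s: "Sorry, say that again?")(slots or {})

print(handle("KeepTalkingIntent", {"place": "the parking lot"}))
```

The real skill layers pre-recorded audio, location sharing, and the Twilio text message on top of this loop, and keeps responding until the user ends the skill.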
Safety Buddy
Safety Buddy is an Alexa Skill that allows users to simulate a phone call with a friend in situations where they feel uncomfortable or unsafe.
['Kara Kim', 'Amy Zhao']
[]
['alexa', 'amazon-alexa', 'amazon-web-services', 'javascript', 'twilio']
84
9,982
https://devpost.com/software/obviate-src
The landing page The customization dashboard Obviate Obviate, well, obviates the tedium of setting up static sites away. Created by Kewbish . Written in Vue. Made for Hack the 6ix 2020. Released under GPLv3. About I make static sites quite frequently, and I'm tired of constantly repeating the same steps to mock up a quick blog for a friend or something. So: Obviate was born. (And the name's just from my vocabulary book - I was studying the day before the hackathon). I had very little experience with OAuth and the GitHub API, and learned all about the implicit grants and various authentication methods during this hackathon. I gained a lot of knowledge regarding Javascript fetches, as well as learning about promises and such. I had a lot of issues with GitHub OAuth and Netlify OAuth (which I had to cut), from not being able to make a request for an authorization token, to not exchanging the correct format, to not having adequate documentation, and to literally redoing the same request 100s of times to test. My biggest challenge was forgetting that I had to stringify a request's POST body, and this caused a lot of confusion for me: why did it work in Postwoman but not in my app? I didn't end up being able to implement some of the features that I would have liked to due to that issue with POST bodies (and some other real-life things), so I had to cut the Netlify automation part of my app out, as well as not being able to get my site on Netlify. However, I've also been able to watch a lot of talks and network with a couple people, and I think this experience of getting a project from zero to MVP in about 16 hours of active work was an incredible learning experience. Access Right now, it's only accessible through localhost (haven't found the time to get it on Netlify yet). Run npm run serve and navigate to localhost:8080 . Clone the source, and place a store.js file as a sibling of the main.js file. This'll need to include a GitHub OAuth Client ID and Secret. 
(The application's callback doesn't matter at the moment.) Use Customize your Hugo blog's template, and click 'submit' on the customize page. This'll fork the template and place an action in to replace all the variables. NetlifyCMS is already integrated in the template, but you'll need to make your own OAuth app to verify in the Netlify dashboard. Roadmap In the future, I'd like to: properly integrate Netlify make the Action run on fork (currently blocked) add an account system to keep track of your Obviate sites and perhaps add a small monetization feature (I'd like to see how OAuth works with payment providers, or if I can get PayPal, Stripe, or GitHub Sponsors to work somehow) :eyes: Built With actions github html hugo javascript static-site vue Try it out github.com
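The stringify bug described above is worth spelling out: an HTTP POST body must be serialized text/bytes, not a bare object, which is exactly what `JSON.stringify` does for a JavaScript `fetch`. The same point in Python with only the standard library (the payload values are placeholders, and no request is actually sent here):

```python
import json
import urllib.request

payload = {"client_id": "abc123", "code": "tmp_code"}  # placeholder values

# The body must be bytes, not a dict -- the Python analogue of forgetting
# JSON.stringify() around a fetch() body in JavaScript.
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "https://github.com/login/oauth/access_token",  # GitHub's token endpoint
    data=body,
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)
print(req.data[:20])
```

Tools like Postwoman serialize the body for you behind the scenes, which is why the same request "worked there but not in my app".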
Obviate
Obviate obviates the tedium of setting up static sites away - automation with fine-tuned customization for lightning-fast generation.
['Emilie Ma']
[]
['actions', 'github', 'html', 'hugo', 'javascript', 'static-site', 'vue']
85
9,982
https://devpost.com/software/super-eats
Motivation Everyone has been impacted by Covid-19; some were hit much harder than others. Yet, in this time of instability and uncertainty, one fact of life remains constant. People need to eat, to buy products necessary for daily life. However, even though stores have opened up with new safety measures such as special hours for elderly populations and social distancing guidelines, there are still people who do not have access to stores. For a plethora of reasons, such as a compromised immune system or old age, a subset of the population cannot visit grocery stores in person. To combat this, many people choose to buy their groceries online, but not everyone can use websites and apps adequately. Members of our team have close relatives who are now highly dependent on others to perform this basic task, hence our primary goal is to make grocery shopping accessible for all people. Description Our solution is an automated natural language processing grocery store hotline to break down technical barriers during the pandemic. Users can place orders for groceries with ease, and essential workers can receive and fulfill them without coming into contact with anyone, thus protecting everyone involved and ensuring easy access to essential items. Some features of Super Eats: grocery stores have a specific number that clients can call to place their order; an automated message plays to guide the user through placing their order; and a web app the grocery stores can access, which stores a plethora of information including but not limited to an inventory of items updated after every call, an archive of orders, and a dashboard that includes useful visualizations of data for the grocery store. Azure Challenge How did we choose what services to use? Azure Language Understanding- link Azure’s language understanding service (LUIS) was the foundation of our project.
We knew early on that we wanted to experiment with an emerging technology in the fields of artificial intelligence and machine learning, and LUIS provided a clear pathway for doing so with a low barrier to entry. Ultimately, it was a springboard for our final idea, an automated natural language processing phone line to break down technical barriers during and long after the coronavirus pandemic. LUIS provided us with an intuitive user interface where we could enter example speech utterances and carefully annotate every segment of each one with highly configurable speech entities. After annotating only 10 examples, LUIS was able to break up an order into its fundamental components, which included name, address, and order items. After annotating 30 examples, it could pick up even more complex speech patterns and parse things such as quantity, form, product names, weights, street names, and zip codes. Azure Speech To Text Service- link We originally did not plan to use any Azure services beyond LUIS. However, we quickly discovered that the inbuilt transcription features of our communication API, Twilio, were not up to the task. Addresses were often indecipherable, names were misspelt, and the transcription overall was incoherent. This became a blocking issue, as LUIS depends on proper transcription of phone call audio. We were delighted to discover that Azure offers its own Speech to Text service, which both integrates easily with LUIS and offers much more accurate transcriptions. Although our project is based in node.js and javascript support for speech to text is still limited, we were easily able to navigate a variety of quickstart tutorials for other programming languages in the azure documentation. We ended up implementing our speech to text in python, and it led to such accurate transcriptions that we haven't had to worry about it since.
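As a rough illustration of what those annotated entities extract, an order utterance can be split into quantity/item pairs. This regex toy is entirely ours and nothing like the trained LUIS model (which also handles names, addresses, weights, and zip codes); it only shows the shape of the end result:

```python
import re

# Toy stand-in for entity extraction: pull (quantity, item) pairs out of an
# order utterance. Unit words and the grammar are illustrative only.
def parse_items(utterance):
    pattern = r"(\d+)\s+(?:(?:cans?|loaves|loaf|bags?|cartons?)\s+)?(?:of\s+)?([a-z ]+?)(?=,| and |$)"
    return [(int(q), item.strip()) for q, item in re.findall(pattern, utterance.lower())]

print(parse_items("I'd like 2 cans of soup, 1 loaf of bread and 3 bananas"))
```

The point of using LUIS instead of something like this is exactly that real callers don't follow a fixed grammar; the trained model generalizes from the 30 annotated examples.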
Azure Web Services and Azure static Web Services- An essential part of our application is of course for it to be not only served locally and accessible to a few of us but to be as production-ready as possible. In order to simulate a real startup business, we deployed our application which consists of a client and 2 different servers to Azure. Our servers are currently served in Azure Web Services whereas our client is currently deployed on a static Web app. Azure-hosted MongoDB- A database is the center of all the data in a full-stack application. For our implementation, we needed to provide a way to persist our data without it being lost whenever we are closing or refreshing our server. It was of utmost importance to keep track of our items and orders in one of the most reliable databases on the market and hosted on quality, low downtime servers provided by the Azure team. Azure Maps- Thanks to the growing community of developers who work with azure cloud, we were able to find a seamless node.js integration to work with azure maps through an npm library. These maps added both depth and utility to the user interface and opened up a variety of creative ways to display order information. We used the maps service to create a heatmap of orders on our site dashboard, and a map with a pin marker on each individual order page. We hope to extend functionality further by finding the most efficient routes from the store to a given destination and use this information to calculate delivery price estimates. How Does Super Eats Work? To begin, we used Twilio to acquire a phone number with which we set up an Express server so that every time a call was received, a request was made onto our server. When the request was received, a message played, after which we began recording the user's voice through Twilio. Once these calls have ended, that data is sent to and transcribed by Microsoft Azure’s speech-to-text platform. 
We must then break this data up into its relevant parts; this is where Azure LUIS comes in. Our LUIS model sends us the information the grocery store needs to process the order. The last step in our implementation is sending this information to our database in MongoDB, which our front-end reads from. This database is what is displayed on the web app, where stores can access a user-friendly synopsis of each order. Challenges We Ran Into One of the biggest challenges we faced was speech transcription; we first began with the Twilio platform. Twilio would directly transcribe the audio file to text, which turned out to be very inaccurate. In order to rectify this, we decided to implement Azure’s speech to text. Twilio would still record and download the audio file, but now Microsoft’s speech-to-text platform would transcribe the audio files. Another challenge was figuring out how to deploy; this too was solved using Azure. What's next for Super Eats Beyond the current pandemic, Super Eats can be used as an alternative to the way we shop. Over the years we have seen an increase in the percentage of the population who prefer to shop remotely, thus an accessible remote shopping experience is marketable and effective. We would love to see how we could expand this project further, being able to recognize orders in different languages for example. Built With azure azure-luis azure-maps azure-speech-to-text azure-web-server bootstrap express.js mongodb natural-language-processing node.js python react recharts twilio Try it out happy-moss-039897310.azurestaticapps.net github.com
Super Eats - Easy Grocery Shopping for Everyone!
An automated natural language processing grocery store hotline to break down technical and physical barriers during the Covid-19 pandemic.
['Ridam Loomba', 'Faraz Hussain', 'Alif Munim', 'Alexander Vorobev']
[]
['azure', 'azure-luis', 'azure-maps', 'azure-speech-to-text', 'azure-web-server', 'bootstrap', 'express.js', 'mongodb', 'natural-language-processing', 'node.js', 'python', 'react', 'recharts', 'twilio']
86
9,982
https://devpost.com/software/doc-home-ynp9f5
Inspiration The recent coronavirus has brought the entire world to a standstill. We wish to bring about a small change to this scenario, and that is what inspired us to build this app. What it does Doc@home The spread of the pandemic has caused a lot of changes in our lifestyle: people fearing to go outside their homes, transportation almost shut down, and social distancing becoming all the more important. While the physical impact of COVID-19 can be assessed and treated, the mental impact is often ignored. Many people are experiencing emotional breakdowns and depression. The new reality of a prolonged quarantine period and lack of socialization are adding to the mental struggle. More than 50% of those suffering from mental health issues are not seeking treatment, due to social stigma. Doc@home is a web app built with the intention of helping people cope with the current COVID situation. The app has 3 primary functions: 1. Become an AI companion for those in quarantine The new reality of a prolonged quarantine period, the lack of socialization with family members and friends, and managing the fear of contracting the virus and worry about people close to us who are particularly vulnerable are all challenging for each one of us. This present environment has increased stress and anxiety amongst citizens. Introducing Jessica, your very own AI companion. Jessica can help you overcome your isolation and loneliness. 2. Provide a connecting medium for doctors and patients Sometimes a chatbot might not be enough; you might need to seek professional help. The fear of traveling has hampered the routine checkups of many people. Doc@home helps patients take their routine checkups from the comfort of their homes without the scare of the virus. It helps any patient connect with the doctor of their choice, from the list of available doctors, and book an appointment according to their convenience in either offline or online mode.
The app has 2 portals: one for the patients requiring an appointment, and one for the doctors available for consultation. Patients can enter their medical history, any ongoing medication, etc., which can be viewed by the doctor if he/she confirms their appointment. The index page of the patient and the doctor shows all their upcoming consultations. The doctor is notified if any patients are requesting an appointment, and has the option to either confirm or deny, which causes a corresponding notification to be sent to the patient. The doctor can also send a prescription with detailed feedback to the patient after the consultation, along with the consultation fee to be paid. The patient can view the feedback in the history tab of their portal, which also provides the facility to pay the consultation fee by redirecting them to the Google Pay web page. 3. Create a forum for discussion Another big issue we face during this time is the spread of COVID-related fake news. Claims with no scientific basis, such as that standing in the sun can kill the coronavirus or that your pets can rapidly spread COVID, circulate quickly. It is estimated that over 800 deaths have occurred due to fake WhatsApp messages. Doc@home also tackles this issue. We have created a forum (inspired by the Stack Overflow website) that could help people differentiate fake news from real news. Here people can raise doubts regarding different aspects of COVID and get answers from professionals and other peers. It is a community-driven platform where doctors and ordinary people can meet and help each other out Why Doc@home ⚕️ Easy and affordable medical consultations, from the comfort of your home. 📹 Video call + detailed prescription from verified doctors 🤯 Combats the social stigma surrounding seeking help for mental health issues. 🧘 AI companion for managing daily stress, anxiety, and depression issues. 💊 One-stop solution to keep track of your medical history and remind you to take your pills.
📱 Platform to verify the truth behind viral messages surrounding health and stop the spread of fake news. This is a humble step from our side, hoping to help bring our world back to normalcy while the pandemic spreads. How we built it With sleepless nights and coffee :) Tech stack The front end was built on vanilla JavaScript, jQuery, and Bootstrap. Flask was used as the backend and MySQL as the database Challenges we ran into Configuration trouble, and difficulty retrieving data from the database What's next for Doc@home Improve the chatbot Add Ethereum code to the payments (between the doctor and the user) and also to the forum section; this would provide more transparency and security to the platform Add a geolocation feature, which would allow users to input their location, as well as other details such as their insurance provider, for the app to create a shortlist of the names and locations of mental health providers in the user’s area. This will help users who prefer face-to-face consultations to find help. Built With bootstrap flask html5 javascript jquery sqlalchemy sqlite Try it out github.com
Doc@home
A web app to combat the current coronavirus pandemic
['annu Jolly', 'Anagha Sivadas']
[]
['bootstrap', 'flask', 'html5', 'javascript', 'jquery', 'sqlalchemy', 'sqlite']
87
9,982
https://devpost.com/software/clean-water-detector-app-that-detects-cleanness-of-water
Instruction Screen Home Screen SINCE THE MODEL IS IN ITS 1ST PHASE OF DEVELOPMENT, PLEASE USE A WHITE SURFACE FOR KEEPING THE GLASS/BOTTLE SO THAT THE MODEL CAN PREDICT ACCURATE RESULTS DOWNLOAD SAMPLE IMAGES FROM THIS LINK link DOWNLOAD APP FROM LINK AT BOTTOM Inspiration Dirty water is dangerous. In Africa, more than 315,000 children die every year from diarrhoeal diseases caused by unsafe water and poor sanitation. Globally, deaths from diarrhoea caused by unclean drinking water are estimated at 502,000 each year, most of them young children. Every year 575,000 people die from water-related diseases. This is equivalent to a jumbo jet crashing every hour. Most of these people are children (2.2 million). Unclean water and poor sanitation have claimed more lives over the past 100 years than any other cause. The water crisis claims more lives through disease than any war through guns. 844 million people lack access to safe drinking water. This is more than the combined populations of the United States, Brazil, Japan, Germany, France and Italy. What it does It calculates the cleanliness of the water with the help of the machine learning model that I made. It then shows the results indicating how clean or dirty the water is. The Covid-19 Detector is a complementary feature that I added just to show the power and usefulness of AI. It is currently in beta. How I built it I built it using Flutter; with Flutter I can create both iOS and Android apps at the same time, making the availability vast. At the back end I used TensorFlow Lite, to give my app the capability to use machine learning models offline. The model is made using Teachable Machine, powered by Google Cloud. Challenges I ran into Being a solo developer I ran into many challenges, but I achieved my goals and I am happy to deliver this prototype on time.
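The post-processing behind the result screen boils down to turning two class scores (clean, dirty) into percentages. A minimal stdlib-only Python sketch of that step, with made-up scores; in the real app the scores come from the Teachable Machine model running under TensorFlow Lite inside Flutter:

```python
import math

# Hypothetical post-processing for a two-class (clean, dirty) image model:
# convert raw model scores into the percentages shown on the result screen.
# The input scores here are invented, not produced by the actual model.
def cleanliness_report(clean_score, dirty_score):
    exps = [math.exp(clean_score), math.exp(dirty_score)]  # softmax
    total = sum(exps)
    clean_pct = 100 * exps[0] / total
    return {"clean %": round(clean_pct, 1), "dirty %": round(100 - clean_pct, 1)}

print(cleanliness_report(2.0, 0.5))  # {'clean %': 81.8, 'dirty %': 18.2}
```

Teachable Machine exports may already emit probabilities rather than raw scores, in which case the softmax step is unnecessary.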
Accomplishments that I'm proud of I am really happy to contribute this project of mine to people around the world so that they can have access to clean drinking water What I learned I learned a lot throughout making this app, as it was a really challenging task What's next for Clean Water/Covid-19 Detector App If everything goes well with this app I would really like to release it to the entire population, but before that I will have to make some more minor improvements to it. Built With flutter google-cloud tensorflow Try it out drive.google.com
Clean Water/Covid-19 Detector App : iOS/Android compatible
Powered by a TensorFlow Lite model made using Google Cloud Teachable Machine. Can detect the % of cleanness and dirtiness of water from just an image on your phone (even without Internet)!
['Udipta Koushik Das']
[]
['flutter', 'google-cloud', 'tensorflow']
88
9,982
https://devpost.com/software/kidney-disorder-prediction
The kidneys are a pair of bean-shaped organs on either side of your spine, below your ribs and behind your belly. Each kidney is about 4 or 5 inches long, roughly the size of a large fist. The kidneys' job is to filter your blood. They remove wastes, control the body's fluid balance, and keep the right levels of electrolytes. All of the blood in your body passes through them several times a day. Kidney disorders vary from urinary tract infections and kidney stones to chronic kidney disorders. The aim of this project was to build a web-enabled system to predict whether a patient is suffering from a kidney disorder or not, based on features such as haemoglobin level, pus cells, blood pressure, age, etc. I built this project using machine learning techniques. The task was basically a binary classification problem. I chose the Random Forest classifier for my problem statement and it worked quite well. I used the sklearn library to apply random forest to my dataset. The project is deployed on the Heroku cloud using a Flask API. I faced the challenge of data cleaning: the dataset was initially very messy and required a lot of preprocessing to be model-ready. In building this project I gained a lot of experience working with a medical dataset; along with that, I read a lot about all the features of the dataset, which helped me enhance my knowledge and gain some domain knowledge about medical datasets. Next, I am looking to make an Android app out of this project so that it can be used by a larger mass of people! Built With flask heroku machine-learning python Try it out github.com kidney-disorder-prediction.herokuapp.com
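The core of the approach described above, a random-forest binary classifier over patient features, can be sketched in a few lines of sklearn. The feature rows below are synthetic stand-ins (haemoglobin, blood pressure, age), not values from the real dataset, and the hyperparameters are illustrative:

```python
# Sketch of the random-forest binary classification described above.
# Synthetic data: each row is [haemoglobin (g/dL), blood pressure, age].
from sklearn.ensemble import RandomForestClassifier

X = [
    [15.0, 80, 45],
    [9.5, 120, 60],
    [14.2, 85, 30],
    [8.8, 140, 65],
    [13.9, 78, 50],
    [10.1, 130, 70],
]
y = [0, 1, 0, 1, 0, 1]  # 0 = no kidney disorder, 1 = kidney disorder

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.predict([[9.0, 135, 62]])[0])
```

In the deployed version, a Flask route would receive these feature values from a web form and return the prediction; the real model would of course be trained on the full, cleaned medical dataset.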
Kidney Disorder Prediction
Kidney disorder Predictor
['Kartik Mishra']
[]
['flask', 'heroku', 'machine-learning', 'python']
89
9,982
https://devpost.com/software/tora-70zn3e
Less talk, more action Inspiration In the era of social media, it is easy for an individual to repost a petition and share information on their account. But how much of a difference do these actions make, and are they truly helping a cause, or are they inadvertently spreading misinformation about it? Information about social justice issues on social media lacks credible sources and is scattered. People’s need for quick and efficient information does not grant most users the patience to verify information themselves. This means that when people repost something on social media, there’s a fair chance that the information isn’t correct. In the cases where it is, the posts gain traction for a short period of time before the movement is forgotten from people’s minds again. Our team drew inspiration from the disconnect between users’ actions on social media and the impact of these actions. We sought to create an app that maximized users’ actions in a quick and efficient way, while ensuring users are properly educated about movements before they undertake any helpful tasks to further a cause. This ultimately led to the birth of Tora. What it does Tora takes a task-based approach to give users a clear idea of ways they can support movements and maximize the impact of their actions. When users sign up for Tora, they are able to pick and follow social justice movements that they are interested in. Tora provides users with a task list and the duration associated with those tasks for each movement. From there, Tora provides users with a roadmap that helps them become well informed on the movement and take actions that are both responsible and impactful. We believe the best way to become informed about social justice issues is through research and discussion. The admins of a movement are able to share articles and other sources of information on Tora with the participants to discuss.
These articles will be fact-checked first via algorithms to ensure their sources are credible, allowing users a fair foundation to make their decisions. In future development, we plan to introduce a forum page where users can discuss and share their opinions about articles and other sources of information. By participating in these discussions, users can sharpen their judgement and gain additional insight into the movement. After they’re caught up on the movement, users are guided through different tasks to support it, such as signing petitions and contacting relevant officials. Each task rewards a different amount of energy points, which users can accumulate to reach different levels of participation within the app (e.g. Novice Activist, Intermediate Activist and more). How we built it After conducting extensive market and user research, we developed wireframes in Figma to map out the UI design for our application. From there, we used Android Studio and GitHub to handle the technical aspects of the app. The main programming languages we used were Java and XML. Challenges we ran into When we first started off in Android Studio, our team struggled to figure out the right tools and view groups that would be appropriate for Tora. This was a trial-and-error process that we were able to overcome with time and research. Our project UI was very tedious to create, and as the project progressed we learned how to make our UI match our stunning design. Some problems we faced were fixing the view display so it displayed properly on our devices. In the backend, we had some trouble setting up the interactions between the activities and sharing user information between activities. Accomplishments that we're proud of We were able to pull through and complete this project while learning so much along the way! We tried out a development platform that was completely new to all of us, and we still pulled through and were able to construct it.
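The energy-point levelling mentioned above amounts to a simple threshold lookup. A toy sketch (only "Novice Activist" and "Intermediate Activist" come from the description; the third title and all thresholds are invented, and the real app is written in Java):

```python
# Hypothetical energy-point thresholds; only the first two titles appear
# in the app's description, the rest is illustrative.
LEVELS = [
    (0, "Novice Activist"),
    (100, "Intermediate Activist"),
    (300, "Advanced Activist"),
]

def level_for(points):
    """Return the highest level whose threshold the user has reached."""
    title = LEVELS[0][1]
    for threshold, name in LEVELS:
        if points >= threshold:
            title = name
    return title

print(level_for(150))  # Intermediate Activist
```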
What we learned We learned many things during this hackathon, from the design aspects to the technical aspects! One thing we learned was that designers need to take technical constraints into consideration while designing an app. Especially given limited time, overly complex designs can take a lot of time to rework or implement. On the backend, we learned how to set up an Android app, link it to a Firebase Realtime Database, and implement the authentication feature. What's next for Tora Due to our lack of experience, this project was definitely on the simpler side; however, our team is interested in continuing the development of Tora after the hackathon. For the purpose of the hackathon, our team focused on developing essential elements of an average participant’s user interface. After the hackathon, we hope to develop additional pages for participants, such as a forum page, and add more features to existing pages. In addition, we hope to create/modify our interface to accommodate admins for Tora, which would allow reputable activists to set out tasks and resources for participants within the app. Security is also a big next step. Activism right now can be dangerous (even in Canada), where organizers have been known to be harassed for setting up movements and other campaigns. We hope to add strong security aspects to this app to provide users with a safe and proactive community through our forums. Built With android figma java studio xml Try it out github.com
Tora
Tora is a task-based app that aims to keep users up to date on social justice movements in their area through education and providing clear direction as to what needs to be done to further a movement.
['charlotte zhang', 'Aava Sapkota', 'Hanatk Kidwai', 'Ryan Fernandes']
[]
['android', 'figma', 'java', 'studio', 'xml']
90
9,986
https://devpost.com/software/covid-room-designer
Installation Guide Front Page Room Designer Designer Catalog 3D View Monte Carlo Simulation Configuration Monte Carlo Simulation Results Tutorial Page Inspiration We were inspired by the sudden and wide-scale reopening of companies, stores, and other entities that is beginning to happen. We decided that it would be useful for employers or store managers to be able to visualize their environment and predict how the environment will affect the spread of COVID-19. We had done some math modeling before, but we set out to learn how to do Monte Carlo simulations. We felt that it would be both challenging and rewarding to learn this. What it does The application takes in user-defined inputs, which include a simple user-drawn layout of the office/rooms that will be simulated, and some constants. The application then uses a Monte Carlo simulation technique to simulate people moving around in the defined layout. This simulation models the spread of COVID-19 in the environment. By modifying some variables, one can find precautions that can help lower the possibility of infection. How we built it We started by splitting into two teams, one which handled the GUI for the application, and one that focused on the Monte Carlo simulation. Monte Carlo simulations are a set of methodologies that simulate a situation many times to generate results. Our implementation simulates a user-given space and some people inside it. We then simulate the spread of COVID-19 through this environment. The Monte Carlo simulation was built around two major classes, Person and Square, which store a simulated person's data and environmental data, respectively. We then created the simulation by creating a Population of Persons and a Grid of Squares. Both the Person and Square classes have a function called tick(), which is called on each tick of the Monte Carlo simulation.
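The Person/Square structure with per-tick updates can be sketched, purely illustratively, as follows. The real simulation runs in Node.js with parameters drawn from scientific papers; the movement rule, contamination decay, and infection probability here are invented placeholders:

```python
import random

# Toy sketch of the tick()-based Monte Carlo structure described above.
class Square:
    def __init__(self):
        self.contamination = 0.0
    def tick(self):
        self.contamination *= 0.9  # viral load in this cell decays each tick

class Person:
    def __init__(self, x, y, infected=False):
        self.x, self.y, self.infected = x, y, infected
    def tick(self, grid, p_infect, rng):
        # Random walk within the grid bounds.
        self.x = max(0, min(len(grid) - 1, self.x + rng.choice((-1, 0, 1))))
        self.y = max(0, min(len(grid[0]) - 1, self.y + rng.choice((-1, 0, 1))))
        sq = grid[self.x][self.y]
        if self.infected:
            sq.contamination = 1.0                      # contaminate the cell
        elif rng.random() < p_infect * sq.contamination:
            self.infected = True                        # pick up the infection

def simulate(n_people=20, size=10, ticks=100, p_infect=0.3, seed=0):
    rng = random.Random(seed)
    grid = [[Square() for _ in range(size)] for _ in range(size)]
    people = [Person(rng.randrange(size), rng.randrange(size), infected=(i == 0))
              for i in range(n_people)]
    for _ in range(ticks):
        for row in grid:
            for sq in row:
                sq.tick()
        for p in people:
            p.tick(grid, p_infect, rng)
    return sum(p.infected for p in people)

print(simulate())
```

Running many such simulations with different seeds and averaging the infected counts is what makes this a Monte Carlo estimate rather than a single random run.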
The driver function that performs the simulation loads in configuration data and runs the simulation by calling the tick() functions. The GUI is written in Electron, a cross-platform system that allowed us to write desktop apps with HTML/CSS/JS. We used the React.js framework to allow us to reuse components and handle the complex component states of various elements in the designer. We also extensively modified existing room-planning software to fit our needs, including working with the Electron sandbox. This room planner passes the room data via Inter-Process Communication (IPC) to a worker pool, which performs the simulation and returns the computed values, including exposure probabilities and how much of the time social distancing would be broken with the given layout. Challenges we ran into There were a few challenges with creating the Monte Carlo simulation. There are many variables to consider when simulating viral spread. Many of these variables are not known, due to the relative novelty of the COVID-19 virus. However, we found some data from various scientific papers that allowed us to set sane defaults for the simulation. Furthermore, since Node.js isn't designed for computationally intensive programs, we had to implement worker-pool and multithreading libraries to provide the power needed to perform the simulation. There were also a few issues with GUI development. Since we were new to React.js and Electron, we had a steep learning curve in learning the lifecycle concept of React components and how their states would interact with our application. Furthermore, the sandboxing of the compute and render processes, which is at the core of Electron, made it hard for us to transfer data to and from the Room Designer and the Monte Carlo process. Accomplishments that we're proud of We're proud that we were able to learn how to develop and then implement a Monte Carlo simulation during this hackathon.
We're also proud that the tool requires no programming or mathematical knowledge, so that as many people as possible can use it. What we learned We learned how to use React.js and Electron to write a user-friendly desktop application. We also learned how to multithread applications in JS environments to perform computationally intensive tasks in user-friendly ways, while maintaining the level of performance expected in traditional scientific research environments. What's next for Notus We would first like to expand the functionality to allow the application to find the room layout with the least potential spread of COVID-19. We would like to take this to various businesses so that they can reopen safely, helping keep our communities safe while aiding the economy. Built With electron node.js opengl react Try it out github.com snapcraft.io github.com
Notus
Modeling the spread of COVID-19 to create various safer room layouts.
['Dev Singh', 'Arthur Lu', 'Brian Lu', 'jacob levine']
['Best Future Impact']
['electron', 'node.js', 'opengl', 'react']
0
9,986
https://devpost.com/software/modulus-7i30cv
Inspiration In light of the recent COVID-19 crisis, we’ve seen staggering demand for online courses as students grapple with a reality in which education is now delivered over the internet. But traditional e-learning platforms like Khan Academy struggle to keep up with the pace of demand, while LMS platforms like Canvas, which require teachers to sign up as part of large, wealthy organizations such as school districts, are difficult to use and lock out small independent teachers who just want to continue teaching. And on top of all that, each platform relies solely on one medium of teaching, such as Udemy through videos and Edmodo through text, without regard for user learning preferences. What is Modulus? Modulus is an online education platform, similar in concept to Canvas or Blackboard, both of which are used by schools and universities around the nation. But unlike existing platforms, Modulus directly integrates the VARK learning styles - a psychological framework for teaching - into an incredibly simple-to-use, modular course structure that anyone can use to teach anything. The result is a fairer, more accessible, and more equitable online education for everyone. Modulus Features Modulus includes VARK profiles, which are charts that display the proportions of different learning styles for a course or a user. Across the entire user interface, the colors and learning styles used in the profiles are consistent, which means you can tailor your education to your learning preferences. Fast, responsive, and intuitive, with no bloatware, unlike other LMS solutions that disadvantage those with poor hardware, slow internet connections, and little tech-savviness. Peer-to-peer: our platform lets anyone create, upload, and share courses, with the idea that we can recreate the Montessori model of learning in a digital environment. How is Modulus used?
Modulus is used to create a digital classroom online, where teachers can post courses, assignments, lectures, and tests to share with students anyplace, anytime. Our goal is to recreate the best parts of modern educational methods, from VARK learning models to Montessori peer-to-peer instruction, in an online environment, so that as a society we can continue to make progress in the field of education, even from home during quarantine. How I built it We used React to develop the front end for the web application, while integrating with the Google Firebase service for backend database operations. For the landing page, we used Bootstrap, and React for the web app educational platform itself. Challenges I ran into This was the first time that our team used Firebase Google Cloud services for user authentication and data storage, so it was difficult to integrate that into our web app, which is written in React, a web framework we had learned for our first hackathon only two weeks ago. We thus encountered lots of issues merging these new technologies together and deploying them successfully on Heroku. Accomplishments that we’re proud of Despite having just learned Firebase, and only having two weeks of experience with React and Bootstrap, we managed to build the following: A fully functional web platform, with an intuitive and extremely fast design. Full integration with a cloud-hosted database backend that tracks course enrollment for our individual users. Automated emailing for password recovery. Integrated course creation in the platform. Anti-bot services like Recaptcha. What's next for Modulus Our team hosts a tutoring service for middle school and high school students who either want to catch up or get ahead during this difficult time, so we plan on using this platform ourselves to promote education for all.
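The VARK profile charts described earlier amount to computing, for a course or user, the share of content falling in each of the four styles. A toy Python illustration of that calculation (the tagging scheme and function are assumptions for illustration; the actual platform is built in React with Firebase):

```python
from collections import Counter

# Hypothetical VARK profile computation: given the learning-style tags of a
# course's modules (V = Visual, A = Aural, R = Read/write, K = Kinesthetic),
# return the percentage of modules in each style.
def vark_profile(module_styles):
    counts = Counter(module_styles)
    total = sum(counts.values())  # assumes at least one tagged module
    return {s: round(100 * counts.get(s, 0) / total) for s in "VARK"}

print(vark_profile(["V", "V", "A", "R", "K", "K", "K", "V"]))
```

A per-user profile could be built the same way from the tags of the modules a student actually completes, which is what would let the interface tailor course suggestions to learning preferences.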
Who we are High School Juniors from Seven Lakes High School, in Houston, Texas Daniel Wei - danielwei15#3016 Ryan Ma - GoblinRum#8553 Haoli Yin - Nano#4890 Built With bootstrap cmd css3 express.js firebase google heroku html5 javascript node.js npm react recaptcha research Try it out modulusplatform.site github.com
Modulus
An online education platform that directly integrates VARK learning styles for efficient online learning
['Haoli Yin', 'Daniel Wei', 'Ryan Ma', 'Mohamed Hany']
['Best Educational Impact']
['bootstrap', 'cmd', 'css3', 'express.js', 'firebase', 'google', 'heroku', 'html5', 'javascript', 'node.js', 'npm', 'react', 'recaptcha', 'research']
1
9,986
https://devpost.com/software/germ-blocker
Thumbnail Pulley System The circuit The box Inspiration My family gets takeout a lot. At our popular local pizza parlor, during the COVID-19 lockdown, employees typically stand outside rain or shine delivering takeout food to your car. The problem is that this isn't truly contact-free pick-up of food. We are still interacting with people who could be infected without knowing it, and with surfaces they have touched, potentially increasing the spread of the virus. What it does Instead of handing over the food/drinks for restaurant takeout, the people in the restaurant place the food/drinks in a box, then visit a link with a password (like restaurant.com/pass1), making the box closed and secure, with ultraviolet light killing germs on the takeout containers. Then they send another password (like pass2) to a customer when they order takeout. When the customer comes, they go to the box and visit another link (like restaurant.com/pass2), making the box open again. Then they can pick up the food and go on their way. This process significantly reduces the risk of transmission in takeout and reduces the number of people who need to be actively managing the pickup process, allowing restaurants to better allocate employees. How I built it I used an Arduino MKR1010, a step motor, a box, LEDs, a breadboard, and a few other miscellaneous parts to build it. I programmed it using C++. The Arduino creates a web server, and when the user makes a GET request to the server by visiting a link, the box opens or closes using the step motor and a pulley system with threads. Challenges I ran into I was limited to the supplies that I had in my house, so I created a pulley system powered by a step motor to open and close the door. The threads often got tangled up when the motor was pulling them because I was only using one motor and it was opening and closing a big door. For a better prototype, I would create automated hinges to open and close the door.
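The link-with-password control flow just described is, at heart, a lookup from URL path to door action. A stdlib-only Python sketch of that dispatch logic (the real device implements it in C++ as an Arduino web server, and the paths/passwords here are the illustrative ones from the description):

```python
# Each secret path maps to the door state it should produce.
PASSWORDS = {"/pass1": "closed", "/pass2": "open"}

def handle_request(path, current_state):
    """Return (new_door_state, http_status) for a GET request to `path`."""
    target = PASSWORDS.get(path)
    if target is None:
        return current_state, 404  # unknown link: door state unchanged
    return target, 200

print(handle_request("/pass2", "closed"))  # ('open', 200)
```

On the Arduino, the matching step would be followed by driving the step motor until the pulley reaches the requested position.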
Accomplishments that I'm proud of I was able to open and close a physical box using the internet. What I learned I now know how to use a step motor with Arduino and create a web server with my Arduino. What's next for Quarantine Pickup I'd like to increase the quality of my prototype using better materials and redesign the open/close mechanism. I can also make a more secure and user-friendly website for customers to interact with and for restaurant employees to administer. I believe that, as well as reducing contact during pickup, this also has the potential to increase efficiency for a restaurant after lockdown, since fewer people will need to manage the pick-up process. Built With arduino c++ iot Try it out github.com
Quarantine Pickup
It uses an internet-operated lockbox containing a UV light to drastically reduce the risk of transmission of germs and viruses during restaurant takeout pickup.
['ram potham']
['Honorable Mention', 'Best Community Impact']
['arduino', 'c++', 'iot']
2
9,986
https://devpost.com/software/latexdocumenteditor
LaTeXDocumentEditor For many students and teachers in today's age of online learning, it is very difficult to prepare organized notes in an easy and efficient fashion. EzLaTeX brings the power of LaTeX, the popular academic document-processing language, with a plethora of convenient features and no prior experience required. Simply create an EzLaTeX project and select 'insert', and several options become available. You can add equations to your document by typing them in natural language or by uploading a handwritten picture. EzLaTeX will interpret it either way and create LaTeX code to represent it. Moreover, for any chemistry students, EzLaTeX allows you to insert a molecular structure image simply by inputting a molecule name. The image-to-equation feature supports chemical equations as well. To organize your information, you can delineate everything into clean sections and subsections. At the end, you'll get a nicely formatted PDF of your project. Built With python tex Try it out github.com
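The natural-language equation entry can be illustrated with a toy rule-based converter. This is only a sketch of the idea; the patterns below are invented and the real interpreter is far more capable:

```python
import re

# A few illustrative natural-language-to-LaTeX rewrite rules,
# applied in order. Not the actual EzLaTeX parser.
REPLACEMENTS = [
    (r"\bintegral of\b", r"\\int"),
    (r"\bsquare root of (\w+)", r"\\sqrt{\1}"),
    (r"(\w+) over (\w+)", r"\\frac{\1}{\2}"),
    (r"\balpha\b", r"\\alpha"),
]

def to_latex(text):
    for pattern, repl in REPLACEMENTS:
        text = re.sub(pattern, repl, text)
    return text

print(to_latex("a over b"))  # prints \frac{a}{b}
```

A production version would parse into a syntax tree rather than rewrite strings, which is what makes nested expressions (fractions inside roots, and so on) tractable.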
EzLaTeX
EzLaTeX offers the ability to create formatted LaTeX files with equations and molecular structures with natural language processing or an image of handwritten text.
['Nikhil Sreedhar', 'Girish Hari', 'Sohil Kollipara', 'Arnav Pangasa']
[]
['python', 'tex']
3
9,986
https://devpost.com/software/3d-health-hack
This is a photo of my mother, Dr. Ibtissam; I am very proud of her. She is my secret power. Doctors using our face shield production face shields 3D printed face shield mass production help of Lebanon response team main coronavirus center in Lebanon Hospital Rafic Hariri accepted the design of the face shield and needs large quantities Superokk logo some web photo Some web photo We have better quality videos in the YouTube channel https://www.youtube.com/watch?v=QcwDAETHzvg What are the main solutions proposed? 1. Self-employment job opportunities (during and after the COVID-19 period). 2. Providing hospitals with personal protective equipment (especially in countries with a bad economic situation). Story: The progress is divided into phases; we are now at phase 4. Phase 1: After countries locked down due to COVID-19, shipping was not an option. Solutions had to be cheap and manufactured in the country. We used 3D printing to solve this issue. (You may think that 3D printing is slow and not efficient for mass production, but we invented a 3D model that reduces the 3D printing time of the personal protective equipment face shield from 6 hours to ~2 min.) Phase 2: After that, our country Lebanon was not able to afford to buy face shields and ventilators along with other medical equipment. Here a team of professional engineers was formed (Lebanon response team), and we proudly implemented most of the projects, including the face shield, using 3D technology. We were able to supply the main coronavirus center in Lebanon with the needed equipment. It was all done remotely, and here is the key to phase 3. Phase 3: After that, a separate team was formed (France, Holland, Lebanon) to participate in a hackathon sponsored by the Friedrich Naumann Foundation. The idea now is how to transmit the same success to other countries, remotely. And we did it: we won. We were able to get sponsored.
We are building a platform (website) that links 3D printer owners to people who need 3D-printed things. With such a website we will be able to: 1. Provide self-employment job opportunities during and after the COVID-19 period. 2. Help countries that are facing a bad economic situation to produce 3D-printable personal protective equipment without shipping raw material! 3. Create a worldwide community that is interested in 3D technology. The estimated time for the website to be done is ~4 weeks from today. Meanwhile we should proceed to phase 4. Phase 4: Currently we are here - Marketing strategy - Social media: YouTube channel: We realized after studying the market that we need the process to be very interactive to succeed. Since we are more into the "pull" business-model concept than the "push" concept, we believe that building an educational platform on YouTube will generate leads for the mechanism sustainably. So the main idea now is to provide high-value content on the YouTube platform to generate leads and build a brand. Techniques to be used (after studying the way YouTube promotes its videos, based on views and how interactive they are): 1. Animation: raises the efficiency (click-through rate). 2. Interactive videos: to maximize the interactive part we will include the new technology of interactive video. 3. Business startups: we will provide users with methods for effective uses of the program. 4. Multilingual: we will upload in several languages so we can spread the message to the maximum. 5. Daily uploads: YouTube promotes videos of users who upload more (high-quality content). Main points How will the website help the medical field? The website will locate institutions and people who need 3D-printed solutions (for example, face shields). What differentiates us? • The marketing strategy that we developed this weekend. • The sustainability of the business model.
• Community: we are joining people who have the same goals together on this platform. One of our goals is to help the medical field: even a doctor who needs a 3D model to treat a patient's teeth. Team and how we started, briefly: Clearly, results started to appear; hospitals showed a need and we were able to provide them with what they needed. So a team of experts was formed to make this solution achievable in other countries. Team: Ezzedin Ayoubi, Senior Lead in development at KLM, DEV Team mentor. Hassan Hallal, Team Leader at ASSYSTEM E&I, Business, Management (mentor in business management). Ali Hussein: mechanical engineering student, founder of Superokk. 3D printer owners: we have a big database of people who are willing to start supplying hospitals and even individuals. Anyone skilled in anything who thinks they can be an added value, kindly don't hesitate to contact us. Accomplishments that we're proud of: We are proud that we were able to provide the main COVID-19 center in Lebanon with the face shields it needed last week, and that we are able to make some profit to make it sustainable. Won several competitions. Ranked Top 15 in a worldwide hackathon of IEEE. What we learned: How to turn a weakness into a point of strength. Group work using social distancing. Tech to use in execution: In labs, doing experiments for compatibility if someone wants to add a new product (like a sterilization test). The majority of it will be spent on software that will make the process easier. We list: Upgrade an animation software called **** (I have it, but we need to upgrade to the enterprise level and we need the color versions). A powerful animation app called **** that will give us the ability to make characters move (and to tell stories, making the content extremely easy to understand). This software is extremely powerful (specifically for us). 3D printing means customization, and this gives us the ability to customize any video for all our viewers!!
_will help translate ,** text to speech ,speech to text , join every slide to the voice inserted **without the suffer and the time loss of meshing _intro videos and finalizing in high professional videos in matter of minutes (first to do that it was taking days !!! ) we want to make a difference in the digital age. The Digital Health is extremely important at this time. _It will simplify and accelerate the processes. _No shipping requirements and fits specially in countries in bad economic situation. _Higher efficiency ,Lower time _ More job opportunities never forgot: dream big you can fly environmentally friendly 3d printing materil we are using are almost 100% (disintegratable,and recycable). (good for nature).In addition that we have other backup plans. Mentors and sponsors thank you for the great effort you are putting without these competitions we where not able to progress so far. Built With angular.js firebase java www.superokk.com Try it out www.superokk.com drive.google.com drive.google.com drive.google.com drive.google.com
superokk Health
A website that links 3D printer owners with people who need 3D prints
[]
[]
['angular.js', 'firebase', 'java', 'www.superokk.com']
4
9,986
https://devpost.com/software/twentyfour-helping-community-mental-health-the-homeless
Inspiration What it does How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for TwentyFour (Helping Community Mental Health + the Homeless) Built With android-studio circleci java kotlin Try it out github.com
TwentyFour (Helping Community Mental Health + the Homeless)
An app helping our community reduce its mental stress and boredom through fun activities with attractive rewards for themselves and the homeless.
['Eisha Peyyeti']
[]
['android-studio', 'circleci', 'java', 'kotlin']
5
9,986
https://devpost.com/software/cross-border-driver-monitoring-using-interactive-chatbot
Inspiration The COVID-19 outbreak and the resulting social distancing recommendations and related restrictions have led to numerous short-term changes in economic and social activity around the world. According to the International Labour Organization, as reported on the UN news site, the risks of food insecurity are now emerging because of containment measures, including border closures implemented by governments. While it is important for governments to ensure a reduction in the risks of inter-border infections, it is equally critical to ensure that the food supply chain is not disrupted. We have come up with a solution that implements cross-border driver monitoring to address the issue of food supply-chain disruption. The success of our solution is, however, dependent on collaboration between governments at all levels and the private sector as part of a multi-pronged approach to fight COVID-19. Ensuring that food and other important materials can move across state borders safely would require public health departments and the transport unions to: • Test, register, and approve all inter-state truck drivers and their staff for each trip. This will require rapid testing methods for COVID-19. • Conduct periodic tests on all interstate drivers and update the records database. • Educate all drivers on COVID-19 and how to conduct themselves while in transit. • Use our software to monitor interstate drivers at state borders to verify from the digital travel records whether the drivers have been approved for the ongoing trip. Cross-border driver monitoring helps governments reduce transmission and ensure the food supply chain is not disrupted. This will mitigate the collateral impacts on children, women, and vulnerable populations. That’s why we recommend that adopting and deploying a cross-border driver monitoring system is a key milestone toward reopening economies.
What it does Government workers shall need to test and certify all drivers and their staff who are part of the food supply chain and who embark on interstate travel. The government workers shall be required to update the details of the truck and the identity of the driver in a central database. The government workers shall need to train the drivers and their support staff on self-isolation techniques while in transit. At all state borders, state health inspectors shall ensure that all trucks carrying essential goods, and their occupants, are in the database. If they are not, passage shall be denied. State inspectors shall update entry logs for all trucks that have successfully completed their trips. How we built it We used Python (with the Flask framework) for developing the APIs; data was stored in a Redis data server. We deployed the API on Google App Engine. We used Dialogflow, a natural language understanding platform, to design and integrate a conversational user interface into the bot. Challenges we ran into We realised that internet access may be a challenge in some locations in the country; therefore, the application should also be accessible via USSD technology. Accomplishments that we're proud of We realised that, with our passion for programming, we are able to provide solutions that can mitigate food insecurity and national poverty. What we learned We learnt that the successful deployment of technology requires interagency collaboration. What's next for Cross-Border Driver Monitoring Using Interactive Chatbot Develop an interactive interface using USSD. Built With app-engine flask google-dialogflow python redis telegram Try it out telegram.me github.com
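The register-then-verify flow described above can be sketched in a few lines. The real build exposes this through Flask APIs backed by Redis; in the sketch below a plain dict stands in for Redis, and every name (`register_driver`, `verify_at_border`, the record fields, the 14-day test window) is an illustrative assumption, not the project's actual API:

```python
# Sketch of the driver-verification flow: a health worker registers an
# approved driver + truck; a border inspector later verifies the digital
# travel record and updates the entry log. A dict stands in for Redis.
from datetime import date

DRIVER_DB = {}  # keyed by driver ID; in the real system this is Redis

def register_driver(driver_id, truck_plate, test_result, test_date):
    """Health worker records a tested, approved driver for a trip."""
    DRIVER_DB[driver_id] = {
        "truck": truck_plate,
        "test_result": test_result,  # e.g. "negative"
        "test_date": test_date,
        "entry_log": [],
    }

def verify_at_border(driver_id, truck_plate, max_age_days=14, today=None):
    """Deny passage if the driver is absent from the database, the truck
    does not match, or the test result is positive or stale."""
    rec = DRIVER_DB.get(driver_id)
    if rec is None or rec["truck"] != truck_plate:
        return "DENY"
    today = today or date.today()
    stale = (today - rec["test_date"]).days > max_age_days
    if rec["test_result"] != "negative" or stale:
        return "DENY"
    rec["entry_log"].append(str(today))  # inspector updates the entry log
    return "ALLOW"
```

A driver registered with a recent negative test is allowed through; an unknown driver, or one whose test has expired, is denied.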
Cross-Border Driver Monitoring Using Interactive Chatbot
Ensuring Food Security in the Covid 19 Pandemic era
['Iorwuese Wisdom Mzer', 'Samuel Terungwa Mzer']
[]
['app-engine', 'flask', 'google-dialogflow', 'python', 'redis', 'telegram']
6
9,986
https://devpost.com/software/no-food-waste-ifvjcz
Inspiration: I was inspired to do this project one day when I was at a grocery store with my dad. I was heading to pick up some yogurt cups when I found out the ones I had picked were over a week expired. This got me thinking: how much food do supermarkets waste? After some research, I found out that supermarkets waste about 43 billion pounds of food, and that's just in the United States. For this reason, I wanted to build an app that was able to alert supermarkets about near-expiry food, allowing them to donate it to food banks or needy people. Also, I wanted to build the app so that households could prioritize items that expire soon and consume them rather than letting them expire and go to waste. Additionally, I made a website so that local food banks and supermarkets could communicate (by submitting forms) and be paired to ease the process of food donations. One of the problems that many people face during this pandemic is not having food or not being able to afford it. The supplies food banks have are getting exhausted with the increased number of people who need food, and food banks desperately need more food to give to people. I believe that with this app, not only will we be able to cut down on food wastage, we will also be able to provide this food to those who need it the most. What it does: Supermarkets: When people shelve items, they can scan the item's barcode with my app, enter the item's expiry date, and the quantity of the same item. With this, the app stores all the items in a product list, where it keeps the UPC (barcode), date of expiry, the product's name (which it retrieves from the barcode), the product's quantity, and the product's image. Then, the supermarket managers can run a query on the product list, entering a date, and it will return all of the products that expire either on or before that date.
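The expiry query described above boils down to filtering the product list by date. The app itself is a no-code build around a barcode API, so the Python below, and its field names and sample items, are only an illustration of the logic:

```python
# Sketch of the expiry report: each scanned item stores its UPC, name,
# expiry date and quantity; a report returns everything expiring on or
# before a chosen cutoff date, soonest first.
from datetime import date

product_list = [
    {"upc": "036000291452", "name": "Yogurt", "expiry": date(2020, 6, 1),  "qty": 12},
    {"upc": "012000161155", "name": "Bread",  "expiry": date(2020, 5, 28), "qty": 4},
    {"upc": "041220576463", "name": "Cereal", "expiry": date(2020, 9, 15), "qty": 7},
]

def expiry_report(products, cutoff):
    """Items expiring on or before `cutoff`, soonest first, so they can
    be prioritised for consumption or donation."""
    hits = [p for p in products if p["expiry"] <= cutoff]
    return sorted(hits, key=lambda p: p["expiry"])
```

Running the report for 2020-06-01 on the sample list returns the bread and the yogurt but not the cereal.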
Supermarkets would be okay with donating these foods, as consumers would not want to buy near-expired products, and it is better for them to be consumed by the needy and hungry rather than letting them expire and end up in the landfill. Also, if many supermarkets use this app, the government could recognize their efforts and give tax breaks as well. Finally, all supermarkets using this app will gain a lot of publicity for their work to reduce food waste, and will also be featured on my website. Households: In America, households waste approximately 31.9% of the food they obtain. By using the app, households will be able to scan all of the items that enter their house and run expiry reports in order to learn what products are expiring soon. With this knowledge, they can prioritize these foods for consumption, cutting down on their food wastage and saving them money in the process, as they will buy less food from supermarkets. The less food that they buy from supermarkets, the more food that can be donated to those who need it the most. How I built it: The app uses a barcode API, from which it can obtain the product's name and image. The website was made using Wix and helps food banks and supermarkets connect to start the food donation process. Challenges I ran into: The main challenge I ran into was querying the product list, as it required a lot of logical thinking and trial and error. Accomplishments that I'm proud of: Some accomplishments I am proud of are creating an app where I learned and used a lot of new functions I had never tried before, and integrating an API into my app. Also, I had never built a website using Wix before, and I was very happy to see how it came out. What I learned: I learned how to create advanced apps and also how to create websites (with the help of Wix). What's next for No Food Waste: Through funding, I can purchase the premium version of the barcode API, as right now, it only allows 100 searches a day per device.
Also, I can create a more user-friendly platform with better UI and integrate more query features, but time constraints did not allow me to do so. I can definitely make the website more professional (with funding I can upgrade my account and buy my own domain) and can also make this app available on iOS devices. What have I done during this hackathon: I have had the idea for this project for some time, and recently created an app, but this app was completely not functional and had a terrible UI. Also, it didn't have many of the components I planned for it to have. For this reason, I created a whole new app and a website to go along with in order to implement all of the functions I desired my solution to have. Built With barcode-scanner upcitemdb wix Try it out nofoodwaste.wixsite.com
No Food Waste
My idea is to provide an app and website with which supermarkets and households can use in order to eliminate food waste by consuming near expiry items or donating them to food banks or needy people.
['Sarang Goel']
[]
['barcode-scanner', 'upcitemdb', 'wix']
7
9,986
https://devpost.com/software/resumatch-co6bua
Our video forgot to include this, but clicking on the Job Title shows the LinkedIn page. This is the architecture of our project. Another logo design. They're all so beautiful! And another one! (Note: Please look at the first image we've uploaded alongside the video for a feature we missed in the demo.) Inspiration Although the new cases of COVID-19 are terrible, our team noticed that the swathes of unemployed persons in the United States affected by quarantine parallel the Great Depression of the 1930s, and the damage might take years to fix. The people affected by job loss aren't usually white-collar workers: fast food employees, cashiers, and mall workers are being put out of their jobs by the millions. However, the one shining beacon is that these workers are highly adaptable and can fit many job descriptions! We realized that the best way to find new, compatible jobs for these workers is by analyzing soft skills in their resume that they've gained through experience. What it does Our application allows users to simply drag-n-drop their .pdf resume onto our site. From there, our NLP model will tag its soft skills, search for jobs that use those skills in a massive dataset of jobs and Google Jobs, and then recommend those jobs to those employees. How we built it The following summarizes the steps taken to provide smart job recommendations to applicants based on their uploaded resume: When the resume is uploaded onto our website as a PDF, we extract relevant information from the resume, such as the applicant’s skills and experience, and parse it as text. We trained a Bidirectional Long Short Term Memory Network (BiLSTM) from scratch to categorize job descriptions into job titles like “business analyst” and “accountant”. With this model, we then predict the top 5 job titles that the skills and experiences listed in the resume are likely to fall under.
This helps reduce the number of requests we have to make to the LinkedIn API, while still broadening the options of the applicant and not restricting them to a single job title. We query our MongoDB to see if we have stored the job listings for each of the 5 job titles in our database. If not, we make a request to the LinkedIn API to get job postings that are relevant to the top 5 job titles found. With the job postings, we use a state-of-the-art Natural Language Processing model, the Universal Sentence Encoder, to encode the job descriptions into high dimensional vectors and capture the nuances behind each word in the description. We also encode the resume information into high dimensional vectors and use cosine similarity to measure the similarity of the applicant’s resume with the job description. We return the top 10 most similar job postings that match the applicant’s skill sets. Impact of our Project We hope our project can help ease people back into jobs following the COVID-19 pandemic by providing them with more avenues to look for jobs. We hope this will simplify the job seeking process for them and open them up to more opportunities, both in terms of finding new suitable job titles and also finding more relevant job postings that are suited to their individual experiences and soft skills through the power of machine learning. Challenges we ran into Finding a suitable dataset -- datasets containing job postings and resumes are difficult to find as they are not commonly used in machine learning. Furthermore, the dataset we found was noisy so we had to do data preprocessing to clean it. Accomplishments that we're proud of What we lacked on the frontend, we made up heavily on the backend with not one but two machine learning models! We were surprised to see that our BiLSTM model worked really well despite the limited training data, achieving over 90% accuracy on the test set. 
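The matching step above (encode, then rank by cosine similarity) can be sketched as follows. The real pipeline uses Universal Sentence Encoder vectors for the resume and job descriptions; the tiny three-dimensional vectors and job titles below are made up purely to show the ranking logic:

```python
# Sketch of cosine-similarity ranking: given an embedded resume and
# embedded job postings, return the k most similar postings.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def top_matches(resume_vec, postings, k=10):
    """postings: list of (job_title, embedding); rank by similarity."""
    ranked = sorted(postings, key=lambda p: cosine(resume_vec, p[1]), reverse=True)
    return [title for title, _ in ranked[:k]]
```

With real sentence embeddings the vectors have hundreds of dimensions, but the ranking step is identical.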
What we learned Our team had to do a lot of research on the structure of .pdf files in order to extract relevant information from them. What's next for ResuMatch Our website is live at https://resumatch.online/ ! We plan to collect more data to improve the classification of the job titles and increase the range of job titles available. Built With flask machine-learning mongodb natural-language-processing react tensorflow Try it out resumatch.vercel.app github.com
ResuMatch
Helping the unemployed find work during the COVID19 Pandemic using NLP! A simple resume drop is all you need.
['Borna Sadeghi', 'Haohui Liu', 'Ansh Gupta', 'Eric Andrechek']
['Best use of MongoDB Atlas', 'Best COVID-19 Hack', '1st Place: Charity Donation']
['flask', 'machine-learning', 'mongodb', 'natural-language-processing', 'react', 'tensorflow']
8
9,986
https://devpost.com/software/tutor-team
login home page bottom home page top threads that display on the app questions inside the thread clicked..but wait there's none add a question to the thread! see all the newly added questions click on a question to find the answer..but wait there's none add an answer to the question! see all answers displayed for that question Create an account to find others to interact with! login to your account Inspiration We noticed that since we are no longer at school, it's a lot harder to ask questions of classmates or get tutored in topics that we're struggling in. To solve that problem, we created Tutor Team, an app that will allow students to ask and answer questions, plus video call with other students for tutoring sessions. What it does Tutor Team is an app that lets students give help in subjects they are strong in, and get help in subjects they are struggling in. Students are able to follow classes that they are taking at school (like biology or English) and ask questions relating to those classes. That way, students who are in the same class will be able to see their questions and answer them. If students want more personalized help, they can request a tutoring session, and students in that class will be notified that someone is asking for help. If they want to, they can call the student asking for help and tutor them. How we built it Tutor Team is built using Flutter, Node.js, and MySQL. Challenges we ran into Initially, we were going to use MongoDB, but we had to switch to MySQL since MongoDB deals more with unstructured data, and the whole point of the database was to store threads. It makes the most sense to store threads as structured data, so we had to scrap all the previous work and use MySQL. Accomplishments that we're proud of A lot of us are high school students, and though our app isn't fully functional, we're proud that we were able to work together and get this project started.
All of us started at different levels, and we all progressed from where we started. What we learned Taylor: I learned a lot about app development. I've done a little bit of Flutter before, but this required a lot of reading documentation and watching Flutter tutorials. Mostly I've had experience with web development in the past, so I'm really glad that I can add this to my toolbox. Kaela: I learned so much about different ways to create an app. I'm pretty new to programming, so I didn't even know things like Flutter or Node existed. It was nice to be presented new resources to use in future projects. I also learned a bit more about design, such as how to make a recognizable icon/logo. Yonden: I learned a ton about backend development. I learned about how to structure full stack projects, hashing passwords and authentication, and DBMS technologies like MongoDB and MySQL. I learned how to query data, work with it (JSON data), and a ton about servers and clients, how to build a REST API, and CRUD operations. I learned the difference between relational and non-relational databases. I learned more about Node frameworks like Express, and other things like middleware. I learned how to debug problems more efficiently through software like Postman to test my API's endpoints. In addition to all this, I learned how to read documentation better, get the things I need, and sift through the rest. Best of all, though, I learned how to collaborate with a team, come up with an idea, and execute it as best as possible. What's next for Tutor Team Right now, we have a lot of back end for the app, so the next step for us is to expand our front end. We don't really have a lot connecting the two. This is mostly because of database hosting issues. After that, we hope to add video call functionality. Built With flutter mysql node.js Try it out github.com
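The "threads as structured data" point above, which drove the switch from MongoDB to MySQL, can be sketched with a small relational schema. The project's backend is Node.js with MySQL; SQLite is used below only so the sketch runs anywhere, and all table and column names are illustrative, not the app's actual schema:

```python
# Sketch of a relational schema for threads -> questions -> answers,
# mirroring the structured-data argument for choosing MySQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE threads   (id INTEGER PRIMARY KEY, class_name TEXT);
    CREATE TABLE questions (id INTEGER PRIMARY KEY,
                            thread_id INTEGER REFERENCES threads(id), body TEXT);
    CREATE TABLE answers   (id INTEGER PRIMARY KEY,
                            question_id INTEGER REFERENCES questions(id), body TEXT);
""")
db.execute("INSERT INTO threads (class_name) VALUES ('Biology')")
db.execute("INSERT INTO questions (thread_id, body) VALUES (1, 'What is osmosis?')")
db.execute("INSERT INTO answers (question_id, body) VALUES (1, 'Diffusion of water across a membrane.')")

# All answers posted in a given class thread, via two joins:
rows = db.execute("""
    SELECT a.body FROM answers a
    JOIN questions q ON q.id = a.question_id
    JOIN threads t   ON t.id = q.thread_id
    WHERE t.class_name = 'Biology'
""").fetchall()
```

Because every question belongs to a thread and every answer to a question, these joins are trivial in SQL, which is exactly the structure that was awkward to model as unstructured documents.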
Tutor Team
The place to give and get homework help during quarantine.
['Taylor Ziegler', 'Alayna Nguyen', 'K.D Dotiwalla', 'Kaela Brunner', 'Yacine abdelouhab']
[]
['flutter', 'mysql', 'node.js']
9
9,986
https://devpost.com/software/maskvi
MaskVi Demo Screenshot Demo Video https://youtu.be/S0bw1w5RFR0 PLEASE NOTE BEFORE RUNNING PROGRAM Please use OpenCV 3.x.x. This is because one of the classifiers is not compatible with the newer versions. Thank you! Inspiration We took inspiration from general facial recognition practices in computer science. By utilizing standard OpenCV feature detection, we were able to detect whether or not someone is wearing a mask based on what facial features the computer is able to detect. What it does Our software uses concepts from standard feature detection algorithms to try and find particular features that are indicative of whether one is wearing a mask or not. It firstly looks for the eyes in order to determine if a person is there or not. If it is able to find the eyes, it then looks for the mouth of the individual. If it is able to find a mouth, the software knows you are not wearing a mask and returns that no mask is present. However, if it is unable to find a mouth, it means that something is covering it and hence, assumes that you are wearing a mask and returns this. How I built it We created a simple feature detection program in Python using the OpenCV library and the Haar Cascade classifiers. The program uses a webcam to analyse each frame and identify various facial features (the eyes and mouth). Based on what facial features are detected, an output of whether or not a person is wearing a mask is returned in real time. Challenges I ran into One of the classifiers did not seem to work with the newer versions of OpenCV. We tried to get it to work but ended up deciding it was not worth the time and just used an older version of OpenCV. What I learned I was able to refine my knowledge of the OpenCV library. Built With opencv python Try it out github.com
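The eyes-then-mouth decision described above is independent of the camera loop. The detection itself uses OpenCV Haar cascades (eye and mouth classifiers run via `detectMultiScale`), which needs a webcam; this sketch shows only the decision rule applied to the cascade results, and the function name and status strings are illustrative:

```python
# Sketch of MaskVi's decision rule, applied to the rectangle lists that
# OpenCV's detectMultiScale would return for the eye and mouth cascades.
def mask_status(eye_boxes, mouth_boxes):
    """eye_boxes / mouth_boxes: (x, y, w, h) rectangles from the cascades.
    No eyes -> nobody in frame; eyes but no mouth -> something covers the
    mouth, so assume a mask; eyes and mouth -> no mask."""
    if not eye_boxes:
        return "no person detected"
    if not mouth_boxes:
        return "mask detected"
    return "no mask detected"
```

In the real program this function would be called once per webcam frame, after running both Haar classifiers on the grayscale image.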
MaskVi
OpenCV software that automatically detects whether or not an individual is wearing a mask
['Jeremy Jun-Ping Bird', 'Joshua Bird']
['Best App']
['opencv', 'python']
10
9,986
https://devpost.com/software/c-trac-app-for-tracking-corona-hotspots
Inspiration During the current COVID-19 pandemic, I see health workers curing patients, doctors innovating new medicines, the police controlling crowd movement, and even bus drivers helping people get back home. As a future engineer, I felt like my contribution was none, so I felt motivated to do my part, try to bring a positive change, and make sure my product can also be used in a future pandemic. The problem our project solves: We can all agree that this pandemic needs to be over soon so we can meet our loved ones. To contain the pandemic, governments are using contact tracing apps (CTAs). Research says that if contact tracing is done correctly it can reduce the number of cases three-fold, so why is the number still rising? The problem with these CTAs is that they only tell you whether or not you have come in contact with an infected person; what they don't tell us is where that person caught the infection (the parent source). Let's take an example: if there are two people, X and Y, and Y gets infected, then X will be notified by current CTAs that he might have caught the infection because he came in contact with Y. But they don't tell us from which PLACE Y got the infection. This is crucial: if we don't find that PLACE, many other people who visited it may get infected. What our project does: Our project C-TRACK is the first reverse contact tracing app of its kind. Here is how reverse contact tracing works: whenever the user visits a place frequently, like a shopping mall, that location is saved inside the app. If the user is later found COVID-positive, we can track down that shopping mall, and the app will send a notification to all other people who have visited it. Health authorities can then sanitize and lock down that specific shopping mall instead of locking down the whole locality. The stored location data is fully encrypted and can only be accessed by the user.
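The reverse-tracing mechanism above can be sketched with plain counters. The real app is Android/Java and encrypts stored locations; in this sketch the in-memory dicts, the function names, and the "3 visits makes a place significant" threshold are all illustrative assumptions:

```python
# Sketch of reverse contact tracing: count visits per user per place,
# treat frequent places as significant, and on a positive test report
# the likely hotspot(s) plus everyone else who visited them.
from collections import defaultdict

visit_counts = defaultdict(lambda: defaultdict(int))  # user -> place -> visits
FREQUENT = 3  # visits before a place is saved as significant

def log_visit(user, place):
    visit_counts[user][place] += 1

def frequent_places(user):
    return {p for p, n in visit_counts[user].items() if n >= FREQUENT}

def report_positive(user):
    """User tests positive: return their likely hotspot(s) and every other
    user who visited those places, so they can be notified."""
    hotspots = frequent_places(user)
    to_notify = {u for u, places in visit_counts.items()
                 if u != user and any(p in hotspots for p in places)}
    return hotspots, to_notify
```

So if Y visits a mall often, then tests positive, the mall is flagged as the probable source and X, who also visited the mall, gets notified, which is exactly the X/Y example above.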
It also has two additional features: 1) It sends a 'wear mask' notification when the user leaves the house and a 'wash hands' notification when the user returns; this small precaution can bring a huge change by keeping you and everyone around you safe. 2) Whenever the user enters a government-certified hotspot or red zone, he gets a warning notification. Challenges I ran into 1) We lacked financial support, as we had to make this app from scratch. 2) Problems collecting data on government-certified hotspots; we also had to do a lot of research on the spread pattern of COVID-19. 3) It was hard for us to get in contact with health workers, as they were busy fighting an increasing number of patients, so we talked to retired doctors. 4) It took us a long time to test it in real conditions, since during lockdown it was too hard to go outside; finally, after the lockdown loosened a bit, we tested it and it gave an excellent result. What I learned All team members of C-TRACK were able to grow their areas of competence by participating in the whole process of idea definition, market research, validation, prototyping, and presentation. Through different mentor sessions, we learned that problems can be approached by many means, but most importantly our mission should be clear. What's next for C - TRAC App for tracking corona hotspots Our app can also be used for a future pandemic or for seasonal diseases such as swine flu or bird flu. Built With android android-studio java
C - TRACK 1st ever reverse contact tracing App
Our app is the first reverse contact tracing app, locating possible hotspots from the user's location history, and the first safety awareness system, notifying the user to wear a mask and wash their hands.
['Anup Paikaray', 'Arnab Paikaray']
[]
['android', 'android-studio', 'java']
11
9,986
https://devpost.com/software/health-bot
Inspiration In our day to day lives, we often overlook our mental health, physical health, emotional well-being, or all three at once. Even when we decide to invest more time in ourselves, we don't know where to start. So we made a website where you can find all the information in one place. We have sections for Home Workouts, Outdoor Workouts (to keep you physically fit), Mental Health (for filling you up with positive thoughts), Healthy Eating (to keep your immune system in good shape) and Connecting with Loved Ones (so you always have someone to talk to). How we built it We used Wix to design the site. We used UiPath to scrape the video links from YouTube and populate the database. The Google Action for Google Assistant was made using the Google Actions console and Dialogflow. The Google Action allows you to get workouts and tips from your smartphone, Google Home or any device with Google Assistant in it. The outdoor workouts will certainly make you feel more connected with nature 😉 Challenges we ran into We struggled with creating follow-up intents with Dialogflow and getting the Google Action to work. Accomplishments that we're proud of Learning how to use Dialogflow was a great accomplishment for us. Also, this was our first time making a website, so we are pretty proud of how the UI turned out. What we learned Making Google Actions, creating websites using Wix, populating data using UiPath. What's next for Health Bot Adding a whole variety of workouts and making a skill for Alexa too. Built With dialogflow google-actions uipath wix Try it out maggiefloat.wixsite.com
Health Bot
Taking care of your mental and physical well-being
['Maggie Hou', 'Julia Ma', 'Jatin Dehmiwal', 'Victor Yau']
['Best UiPath Automation Hack']
['dialogflow', 'google-actions', 'uipath', 'wix']
12
9,986
https://devpost.com/software/teens-against-covid
GroupsPage Desktop ChartsPage Desktop ChartsPage Mobile RegisterPage Mobile MainPage Mobile Adding an exercise (Dialog form) MainPage Desktop Inspiration Before the pandemic, we really enjoyed working out in our college gym. In early March, Trinity College Dublin was closed to students and education continued online. Moreover, all the sports facilities were closed. Since we could not meet in person in the gym to motivate each other, we decided that we should create a tool for sharing our workouts. Now that some of us have left Ireland to go back to our home countries because of COVID-19, we understood that participating in "Teens against Covid-19" is a perfect opportunity to showcase our skills and at the same time support each other in such uncertain times. Furthermore, we conducted a survey among our classmates to find out whether they experienced problems similar to what we faced. Those students who used to exercise in the gym or take part in sports clubs claimed that they have lost motivation to work out because they do not feel the healthy competition with their mates, relatives, and neighbors, which they used to experience almost every day before the virus spread. Our project aims to help people stay healthy and fit during the quarantine, which is a very important goal, as our mental health is directly dependent on our physical health. What it does The idea of our project is to keep yourself and your friends motivated by posting what exercises you have done. The user has the opportunity to add more than 15 types of exercises to their profile. They can create groups to share their accomplishments as well as join already existing groups. In addition, users can see the comparison between their performance and the performance of the other group members. Finally, the app can be accessed both by PC/laptop and smartphone, which makes it easy to use.
How we built it We used various technologies in the production of our project, such as the Vaadin framework, the Spring framework, MongoDB, and Heroku Cloud. The main programming language was Java, and we also used a bit of CSS to style our Vaadin UI components. We started developing our project on the 18th of May and finished on the 25th. There were 7 students on our team: 6 developers and 1 manager. Challenges we ran into The main challenge was learning a lot of new technologies. During our first year in college we learned the basics of OOP in Java, so encountering real frameworks was not that easy for us. However, we managed to overcome all the difficulties in development by setting specific team roles (backend, frontend) and by spending most of our free time acquiring new knowledge. Furthermore, there were 6 developers on our team, so we needed to manage the code we wrote. We used a GitHub repository for this, and we had daily team meetings. Accomplishments that we're proud of The most important accomplishment for us is applying what we learned in college to solve a real-world problem. The software we created has the full potential to benefit our community and we are proud to present it here. In addition, we managed to learn a lot of new things in a short period of time. Finally, this is our first hackathon together and we are very happy with how we organized our team. What we learned We learned a Java web framework called Vaadin. Moreover, we had to learn Spring Security and Spring Data and integrate them with a Vaadin application. We used MongoDB to store information about users, which was a problem at first since we were all used to SQL databases. Finally, we deployed our application using the Heroku cloud platform. We found out what continuous integration really means. What's next for Fit Together We are planning to create a full-fledged social network out of the prototype application we developed.
We are going to add messages, posts, and friend lists. Another important step is to find support from people and governments all over the world so that our project can help people fight the pandemic. Finally, we intend to add more comparison features to our project as well as refactor our code. We think that this web app will help get people motivated to exercise and stay healthy during the pandemic and after it ends. Built With css heroku java mongodb spring vaadin Try it out fit-together.herokuapp.com github.com drive.google.com
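The in-group comparison described above (each member posts exercises; the group sees who did what) can be sketched with a small leaderboard. The real app is Java/Vaadin backed by MongoDB; the Python below, with its exercise records and totals-based ranking, is only an illustrative stand-in:

```python
# Sketch of Fit Together's group comparison: members log exercises,
# and a per-group leaderboard totals their reps for comparison.
from collections import defaultdict

workouts = []  # each entry: {"user", "group", "exercise", "reps"}

def add_exercise(user, group, exercise, reps):
    workouts.append({"user": user, "group": group,
                     "exercise": exercise, "reps": reps})

def group_leaderboard(group):
    """Total reps per member of `group`, best first, so members can
    compare their performance with the rest of the group."""
    totals = defaultdict(int)
    for w in workouts:
        if w["group"] == group:
            totals[w["user"]] += w["reps"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

A fuller version would weight different exercise types rather than summing raw reps, but the group-scoped aggregation is the core of the comparison feature.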
Fit Together
People are experiencing a lack of motivation to do sports because of the pandemic. FitTogether is a web app made to help them stay fit and healthy during quarantine by motivating each other.
['Pavel Petrukhin', 'Vitali Borsak', 'Cian Jinks', 'thompsm12', 'Shohinabonu Shamshodova', 'Anton Tiscovschi', 'CSAjchan Mamedov']
[]
['css', 'heroku', 'java', 'mongodb', 'spring', 'vaadin']
13
9,986
https://devpost.com/software/digital-roads
Inspiration
After the initial shock of the COVID-19 shutdown, we started to notice the toll it had taken on our Grandparents. While we were still able to stay in contact with our teachers and friends through technology, we found our Grandparents to be quite isolated and frightened by the situation. Our Grandparents have always supported us, so we thought it was time for us to step up, help them feel connected to other people, and give them something to look forward to. We started by setting up a Zoom call with our Grandparents and "visiting" a zoo, screen-sharing the live webcams from the San Diego Zoo. They had so much fun seeing the animals and sharing stories with us that we made it a weekly adventure! As the shutdown continued and the news worsened, we decided to add another weekly family activity: since we were already fighting a pandemic, why not add dragons and trolls to the mix! So we began our weekly D&D adventure with our Grandparents and family. The games were time-consuming and clumsy at times, as this was our first time playing, and our Grandparents started off as observers; however, our Grandpa soon leveled up to become Tor, a brave dwarf. Weekly D&D has since expanded with an additional game night for Pictionary. Finally, one of the best side activities we have done was, instead of our zoo visit one week, looking through the Chicago LIFE collection on Google Photos. Scrolling through the pictures as a family, Grandma and Grandpa were able to reminisce and share stories from growing up around Chicago. While there is some work involved in planning these activities, it has been the best part of this shutdown. We have experienced so much joy helping our Grandparents that we wanted to make a website that would make it easier for people of different generations to connect and spend quality time together at a safe distance.
What it does
The website allows the user to find different zoos, museums, games, and more to virtually visit on their own or with other people. Once finished, there will be sections dedicated to virtual visits, virtual games, and journaling.

How we built it
We wrote the code in Sublime Text and used HTML, CSS, JavaScript, jQuery, and the Google Maps API to build the website.

Challenges we ran into
The most difficult part was figuring out how to use the Google Maps API. We wanted the Google Map on one side of the page and links to the zoos/museums on the other side. This took a lot of research and a lot of trial and error, but we did not give up and it finally worked.

Accomplishments that we're proud of
We are most proud of figuring out how to use the Google Maps API in the way we envisioned. Specifically, it felt most rewarding to be able to hover over one of the zoo/museum logos and have the corresponding Google Maps marker play a bounce animation. The Google Map was something we both wanted in the website from the beginning, because we wanted to show people just how far everyone can "travel" the world despite being in quarantine.

What we learned
We learned a lot about coding with HTML, CSS, JavaScript, and jQuery. We have worked with these languages in previous hackathons, but those had shorter timeframes, so it was difficult to retain the information. This time we were able to spend time learning the languages, and it made the coding much smoother.

What's next for Digital Roads
For the 'Adventures' category, we plan to add many more virtual zoos and museums. Additionally, for the 'Stores' section, we are going to help spread awareness about smaller businesses in one hub. For the 'Games' category, we would like to add a way for people to play together, and we will add more games.
We plan to incorporate a 'join server' or 'create lobby' option so people can either play with family/friends from a Zoom call or find new friends by joining a server. In the 'Journal' section, we are going to add a way for people to journal their feelings about COVID-19, with the option to share their entries with others or save them as a memoir for themselves. We also plan to add a 'Remember' section where people can look at old photos of different locations around the world to serve as conversation starters for family/friends on Zoom calls. We hope that Digital Roads can serve as a hub for people of different generations to come together despite social distancing and stay connected.

Built With css google-maps html javascript jquery Try it out github.com
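The hover-to-bounce behavior Digital Roads describes could be wired up roughly as below. This is only a minimal sketch, not the site's actual code: the `data-marker` attribute, element structure, and function names are assumptions, and the real site uses jQuery rather than plain DOM calls. The one standard piece is `marker.setAnimation(google.maps.Animation.BOUNCE)`, which is the Maps JavaScript API call for the bounce effect.

```javascript
// Sketch: bounce a Google Maps marker while its logo image is hovered.
// Assumes the Maps JS API is loaded and each logo element carries a
// data-marker attribute holding the index of its marker in `markers`.

// Pure helper: find the marker a logo element refers to.
function markerForLogo(logoEl, markers) {
  const i = Number(logoEl.dataset.marker);
  return Number.isInteger(i) ? markers[i] : undefined;
}

// Wire hover events: start bouncing on mouseenter, stop on mouseleave.
// `maps` is the google.maps namespace, passed in explicitly so the
// wiring can be exercised without a browser or an API key.
function wireLogoHover(logoEl, markers, maps) {
  const marker = markerForLogo(logoEl, markers);
  if (!marker) return;
  logoEl.addEventListener("mouseenter", () =>
    marker.setAnimation(maps.Animation.BOUNCE));
  logoEl.addEventListener("mouseleave", () =>
    marker.setAnimation(null));
}

// Typical page setup would then be something like:
//   document.querySelectorAll(".logo").forEach((el) =>
//     wireLogoHover(el, markers, google.maps));
```

Passing `null` to `setAnimation` is how the Maps API stops an animation, so the marker settles as soon as the pointer leaves the logo.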
Digital Roads
We made a website that makes it easier for people of different generations to connect and spend quality time together at a safe distance.
['Lily Dolph', 'Tessa Dolph']
[]
['css', 'google-maps', 'html', 'javascript', 'jquery']
14
9,986
https://devpost.com/software/covid-19-testing
Inspiration
To decrease the death rate due to COVID-19.

What it does
It sends a patient's information to the nearby hospitals.

How I built it
I built this with LanbotIo.

Challenges I ran into
Decreasing the workload of the government and doctors.

Accomplishments that I'm proud of
I built this in 5 days.

What I learned
Everything is possible through hard work.

What's next for COVID-19 TESTING
I am still making my application simpler.

Try it out covid-19testing.000webhostapp.com
COVID-19 TESTING
A web application that will decrease the number of COVID-19 patients by sending patients' details to the hospital.
['Hrithik Kumar']
[]
[]
15