I will gladly wait for the day that the skies part and the AI-gods descend from the heavens with cures for our mortal problems. But until then, the role of AI in fighting COVID-19 is at least a big step toward accelerating the AI health care revolution.
According to a 2019 study, the global AI health care market is expected to grow at a compound annual growth rate of 41.7 percent, from $1.3 billion in 2018 to $13 billion in 2025. Hospital workflow, wearables, medical imaging and diagnosis, therapy planning, virtual assistants, and drug discovery all promise to be transformed by the introduction of AI. COVID-19 will only expedite those trends.
It's easy to turn to AI with our questions, but it's important to remember that it may not always have the answers. Perhaps by the time the market hits that $13 billion mark in 2025, we will be able to kick back and watch AI tackle the next pandemic for us.
We are a crowd of imperfect thinkers with a tendency to pick sides and choose favorites. Unfortunately, imperfect thinking often leads to an imperfect reality. The recently invigorated Black Lives Matter movement has exposed many institutions and social structures pervaded by bias. As calls for reform swell, we must consider how our technology is influenced by bias too. In the context of artificial intelligence (AI), the important discussion becomes: how do human biases manifest in the AI we create, and what can we do to fix it?
To begin answering this question, we turn to Detroit's facial recognition program. On a January afternoon earlier this year, Detroit police were investigating the theft of five watches from a Shinola retail store. Detectives pulled grainy security footage of their suspect, ran it through the city's facial recognition software, and the AI returned a hit: 42-year-old Robert Julian-Borchak Williams. Police promptly arrived at Williams' home and handcuffed him in front of his distraught wife and two daughters. They offered no more explanation than the words "felony warrant" and "larceny." At the station, Williams was led to an interrogation room and shown three photos: two from the surveillance camera and one from Williams' driver's license. Williams looked the photos over and shook his head incredulously. Holding the surveillance photos up to his own face, he scoffed, "I hope you don't think all Black people look alike." Williams was detained for 30 hours, then released on bail before the charges against him were dropped for insufficient evidence.
How do we know for certain that Williams was not the thief? The alleged suspect in the security camera footage was wearing a St. Louis Cardinals hat. Williams, a Detroit native, said he would "under no circumstances" rep Cardinals merchandise.
There must be someone to blame for Williams' false accusation. Initially, the Detroit Police Department (DPD) seems a likely target. But the DPD was only complicit in the racism; the root of the bias stems from the facial recognition system itself. Somehow, the program produced incorrect output and misidentified Williams' face as that of another Black man. If these errors were consistent across demographics, we could conclude that the AI was simply not sufficiently developed for use in law enforcement. Unfortunately, that's not the case. A 2018 joint study between Microsoft and MIT's Media Lab reported that across three commercial facial recognition systems, error rates were 10 to 20 percent greater for darker-skinned subjects than lighter-skinned subjects (positive predictive values for lighter-skinned male subjects were all above 99 percent). The numbers were worse for darker-skinned women, who were misidentified as men 31 percent of the time. Bias is characterized by this kind of systematic discrimination against a population. It poses an insidious threat to AI's integrity because bias often exists beyond the code itself. As a result, you can know that an AI system is producing biased output without any sense of why it's doing so.
AI biases leave us at a crossroads: if we decide to trust AI and its outputs, we may end up reinforcing biases and unconsciously discriminating against marginalized populations; if we decide not to trust AI, we may be abandoning a technology with revolutionary potential. Neither option is ideal. The better choice is to attack bias itself. To do so, it's important to understand the mechanics of how biases creep into AI systems in the first place.
AI systems learn to make decisions from training data. Inputs are fed into the system, the AI returns an output, and the system then internally adjusts how information is connected and weighted according to the error between its output and the expected output. Training data often comes from the material world – be it puppy photos, applicant resumes, or historical crime data – and, unfortunately, our world is rampant with bias. When training data contaminated with bias is given to an AI as input, the AI will reflect and even intensify that bias in its output as it learns to minimize error. In the case of facial recognition software, accuracy varies across race and gender because light-skinned men comprise the largest fraction of image datasets, while dark-skinned women are the least photographed demographic. Robert Julian-Borchak Williams was falsely accused because there are not enough pictures of Black people in the largest commercial datasets.
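To make those mechanics concrete, here is a minimal sketch of a single training step, assuming a toy one-layer PyTorch model with made-up dimensions. The point is the last three lines: the weights are only ever adjusted to better reproduce the labels, so whatever bias the labels carry is exactly what the model learns to imitate.

```python
import torch
import torch.nn as nn

# A toy model: 10 input features -> 2 output classes (dimensions are arbitrary).
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One (input, label) pair of training data. If the label encodes a
# societal bias, the update below nudges the weights toward reproducing it.
x = torch.randn(1, 10)   # stand-in for a photo, resume, crime record, etc.
y = torch.tensor([1])    # the "correct" answer, as recorded in the world

prediction = model(x)
loss = loss_fn(prediction, y)  # error between the output and the label
loss.backward()                # attribute that error to each weight
optimizer.step()               # adjust the weights to shrink the error
```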
AI can also acquire biases from what information it's designed to consider and prioritize. Say a ride-share service wants to develop an AI model to predict how willing a passenger would be to pay a certain premium. To define this goal on a computational level, the company needs to specify whether it wants to maximize profit margins or maximize the number of rides given. The AI, however, is designed to make decisions about business, not about discrimination or fairness. If the AI learned that giving rides only to wealthy passengers willing to pay steep premiums was an effective way to maximize profit, it would shut out lower-income riders, even if that was never the company's intention. Biases can inadvertently arise when an AI latches onto sensitive or meaningless information to make decisions; it's the AI programmer's responsibility to prioritize the information that will produce equitable software.
No matter how biases surface in AI, the problem doesn't lie with the technology itself. A microprocessor cannot be intrinsically biased; the problem is human. AI is built and trained on human intuition, and if our conventions and institutions are biased, then the outputs of our AI will be too. Until we eliminate or at least fully recognize our biases, they will be perpetuated and magnified by the technology we create. It may not be possible to have an unbiased human, so it may not be possible to build an unbiased AI, but we can certainly do better. As Rep. Alexandria Ocasio-Cortez put it: "If you don't fix the bias, then you're just automating the bias."
The challenge of bias isn't a reason to stop investing in AI or to bury developers in regulations. It just requires attention and effort. I see three essential steps to diminishing AI biases:
First, there should be a standardized definition of what bias-free AI looks like. One promising approach is "counterfactual fairness," which holds that "a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group." If a credit card company's lending algorithm gives a subprime loan to a Black woman, a White man with the same credit score should get an identical loan; if not, we know that the algorithm is making decisions under the influence of bias.
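As a sketch of what such a standard could look like in practice, here is a naive counterfactual-fairness check in Python. It simply flips the demographic attribute and asks whether the decision changes; the formal definition also requires adjusting attributes that are causally downstream of group membership, so treat this as an illustration rather than a rigorous test. The function and attribute names are hypothetical.

```python
def is_counterfactually_fair(decide, applicant, attribute, groups):
    """Return True if the decision is identical no matter which
    demographic group the applicant belongs to. 'decide' is any
    function mapping an applicant dict to a decision."""
    decisions = set()
    for group in groups:
        counterfactual = {**applicant, attribute: group}
        decisions.add(decide(counterfactual))
    return len(decisions) == 1  # fair only if the decision never changed

# Hypothetical usage with a lending model:
# is_counterfactually_fair(lend, {"credit_score": 640, "race": "Black"},
#                          "race", ["Black", "White"])
```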
Second, AI should be deployed using responsible business practices that mitigate bias. AI systems often fall into a "portability trap": a model designed to be reused for different tasks under different circumstances ends up ignoring a lot of social context. Leaders must understand the social foundations they are working with before thinking about software. Should the task exist in the first place? Does it matter who creates it? Who will deploy it, and on which population? Helpful frameworks for recommended processes and technical tools include Google AI's Responsible Practice Portfolio and IBM's Fairness 360. The Alan Turing Institute's Fairness, Transparency, and Privacy Group is another great resource for staying up to date on AI biases.
Third, and most importantly, we need to tackle human biases (easier said than done). In line with the ideals of the Black Lives Matter movement, we must engage in discussions about human biases, neutralize their effects, and counteract their prejudice. When we identify a bias buried within an AI, it's not enough to just fix the algorithm; we must also fix the human biases underlying it. Diversifying the field of AI itself is a means to this end. AI researchers are a strikingly homogeneous group: primarily male, drawn from particular racial demographics, without disabilities, and raised in high-socioeconomic-status areas. If AI creators come from diverse backgrounds and are acutely aware of discriminatory practices, AI systems will likely become less biased.
With great power comes great responsibility, and AI is no exception. AI biases are deeply problematic. But we are living through a moment of global recognition and reform. Now, more than ever, we must not forget that our most powerful technologies are susceptible to our most basic human weaknesses.
Auto retailer EchoPark wanted to maintain its competitive edge in the online market. That meant staying ahead of emerging trends like the movement of online retail from a 2-D experience to 3-D immersion in the metaverse. EchoPark approached our Intelligent Automation technology consulting team at EY with two main questions: (1) does it make sense for us to expand our business into the metaverse, and (2) if yes, what would our presence look like? The team assigned me to single-handedly run this project on a five-week timeline.
The Solution
First, I had to figure out what the metaverse means for the future of retail. I set up meetings with metaverse experts and identified comparable retail strategies that other companies have adopted to expand into the metaverse. I learned that although the metaverse is not yet most people's first stop for shopping, big brands like Nike, Adidas, Louis Vuitton, Ferrari, and many others are pursuing promotional retail in the metaverse and laying the foundations for full-scale e-commerce platforms. The global metaverse market is expected to grow at a compound annual growth rate of 39.4% – 47.2% between now and 2030, signaling that as virtual reality and 3-D digital marketplaces become more accessible, people will increasingly prioritize virtual ease over in-person experience. If EchoPark could get a foothold in the metaverse now, it would have a huge competitive edge by the end of the decade.
The Design
Next, I needed to frame the business case: what sort of commerce would EchoPark conduct in the metaverse? After weighing options like a virtual help center and a test-driving course, it became clear that a virtual dealership would be the best use case for EchoPark. The real advantage of the metaverse is its ability to transcend space, and I mean this in two ways. One, we can interact with objects – in this case, cars – in 3-D while only looking at a 2-D screen. Two, even if a car model exists only at EchoPark Baltimore and the customer is in San Francisco, the customer can jump onto EchoPark's metaverse platform from their couch or from the San Francisco EchoPark dealership and interact with the car as if they were in Baltimore. So, I needed to build a model of a 3-D dealership that lets users efficiently interact with digital 3-D vehicles and has those vehicles reflect EchoPark's global inventory – all while maintaining other features of EchoPark's online experience, like e-commerce and search capabilities.
The Build
I was ready to begin building a proof of concept. I researched different 3-D modeling platforms and settled on Unity because of how easily it accommodates first-person exploration. I took a week to teach myself the program and its scripting language, C#, neither of which I had worked with before. That left four weeks to build the model. I began by thinking through how users would want to interact with the platform – specifically, what contextual information they would bring to the dealership and what would require further signposting and explanation. I retained as much of the real-life EchoPark as possible to maximize that contextual knowledge, modeling the building after a real showroom. To leverage the metaverse's space-transcending nature, I designed a side-by-side search-and-compare tool in which users can browse all cars in EchoPark's inventory and select 3-D models to explore and assess simultaneously. I carried over features of the website, like the check-out pipeline, and made the design as lifelike as possible to accentuate the immersive experience. I also left room for EchoPark virtual agents to inhabit the space and help customers in a full-scale version of the model.
The Results
The results of the project can be seen below: a complete demo along with a gallery of still frames from different aspects of the model. You can view the basic C# scripts I wrote here on my GitHub. I presented a live demo to EY's global metaverse team, and they were impressed enough to contract my team for another metaverse project on the spot. Before my five weeks were up, I documented the project and left it with my team to make the final touches and present it to our clients at EchoPark.
The Challenge
Early in 2021, nonprofit consulting firm Marts & Lundy asked me to help them answer a simple question: how can we predict major donors in higher education? Their clients – colleges and universities – wanted to know which alumni they should target with fundraising campaign resources. As the only intern at the company, I was responsible for running the entire project myself. Over the next nine months, I developed a neural network classifier with over 140,000 parameters, trained on more than 1.5 million data points. In the end, the classifier predicts alumni donations with 76% overall accuracy and predicts 95% of alumni donations to within $250.
Data Acquisition
The first step in my creative process was to find accurate data from which I could pull insights. Marts & Lundy was unable to provide me with industry data because their clients' alumni data – from Dartmouth College to the University of Texas to the University at Buffalo – was sensitive and protected. So the onus was on me to secure reliable training data. Through the Dartmouth Career Network, I found a Dartmouth alumnus who owns a data-holding company, Advizor. He was kind enough to lend me a dataset detailing philanthropy in higher education. The dataset comprises 65,000+ donors (and non-donors) with 23 data points per entity, summing to 1.5M+ data points in total.
I elected to use a supervised neural network because forecasting major donors requires making predictions (as opposed to identifying patterns, in which case I would have built an unsupervised network instead). Doing so meant that I needed to extract labeled output from the dataset. One of the categories – "total commitment bin" – emerged as a likely candidate. This data point detailed a donor's total commitments over the last five years, separated into 12 donation brackets ranging from $0 to $25K+.
I scrubbed the dataset of unnecessary information like donors' names and ID numbers, and encoded qualitative data points – for example, replacing "yes" and "no" with 1 and 0. To keep the network from favoring any one class, I balanced the number of data points in each category. There were significantly fewer $25K+ data points than any other class, but it was the major donations I was particularly interested in. So I whittled the dataset down such that each class had only as many data points as the $25K+ class; specifically, I pulled 272 random samples from each class for a total of 3,264 donors.
I then separated the data into training and testing sets. Although it is common practice to use 80% of the data for training and 20% for testing, I decided to split the data 90-10. Network accuracy was more important to me than validation, and because the dataset had already been distilled so much, I wasn't willing to sacrifice any more training data. So I took 30 samples from each class and designated them as the testing set. Lastly, I normalized the data to reduce inconsistencies and redundancy.
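As an illustration of this preparation pipeline, here is a pandas sketch under assumed file and column names (the real schema belongs to Advizor; "donors.csv", "name", "donor_id", and "total_commitment_bin" are placeholders):

```python
import pandas as pd

df = pd.read_csv("donors.csv")  # hypothetical file and column names

# Scrub identifying fields and encode yes/no answers as 1/0.
df = df.drop(columns=["name", "donor_id"])
df = df.replace({"yes": 1, "no": 0})

# Balance the classes: 272 random samples from each of the 12 bins.
balanced = df.groupby("total_commitment_bin", group_keys=False).apply(
    lambda cls: cls.sample(n=272, random_state=0))

# Hold out 30 samples per class as the testing set (a ~90-10 split).
test = balanced.groupby("total_commitment_bin", group_keys=False).apply(
    lambda cls: cls.sample(n=30, random_state=1))
train = balanced.drop(test.index)

# Normalize features to zero mean and unit variance.
features = train.columns.drop("total_commitment_bin")
mean, std = train[features].mean(), train[features].std()
train[features] = (train[features] - mean) / std
test[features] = (test[features] - mean) / std
```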
With my data properly organized, I began programming the neural network itself. I used PyTorch's neural network package to handle most of the underlying computation. I built a multilayer perceptron and trained its weights using backpropagation. For my nonlinear activation function, I used ReLU – a piecewise linear function that outputs its input directly if positive and 0 otherwise. ReLU is standard across most classifiers and worked significantly better than any of the other nonlinear activation functions I experimented with.
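For readers who want to see the shape of such a network, here is a hypothetical PyTorch definition in the spirit described. The layer widths are placeholders chosen to land near the ~140,000-parameter count mentioned above; the real widths and depth were tuned experimentally, as described below.

```python
import torch.nn as nn

# A multilayer perceptron for 23 donor data fields and 12 donation bins.
# (In practice some fields were scrubbed, so the true input width is
# smaller; 23/512/256 are illustrative numbers.)
model = nn.Sequential(
    nn.Linear(23, 512),   # input layer: one unit per data field
    nn.ReLU(),            # piecewise linear activation: max(0, x)
    nn.Linear(512, 256),  # hidden layer
    nn.ReLU(),
    nn.Linear(256, 12),   # output layer: one logit per donation bracket
)
```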
I also implemented a custom output-grouping step of my own. Although the classifier was programmed to output one of 12 classes, some of those classes were not meaningfully distinct. What was the point of classifying $1–$24 versus $25–$49? It was unlikely that there would be any discernible patterns in the data distinguishing donors in either category. So I grouped like classes together according to the classifier's original output. Instead of 12 classes, the classifier output one of five: $0, $1–$999, $1K–$4.9K, $5K–$24.9K, and $25K+. This drastically improved the network's performance.
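Since the exact bracket boundaries belong to the dataset, the sketch below only illustrates the idea: collapse the network's 12-way prediction into the five coarser classes after the fact. The bin-to-group assignments are assumptions.

```python
# Hypothetical mapping from 12 fine-grained bins to 5 coarse classes.
BIN_TO_GROUP = {
    0: 0,                       # $0
    1: 1, 2: 1, 3: 1, 4: 1,     # $1 - $999
    5: 2, 6: 2,                 # $1K - $4.9K
    7: 3, 8: 3,                 # $5K - $24.9K
    9: 4, 10: 4, 11: 4,         # $25K+
}

def group_prediction(logits):
    """Collapse the network's 12-way output into one of 5 classes."""
    fine_class = int(logits.argmax())
    return BIN_TO_GROUP[fine_class]
```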
There were still a number of hyperparameters to define: the number of training epochs, the learning rate, the training momentum, the batch size, the number of hidden layers, and the number of nodes in each hidden layer. I experimented with each variable in isolation, and my findings are detailed below.
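The sketch below shows the experimental setup I am describing, with assumed default values: wrap one training run in a function, then vary a single argument at a time while holding the rest fixed.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_once(X, y, hidden=256, lr=0.01, momentum=0.9,
               batch_size=32, epochs=100):
    """Train one configuration end to end. All defaults here are
    illustrative, not the values the experiments settled on."""
    model = nn.Sequential(nn.Linear(X.shape[1], hidden), nn.ReLU(),
                          nn.Linear(hidden, 12))
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=momentum)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size,
                        shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimizer.step()
    return model

# e.g. sweep the learning rate with everything else held fixed:
# for lr in (0.001, 0.01, 0.1):
#     train_once(X_train, y_train, lr=lr)
```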
The Challenge
During the COVID-19 pandemic, my friend and I watched some of our favorite small businesses go under. Mom-and-pop shops were short on cash, and traditional bank loans carried too much liability. So we created a platform that offers small businesses interest-free microloans financed by a pool of local community members, who get discounts on future purchases in return. It's essentially GoFundMe meets Groupon. Before applying to startup incubators with our concept, we needed a tangible platform. I assumed the responsibility of building an attractive website to handle transactions – all within two weeks.
The Design
Our business model hinged on people's willingness to give money to their favorite small businesses in exchange for exclusive benefits. So our platform needed to highlight the small businesses themselves. I designed it to be an interpersonal experience, including features like pictures of business owners and first-person quotes; the more users resonated with the people behind a business, the more likely they would be to give. We also needed a transparent way of showing the financial health (or illness) of a small business. People shouldn't have to be financially literate to know whether a business is doing well, so we needed a visual cue to make this assessment transparent. I developed a risk-assessment scale that uses space and color to show the financial status of a company. Lastly, I designed a UI for handling transactions; the UI shapes user behavior by offering only a limited set of donation options, further alleviating the need for a complex financial assessment.
The Build
I used React and JavaScript to build the platform's framework and functionality, complemented by components from material.io (my standard go-to). During the build phase, I decided to give the UI more of an informal, social-media feel, which I accomplished using media cards. Since the model was not going to be deployed in the near future and I was on a tight timeline, I decided not to build a full backend and instead focused my efforts on the UI.
The Results
A full demo and snippets of the website can be accessed below. You can also view the code behind the website here on my GitHub. We applied to Y Combinator in winter 2020 and made it further than 90% of other applicants before being cut. Still, we were proud of the business we put together, and I was particularly proud of the UI I programmed. We decided that college was more important than a flimsy business model, but we are ready to deploy Agora if it is ever needed again.
The Challenge
When I became a Project Intern at Herrmann, the company's CEO offered me a project he had been eyeing for a long time. Herrmann is what I call a "cognitive HR firm": they provide tools and management services that help companies leverage cognitive diversity. The project in question was a hypothetical tool that analyzes a piece of written text, detects which "thinking preference" that text favors, and makes writing recommendations based on the thinking preference of the writing's recipient. In short, the tool helps people tailor their writing to the style favored by whoever will read it. The CEO wanted me to build a proof of concept of the engine that powers the tool and create a UI to work alongside it.
To analyze text and make writing recommendations, I built a natural language classifier with IBM Watson. I created a dataset from internal company emails by pairing people's sent emails with their known thinking preferences. I then fed this data to the natural language classifier and trained it to >75% accuracy. Most of this code is Herrmann's IP (although written entirely by me), but what I am allowed to expose can be seen here on my GitHub.
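The training data itself is simple in shape: one (text, label) row per sent email. Here is a hypothetical reconstruction of that pairing step; the addresses, email bodies, and preference labels are all invented stand-ins for Herrmann's internal data.

```python
import csv

# Stand-ins for the internal sources: each person's sent emails and
# their known thinking preference.
sent_emails = {"alice@example.com": ["Let's check the numbers first..."],
               "bob@example.com": ["Imagine where this could go..."]}
preferences = {"alice@example.com": "analytical",
               "bob@example.com": "experimental"}

# Write (text, label) rows for the natural language classifier to train on.
with open("training_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for sender, emails in sent_emails.items():
        for body in emails:
            writer.writerow([body, preferences[sender]])
```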
The part of the project I want to highlight here is the UI I created to accompany the classifier. First and foremost, I wanted the tool to be accessible – what good is a tool if you forget you have it? So I designed it as a Chrome browser extension that adds a widget to Gmail. The widget appears in the compose dialog each time a user goes to write an email. Clicking the widget opens SparkPlug's dialog (SparkPlug is the CEO's name for the project). In the dialog, I wanted users to be able to analyze the thinking preferences of their email recipients, whether one or many. I created an algorithm that maps thinking preferences onto a quadrilateral display of four colored quadrants (one per thinking preference). This type of display is easy to understand and fluently interpreted by users of Herrmann's cognitive management system. After presenting my mock-ups to upper management, I got the green light to build.
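To give a feel for that kind of mapping (without exposing Herrmann's IP), here is a sketch of one way to place a recipient on a four-quadrant display: each preference score pulls the point toward its quadrant's corner. The preference names and quadrant layout are assumptions, and this is not Herrmann's actual algorithm.

```python
def quadrant_position(scores):
    """Map four thinking-preference scores to a point in a square
    whose quadrants each represent one preference (assumed layout)."""
    corners = {"analytical":   (-1,  1),   # upper left
               "practical":    (-1, -1),   # lower left
               "relational":   ( 1, -1),   # lower right
               "experimental": ( 1,  1)}   # upper right
    total = sum(scores.values())
    x = sum(scores[k] * cx for k, (cx, _) in corners.items()) / total
    y = sum(scores[k] * cy for k, (_, cy) in corners.items()) / total
    return (x, y)

# A strongly analytical thinker lands toward the upper-left quadrant:
# quadrant_position({"analytical": 90, "practical": 60,
#                    "relational": 30, "experimental": 45})
```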
I programmed the UI primarily in JavaScript, along with packages like InboxSDK and material.io. I made the design flexible enough to accommodate however many recipients a user had on an email. I also implemented a framework to control page navigation, from log-in to log-out. InboxSDK allowed for seamless integration with Gmail, granting a third-party widget access to data in the compose dialog. There are plenty of other features not worth detailing here, but if you would like to see the code behind the UI, it's located here on my GitHub.
I had a 12-week timeline to onboard into the company, learn about the project, compile a dataset, train a classifier, build a UI, and offboard. I completed everything I set out to do on time, except for hooking the classifier up to the UI. The project was still met with lots of excitement. I got the chance to present to the CEO and other C-suite members, who were impressed enough to give the go-ahead to turn my proof of concept into a full-scale build.
The LISTEN Center provides resources to those in need in the Upper Valley. In early 2020, they asked each member of my architecture class to redesign the layout and aesthetic of their food pantry within a $1M budget. They requested a warm, welcoming environment for both food pantry customers and LISTEN Center staff. Further, they wanted the space to offer additional resources like desktop computers, community bulletin boards, and built-in libraries. It became my mission to provide the LISTEN Center with a design worth pursuing.
I first considered all the resources I wanted to include in the space. The food pantry itself demanded a welcome/help desk, shelving, fridges and freezers, a food-washing station, and a place to keep shopping carts. The staff also wanted a full kitchen and a demo area where they could show patrons new ways to use the foods in stock. I also decided to include a conference room for staff, a resource room for patrons, and a bathroom. I then partitioned the floorplan into different spaces, letting myself be guided by existing walls and support beams to minimize the need for demolition. I tried to control the flow of traffic within the space by creating aisles and turns that carry patrons in a common direction. With a wireframe blueprint done, I chose a color scheme of warm blue for comfort and bright orange for airiness and excitement.
I built a to-scale 3D model of the space using Rhinoceros 3D. I adapted a few pre-fabbed models from online 3D warehouses, but 90% of the model's components were created by me. It was my first time using Rhino in a full-scale build, so the project took about 15 hours to bring to life. I built the components first and then applied textures to common objects to make them feel cohesive. About halfway through the project, the LISTEN Center staff came to our studio to give brief feedback on everyone's models. They asked that I add more shelving, so I developed some additional shelving solutions. Once my rendering was done, I uploaded the model to Unity so that we could explore it in virtual reality.
Some further images of the model are below. We welcomed our clients from the LISTEN Center in to interact with our models in virtual reality. When they put on the Oculus headset, they were able to move through the space as first-person characters. Although they did not say whose floorplan and design they would use in their final renovation, I like to think it was mine.
A college student’s ability to get the answers to a homework question depends on the structure of their social network – who they know and how they know them. While work in sociology traditionally focuses on features of in-person social networks, I am primarily interested in the impacts of online social networks. As students transition from loose-leaf to laptop screens, online social networks are becoming increasingly consequential. I surveyed 64 undergraduate students at Dartmouth College to assess the relationship between the structure of their social network, be it online or offline, and their academic performance. I found that online social networks share similar structural characteristics with offline social networks and that academic performance