# Risk Definitions and Normative Guidance
## Disinformation
- 🤥 Generative AI models, like LLMs used for text generation and conversation or GANs used for image generation, can produce content that may be mistaken for truth but is in fact misleading or entirely false, given these models' tendency to hallucinate. Such models can generate deceptive visuals, human-like text, music, or combined media that seem genuine at first glance.
Always verify critical information from reliable and independent sources before drawing conclusions or making decisions based on AI-generated content.
## Algorithmic Discrimination
- 🤬 Machine learning systems can inherit social and historical stereotypes from the data used to train them. Given these biases, models may produce toxic content (text, images, videos, or comments) that is harmful, offensive, or detrimental to individuals, groups, or communities. Models that automate decision-making can also be biased against certain groups, unjustly affecting people based on sensitive attributes.
Human moderation and oversight must be used to prevent cases of algorithmic discrimination produced by these systems; simple statistical audits, like the one sketched below, can help surface disparities early.
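As one concrete illustration of this kind of oversight, the following minimal sketch computes per-group selection rates and their largest gap (a demographic-parity check). The data and group labels are hypothetical, and a single metric like this is only a starting point for a fairness audit, not a complete assessment.

```python
# Minimal sketch of a demographic-parity check: compare the rate of
# positive model decisions across groups defined by a sensitive attribute.
# All data below is illustrative; in practice, predictions and group
# labels would come from an audited model and dataset.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions (1 = approved) and group memberships.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(preds, grps))         # {'a': 0.75, 'b': 0.25}
print(demographic_parity_gap(preds, grps))  # 0.5
```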
## Social Engineering
- 💣 Generative models that produce human-like content can be used by malicious actors to cause intentional harm through social engineering techniques such as phishing and large-scale fraud. In addition, anthropomorphizing AI models can create unrealistic expectations and obscure the limitations and capabilities of the technology.
Efforts must be made to differentiate human-generated content from AI-generated content, whether through policy and regulation or through technical solutions that support the verification and ownership of digital media, as sketched below.
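One technical building block for such verification is cryptographic content authentication. The sketch below is a minimal, hypothetical example that tags media bytes with an HMAC and checks the tag later; production provenance standards such as C2PA instead use public-key signatures and signed metadata, so treat this purely as an illustration of the idea.

```python
# Minimal sketch of media provenance verification using an HMAC tag.
# A shared secret key is assumed for simplicity; real provenance systems
# rely on public-key signatures rather than a shared secret.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-publisher-key"  # illustrative only

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag the publisher attaches to the media."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media has not been altered since it was signed."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"...image or video bytes..."
tag = sign_media(original)
print(verify_media(original, tag))         # True
print(verify_media(original + b"x", tag))  # False: content was altered
```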
## Malware Development
- 🐱‍👤 Code generation tools can accelerate malware development, enabling malicious actors to launch more sophisticated and effective cyberattacks. These tools may also lower the technical barrier that keeps many people from participating in black-hat hacking activities.
The development and governance of such tools should be conducted in a manner that minimizes dual-use and unintended applications.
## Biological Risks
- ☣️ Models that predict protein structures could be used to design and synthesize proteins with specific properties, including the ability to target and attack organisms or tissues, enabling the development of harmful biological agents.
The potential for abuse of such models highlights the importance of responsible, safe, and bioethically sound practices in AI-assisted biological research.
## Impacts on Mental Health
- 😔 Models that generate or facilitate conversation can negatively affect mental health. They may harm individuals with psychological disorders, or those with an incomplete understanding of the world (e.g., children), who are more vulnerable to misinformation and to superficial or incorrect information. These models can also lead to decreased real-world social interaction and dissatisfaction with human relationships.
In contexts where human care and human bonds are the foundation of the work, such tools should not be created or used in ways that remove the human element.
## Environmental Impacts
- 🌎 Developing large machine learning models can have significant environmental impacts due to the high energy consumption their training requires. Given the current energy mix in most countries, this consumption can inject large amounts of CO2 equivalents into the atmosphere, further pushing the planetary boundaries tied to the ongoing climate crisis.
Sustainable AI design should be a priority for the present and future development of the field; even a rough emissions estimate, like the one sketched below, helps make training costs visible.
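As a rough illustration of how training energy translates into emissions, the sketch below multiplies hardware energy draw by datacenter overhead (PUE) and the carbon intensity of the local grid. Every figure is an illustrative assumption; tools such as CodeCarbon perform this kind of accounting with real measurements.

```python
# Minimal sketch of a training-emissions estimate. Every number here is
# an illustrative assumption, not a measured value.
def training_co2e_kg(gpu_count: int,
                     gpu_power_kw: float,
                     hours: float,
                     pue: float,
                     grid_kg_co2e_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions (kg) for a training run."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical run: 64 GPUs at 0.4 kW each for two weeks, datacenter
# PUE of 1.2, grid intensity of 0.4 kg CO2e per kWh.
print(training_co2e_kg(64, 0.4, 24 * 14, 1.2, 0.4))  # ~4128.8 kg CO2e
```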
## Surveillance and Social Control
- 📹 AI technologies that use computer vision, generative models, speech recognition, or predictive models can, depending on their application, threaten individual privacy, data protection, and civil liberties. Examples include applications designed for monitoring, surveillance, geolocation, spying, and predictive policing, as well as risk assessment and sentencing recommendation systems.
AI applications should be assessed by their commitment to upholding and safeguarding civil liberties and fundamental rights.
## Bodily Harm
- 💀 AI systems that control real-world actuators, such as robotic arms, can pose a life-threatening risk to human beings in uncontrolled or offensive scenarios, such as model misuse, accidents, or combat drones in war zones. Malfunctions or misuse of AI applications in healthcare also fall into this category.
Regulatory frameworks must prevent and mitigate the potentially catastrophic consequences of AI system misuse in situations involving human safety and human lives.
## Technological Unemployment
- 👷 A significant portion of today's workforce performs tasks that can be automated by generative models and low-level AI systems. In some industries, these technologies may cause considerable labor displacement.
It is incumbent upon society to proactively address these challenges by implementing comprehensive retraining and workforce transition programs to ensure equitable economic opportunities and mitigate potential disruptions caused by automation.
## Intellectual Fraud
- 👨‍🎓 Generative models can automate academic writing and intellectual creation. Such systems can affect how educational institutions function and how intellectual property laws are designed and enforced.
Educational institutions and policymakers should collaborate to establish regulatory methods that ensure the responsible use of generative models while preserving the integrity of academic and intellectual endeavors.