Introduction
Because my project relies on AI, I carried out an ethical analysis to address potential unwanted effects in the future. In this research I looked at the different safety challenges of AI and at potential solutions for these challenges. By looking into these issues I wanted to get a better understanding of how to make my project safer and more ethical.
Abstract
In my analysis I looked at the ethical issues of using AI models, specifically GPT-4 and DALL-E 3, as these are being used in my project. While AI has many benefits, it also has serious problems such as safety issues and bias. Understanding these problems is important for me and for my internship company.
One big issue is AI hallucination, where the AI generates false or misleading information. This can be dangerous because people might start to trust the AI too much. Another problem is harmful content: early versions of GPT-4 could generate hateful comments or violent instructions, and although the latest version has reduced this risk, it has not eliminated it. Disinformation is also a problem, as GPT-4 can create realistic but misleading content, which makes it a tool people can use to spread false information. Becoming too dependent on AI is another risk, since it can lead to a decline in learning and in keeping up your own skills. Lastly, both GPT-4 and DALL-E 3 can reproduce social biases and stereotypes, which can result in unfair treatment of certain groups.
To address these issues I looked at possible solutions, such as adding extra safety rules to the AI's prompts, using guardrails to validate input and output, and having a human review the AI-generated content before it gets published. A small sketch of how these measures could fit together is shown below. Combined, these measures can help make the use of AI safer and fairer.
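To illustrate the idea, the following is a minimal sketch of how safety rules in the prompt, a simple output check, and a mandatory human-review flag could be combined. It assumes the official OpenAI Python client is used; the rule text, the blocked-term list, and the function name are illustrative placeholders, not the actual implementation of my tool.

```python
# Sketch of the guardrail idea: safety rules in the system prompt, a simple
# keyword check on the output, and a flag so a human reviews every result.
# Assumes the official OpenAI Python client (openai>=1.0); the rule text,
# keyword list, and function name are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SAFETY_RULES = (
    "You are a content assistant. Do not produce hateful, violent, or "
    "misleading content. If you are unsure about a fact, say so explicitly."
)

BLOCKED_TERMS = ["hate speech", "violent instructions"]  # placeholder list

def generate_with_guardrails(user_prompt: str) -> dict:
    """Generate text with extra safety rules and mark it for human review."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SAFETY_RULES},
            {"role": "user", "content": user_prompt},
        ],
    )
    text = response.choices[0].message.content

    # Very simple output validation: flag anything containing a blocked term.
    flagged = any(term in text.lower() for term in BLOCKED_TERMS)

    # Nothing is published automatically; a human checks every result first.
    return {"text": text, "flagged": flagged, "needs_human_review": True}
```

In practice the keyword check would be replaced by a proper moderation or guardrail tool, but the principle stays the same: the AI output is validated and reviewed before it is published.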
Conclusion
This analysis helped me understand the problems and potential dangers of using AI in my tool. By knowing these dangers, I can better adapt the tool to make it safer and give well-founded input in my advice report on AI safety.