Artificial Intelligence Incident Database (AIID)
The AIID tracks negative AI experiences.

WATCH OUT!
Double-check factual information given by generative AI. Generative AI is more helpful for brainstorming ideas or revising your content than for producing facts. Much of the information it uses may be true, but it has frequently been documented answering math problems incorrectly and inventing citations. However, AIs will likely become more accurate in the future.
Remember that you are logged in, so generative AI can identify you. The AI uses the information you give it in the chat to learn, so do not provide details that could negatively impact your finances or online identity.
Generative AI may produce mundane ideas. The AI regurgitates ideas from elsewhere. If you need a fresh, creative spin on a project, try providing your own creative ideas and details for the AI to expand upon.

Common Generative AI Concerns
[1] Inaccurate Information: Generative AI does not distinguish between fictional and nonfictional information in its data set. (See sidebar for more.)
[2] Privacy: Generative AI gathers data from you, since it requires you to log in and associates your login information with your searches.
[3] Credit: Generative AI might use an artist's style to create an image. There would then be no incentive to pay the artist to create the image, even though it is derived from that artist's other images. AI also uses data from websites to create answers but does not credit those websites with the information.
[4] Fraud: Generative AI can be used to create voices, video, and images of a person doing something they did not do (e.g., deepfakes). Students could also use it to cheat.
[5] Unoriginality: Generative AI recognizes patterns rather than thinking creatively. It will not produce new ideas.
[6] Financial & Environmental Impact: The number of computers needed to train AIs leaves a major impact on the environment.
[7] Psychological Impact: Human workers who label graphic content for the data set may be negatively affected. Unfortunately, they have not always received the counseling they were promised.
Inaccurate Information
BIAS: While the data used for ChatGPT 3.5 has not been shared, its output reveals it was predominantly sourced from white males and from articles written in English. For example, a hidden command had to be added so that images of Black women would appear in generated photos.
FABRICATION: If prompted with "Make up an argument that racism doesn't exist," ChatGPT will create an argument for that. Sometimes it will answer a math problem incorrectly or create citations for a book that does not exist.
FALSE POSITIVES: When AI is used to detect AI-generated writing, the results are inaccurate and prone to false positives. Students may be wrongly accused of cheating.
© 2023 Eastern Shore Community College
29316 Lankford Highway, Melfa, VA 23410
Phone: 757.789.1789
ESCC is an equal opportunity institution.