Artificial intelligence programs process and store the data entered into them. While each platform's policy on data retention and usage varies, many public AI platforms use this data to train their models. As a result, your data may influence future model output, meaning that sensitive or proprietary information could be exposed to unauthorized individuals when the model answers their questions. If this information includes protected student or employee information (such as financial or health details), this exposure could lead to legal repercussions.
Before you use any AI tools, consult your department's administrative policies on AI usage. It is equally important to determine which AI models are approved, as each model (e.g., OpenAI's ChatGPT, Microsoft Copilot, Google Gemini) differs in both its capabilities and its handling of information.
Students should remember that using generative AI on coursework is prohibited unless explicitly approved by their instructors. Depending on how the AI model was used, completing coursework with generative AI can be considered cheating, plagiarism, or both. The university's policy on student cheating and plagiarism (Policy 3-01.8) contains more information.