With the emergence of various Generative Artificial Intelligence tools (e.g., ChatGPT, Google Gemini, DALL-E, Microsoft Copilot, Adobe Acrobat AI Assistant, and others), members of the campus community are eager to explore their use in the university context. This advisory provides guidance on using these tools to support innovation without putting institutional, personal, or proprietary information at risk.
These guidelines also cover permissible use cases of AI in the operations, research, and teaching missions of UNCG. This is intended to be a living document that will be updated as the technology and its common uses evolve. In all cases, AI use should be consistent with UNCG Responsible AI Principles.
Free Generative AI Tools: Approved for use with Level 1 data classification.
Free Microsoft Copilot integrated within the Windows 11 OS: Approved for use with Level 1 and Level 2 data classification.
Microsoft Enterprise Copilot for Microsoft 365: Approved for use with Level 1 and Level 2 data classification.
Claude Pro Plan: Approved for use with Level 1 and Level 2 data classification.
Adobe Firefly: Approved for use with Level 1 and Level 2 data classification.
The purchase or use of any large language model (LLM) or AI-powered chatbot, such as ChatGPT, Claude, Gemini, or similar tools, must include explicit data privacy guarantees. These guarantees should ensure that user prompts, conversations, and interactions are not used to train or improve the underlying models. All AI tools must offer robust data privacy features that protect user information and ensure confidentiality. This standard gives individual users and the institution enhanced privacy and functionality, enabling them to use AI tools confidently while safeguarding sensitive data.
The acquisition and use of AI-enabled products or services requires completion of a UNCG ITS Pre-Purchase Software and Hardware Review. When applicable, a full Security Posture & Risk Assessment will be conducted alongside this process and repeated on an ongoing basis. The Pre-Purchase Software and Hardware Review adheres to the Information Technology Procurement policy located in the UNCG Policy Manual.
Generative AI tools may not be used for the completion of academic work in a manner not allowed by the instructor.
Unless specifically stated in the “Allowable Use” section above, no personal, confidential, proprietary, or otherwise sensitive information may be entered into or generated as output from models or prompts. Student records subject to FERPA, and any other information with a Level 2, Level 3, or Level 4 data classification, should not be used without written review and approval by the University Procurement Office and the Division of Information Technology Services (ITS). This includes:
Creation of non-public instructional materials
Proprietary or unpublished research
Some generative AI providers, such as OpenAI, explicitly forbid the use of their tools for certain categories of activity, including harassment, discrimination, and other illegal activities. An example can be found in OpenAI’s usage policy document.
New Uses of AI: If you are considering a new use of Generative AI in your studies or work, it is your responsibility to consider the ethics and risks involved and to obtain approval from your instructor or responsible unit head. Be sure to visit AI Resources or AI Training for supportive education and training.
Use of AI that involves highly consequential automated decision-making requires extreme caution and should not be employed without prior consultation with appropriate campus entities, including the responsible unit head, as such use could put the University and individuals at significant risk.
Examples include, but are not limited to:
Legal analysis or advice
Recruitment, personnel, or disciplinary decision-making
Seeking to replace work currently done by represented employees
Security tools using facial recognition
Grading or assessment of student work
Personal Liability: Please note that certain generative AI tools use click-through agreements. Click-through agreements, including the OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with the terms and conditions.
For questions regarding privacy with AI tools, contact UNCG ITS 6-TECH. For questions regarding appropriate use of AI tools, the 15 Questions to Ask When Evaluating an AI Vendor, available under Faculty and Staff Resources, is a helpful starting point.
Limitations on Data Provided to AI Tools
Be careful about the type of information and data provided to AI systems as prompts or for analysis. This is especially true for any personally identifiable information. Some of these systems do not provide adequate privacy protection for nonpublic data or intellectual property. At present, any use of ChatGPT or similar AI tools should proceed under the assumption that no personal, confidential, proprietary, or otherwise sensitive information may be used with them. In general, student records subject to FERPA, and any data with a Level 2, Level 3, or Level 4 classification, should not be used.
NEVER share your Banner ID or username and password with AI tools, and always be aware of phishing schemes. Remember, depending on the AI tool, the information you enter could become public. If you are not sure about a tool’s privacy and security, you should assume that the information you enter will become public. This is true for the output that AI tools generate, as well.
If there is any doubt about the security or privacy of an AI tool, or you would like to use any nonpublic data with an AI tool, please contact UNCG ITS 6-TECH.
Use Cases
The section that follows categorizes specific use cases by the risk associated with using AI tools. If you have questions about a use case that is not described below, please contact UNCG ITS 6-TECH prior to using an AI system.
Low Risk: includes use cases that are unlikely to cause significant issues and can be considered safe for most applications, provided the use case meets the general guidelines and is compliant with University policies.
Medium Risk: includes use cases that could present challenges or require careful management to mitigate potential problems. Many of these are situationally dependent and may be considered an acceptable or unacceptable use depending on the details. Data and intellectual property protection are especially important to consider in these cases, along with general guidelines and University policies.
High Risk: includes use cases that could lead to serious legal, compliance, or ethical complications or require substantial oversight and precautions.
General Use Cases
(Low Risk) – In general, content generation is a low to medium-risk use of generative AI tools. AI can generate an initial draft to help save time; just be sure to review the final output for accuracy and appropriateness before you use it. As with anything AI, double- and triple-checking facts is critical. In addition, you should avoid submitting nonpublic data (e.g., personal identifying information or student information protected by FERPA) into the AI system unless you are certain about its data security and privacy policies.
(Low Risk) – In general, using AI to revise or edit your own writing is an effective and safe use of generative AI tools. It is important to review the revision to make sure the AI did not change the meaning of what you were trying to say, and to follow the University’s general guidelines and applicable policies. When entering text for this purpose, remove any identifying information or details that would not otherwise be publicly available.
(Low Risk) – In general, this is a productive use of generative AI tools. Just make sure to follow the University’s general guidelines and applicable policies.
(Low Risk) – Depending on the situation’s specific details and the document’s privacy requirements, summarizing a document can be a beneficial use case for AI, particularly to gain a general introduction to a text or subject area. Please be mindful that AI-generated summaries may not be comprehensive or fully accurate. Therefore, University users are strongly encouraged to refer to the primary documents before making any critical decisions related to the University. In addition, primary documents put into an AI tool in order to create a summary will no longer be confidential (e.g., pre-print research).
(Low Risk), (Medium Risk), (High Risk) – Unless you are analyzing publicly available data, you should use extreme caution when using AI tools to analyze data. The permissibility of this use depends on data classification and the privacy protection offered by the system being used to analyze the data. As a general rule, protected data should not be entered into an AI tool unless you are certain that there is an agreement in place to protect the University’s institutional data. Even when the AI system offers privacy protection, all identifying information should be removed from a dataset before it is uploaded into any AI data analysis tool (contact UNCG ITS 6-TECH for guidance on de-identification). Similar considerations apply when non-public output is expected to be generated, even if the data being analyzed is publicly available.
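To make the de-identification step concrete, below is a minimal sketch of removing direct identifiers from a tabular dataset before upload, assuming Python with pandas. The file name and column names (name, banner_id, email, record_key) are hypothetical placeholders for the identifying fields in your own data:

```python
# Minimal sketch: strip direct identifiers from a dataset before it is
# uploaded to an AI analysis tool. All file and column names below are
# hypothetical; substitute the identifying fields in your own data.
import hashlib

import pandas as pd

df = pd.read_csv("survey_results.csv")  # hypothetical input file

# Drop columns that directly identify individuals.
df = df.drop(columns=["name", "banner_id", "email"])

# If a key is still needed to join results back later, replace the raw
# identifier with a one-way hash instead of keeping the original value.
df["record_key"] = df["record_key"].apply(
    lambda v: hashlib.sha256(str(v).encode()).hexdigest()
)

df.to_csv("survey_results_deidentified.csv", index=False)
```

Note that dropping or hashing direct identifiers is only a starting point; combinations of remaining fields (quasi-identifiers) can still re-identify individuals, so contact UNCG ITS 6-TECH for guidance before uploading anything nonpublic.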
(Low Risk), (Medium Risk), (High Risk) – This depends on data classification and the privacy protection offered by the system being used to analyze the data. As a general rule, protected data should not be entered into an AI tool unless there is a University agreement allowing use for that protection level. For more information about approved tools, see https://software.uncg.edu/.
(High Risk) – There are many complications regarding consent, privacy, and confidentiality when recording, transcribing, and summarizing meetings using any AI tool (e.g., Otter.ai, Read AI). The campus does not offer an enterprise solution at this time and prohibits the use of free AI tools to record, transcribe, and/or summarize meetings.
(Low Risk) – This can be a good use of AI. Here are some sources for advice on how to use AI as a tool to help you prepare for interviewing.
Administrative Use Cases
(High Risk) – This is not an appropriate use of AI. AI can introduce bias and be factually incorrect, so it should not be used to review vendor proposals in the competitive bid process. Additionally, there may be proprietary or protected information in RFP responses that should not be entered into AI systems.
(Medium Risk) – This is not recommended, as it does not play to the strengths of current AI systems. Any market research performed by AI should be rigorously fact-checked. AI might be used as a complementary source of information, but anything AI-generated should be verified by human analysis.
(High Risk) – AI systems are known to demonstrate bias, so reviewing student admission applications is not an appropriate use of AI.
Human Resources Use Cases
(Medium Risk) – AI can be helpful in the writing process, but all of the ideas and concepts should come from the supervisor responsible for the performance appraisal. AI can be used to summarize multiple sources of input and data, which helps formulate thoughts without duplication. With the proper prompts, AI can suggest language that is more constructive or that provides clearer guidance, and it can help create personalized development plans based on the content of the review narrative. Using key points from bulleted lists or notes, AI can streamline a narrative by providing clear, concise feedback, action items, and goals. Again, these must be based on the supervisor’s own findings and evaluation and should NOT be generated by AI. When using AI for performance reviews, confidential, organizational, or personally identifiable information should not be entered into the AI tool. Be aware of the potential for AI to introduce bias or produce inaccurate or false information. It is critical to keep a human in the loop: all material should be reviewed and verified by the supervisor authoring the review.
(High Risk) – Not at this time. Current AI systems are known to demonstrate bias, so reviewing job applicants’ resumes is not an appropriate use of AI. This process should be done by humans (for example, the recruiter, hiring manager, hiring committee). AI might inadvertently discriminate based on protected characteristics given the AI tool’s source data. The Equal Employment Opportunity Commission expects employers who use AI to take reasonable measures to test the algorithm’s functionality in real-world scenarios to ensure the results are not biased.
(Medium Risk) – AI can be used to help articulate the language needed to make your job description more concise and inclusive, keeping in mind that some AI systems show bias that would be especially inappropriate in the recruitment process. All job descriptions should be in accordance with the University’s job description requirements.
(Medium Risk) – While generative AI can be a good place to start when generating outlines and draft text for policies and procedures, all output, just like non-AI output, must be submitted for review by the appropriate committee/governance structure before enactment.
(Low Risk) – This can be a good use of AI. GenAI software can observe a task being completed and then develop training content that teaches others to complete the same task (e.g., https://www.tango.us/). You should not use live/real data in the session used to develop the training aid.
Technical Use Cases
(Medium Risk) – Some standard portions of routine code can be written using AI, as these smaller code blocks do not involve unique techniques, but AI should not be used to write entire programs. As always, use caution when including these programming blocks and test thoroughly.
(Medium Risk) – The increased IntelliSense-style functionality of tools like GitHub Copilot can enable AI to generate routine or boilerplate code blocks for basic functions more quickly, as it predicts what you need to accomplish in code. While AI automates certain tasks, it cannot replace human creativity, intuition, or problem-solving abilities. Always use caution when including these programming blocks and test thoroughly, as shown in the sketch below.
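To illustrate what “test thoroughly” can look like in practice, here is a sketch of the kind of routine, boilerplate helper an assistant might draft, followed by the quick checks a developer should run before adopting it. The function name and validation rule are hypothetical examples, not UNCG standards:

```python
# The kind of routine boilerplate an AI assistant might draft: a small
# input-validation helper. The name and pattern here are hypothetical.
import re

def is_valid_course_code(code: str) -> bool:
    """Return True for codes like 'CSC 100' (2-4 letters, space, 3 digits)."""
    return bool(re.fullmatch(r"[A-Z]{2,4} \d{3}", code))

# Per the guidance above, never adopt generated code untested: exercise
# normal cases, edge cases, and malformed input before deployment.
assert is_valid_course_code("CSC 100")
assert not is_valid_course_code("csc 100")   # lowercase rejected
assert not is_valid_course_code("CSC100")    # missing space
assert not is_valid_course_code("CSC 1000")  # too many digits
```

Even for boilerplate this small, the checks matter: generated code often looks plausible while mishandling exactly these edge cases.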
(Medium Risk) – This can be an effective way to manage large numbers of Stack Overflow posts more efficiently, but be careful when writing your prompt to generate the desired search results, and always verify code before putting it to use on university systems.
(Low Risk) – AI-driven testing tools can execute test cases, identify defects, and validate software functionality. They can accelerate testing cycles and improve software reliability. Human involvement remains critical for reviewing the testing parameters, as illustrated in the sketch below.
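As one illustration of keeping a human in the loop, the sketch below shows AI-suggested test cases captured as a reviewable table that a person approves before the suite runs. It uses pytest; the function under test and the case table are hypothetical:

```python
# Minimal sketch: AI-suggested test cases recorded as a table a human
# reviews and approves before merge. The function and cases are hypothetical.
import pytest

def clamp(value: int, low: int, high: int) -> int:
    """Hypothetical function under test: restrict value to [low, high]."""
    return max(low, min(value, high))

# Cases proposed by an AI testing tool, reviewed by a human before adoption.
CASES = [
    (5, 0, 10, 5),    # in range: unchanged
    (-3, 0, 10, 0),   # below range: clamped to low
    (42, 0, 10, 10),  # above range: clamped to high
    (0, 0, 0, 0),     # degenerate range
]

@pytest.mark.parametrize("value, low, high, expected", CASES)
def test_clamp(value, low, high, expected):
    assert clamp(value, low, high) == expected
```

Keeping the cases in a plain table like this makes the human review step concrete: the reviewer reads and edits the parameters, not the tool’s internals.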