Guidance on Data Stewardship and Artificial Intelligence

ChatGPT and similar Artificial Intelligence (AI) tools have leapt to the forefront of productivity technology. The College recognizes both the opportunities these tools create for staff and back-office functions and the risks they carry. Offices are expected to find many creative opportunities to use AI to improve operations and service to students. This temporary guidance outlines the appropriate use and acquisition of AI technology for staff and operational functions at the College. It is not intended to address pedagogical factors or academic integrity, although some of its content may inform those purposes.

1. No Sensitive Data in Public Engines. Users of ChatGPT and similar AI technology must not integrate, enter, or otherwise incorporate any non-public institutional data into public engines. Using institutional data with these services may result in unauthorized disclosure to the public and may expose the user and the College to legal liability and other consequences.

Once data or information is shared, it becomes potentially accessible to others and cannot be controlled or retrieved. Because the College currently has no contracts with the providers of these open models that protect sensitive data, we are unable to use them with such data.

In time, we expect the College to obtain AI capabilities suitable for use with non-public data. If you have pressing use cases, please contact Library & Information Services to discuss future opportunities.

2. Contracts for AI Must Be Coordinated through Library & Information Services. Individual employees and units are not authorized to enter into technology agreements absent an IT Security Review and prior written authorization from Library & Information Services (LIS). Additionally, Carthage’s Security Terms and Conditions must be incorporated into the contract during the acquisition process. This policy applies to all technology, including AI. The acquisition steps managed by LIS keep the College compliant with cybersecurity law and optimize our technology purchases.

3. Use of AI Must Appropriately Address Ethical and Legal Considerations. Use of AI must comply with legal and regulatory requirements (e.g., anti-discrimination law, FERPA, GLBA) and with ethical considerations, several of which are new with AI. Key considerations are listed below.

  • Fairness and Bias: AI systems should be designed and trained to ensure fairness and avoid bias in their decision-making processes. Developers and prompt authors need to be aware of potential biases in training data and algorithms, and take steps to mitigate them.
  • Transparency and Explainability: AI systems should be transparent, and their decision-making processes should be explainable to users. A lack of transparency can lead to distrust and raise concerns about accountability.
  • Privacy: The collection and use of personal data by AI systems should respect privacy rights and adhere to relevant regulations. Informed consent should be obtained for significant uses that affect individuals.

Some ways AI systems can be implemented to meet these ethical and legal requirements include:

  1. Ensure human judgment is part of the process for final decision-making, so the AI engine is a tool that assists the staff member rather than the decision-maker.
  2. Engage in thorough testing to minimize errors and unintended consequences.
  3. Use interpretable machine learning models that provide insights into how they arrive at their decisions. Models such as decision trees, rule-based systems, or linear models are generally more transparent than complex black-box models like deep neural networks. Open models that can cite sources provide better traceability for research and content development.
  4. Establish an independent auditing process to assess the fairness, bias, and transparency of the model and its responses. Auditing processes can range from simple manual random sampling to an automated redundant AI auditing model.
  5. Never put personally identifiable information into a public AI engine. Consider opportunities to use the technology with anonymized data.
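As a practical illustration of the anonymization suggested in item 5, a prompt can be scrubbed of identifiers before it is submitted to a public engine. The sketch below is illustrative only: the regex pattern, placeholder tokens, and sample names are assumptions, not College standards, and a real workflow would need a vetted list of identifiers to redact.

```python
import re

def anonymize(text, known_names):
    """Redact email addresses and known personal names from text
    before it is sent to a public AI engine. Patterns and placeholder
    tokens here are illustrative assumptions only."""
    # Replace anything shaped like an email address with a placeholder
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Replace each known name (e.g., from a roster) with a placeholder
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

prompt = "Summarize the advising notes for Jane Doe (jdoe@example.edu)."
print(anonymize(prompt, ["Jane Doe"]))
# prints: Summarize the advising notes for [NAME] ([EMAIL]).
```

Simple substitution like this does not guarantee anonymity (combinations of non-identifying details can still reveal a person), so it supplements, rather than replaces, the rule against entering sensitive data.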

When Can I Use a Public AI Engine? Below are some examples of appropriate and inappropriate uses of a public AI engine (such as ChatGPT, Bing AI, or the DALL·E image generator). If in doubt, please consult your supervisor or an LIS manager. There may be opportunities to anonymize your prompt so that you can use these tools.

Probably OK
  • To write a user procedure document
  • To enhance an image for the public website
  • To look for trends or anomalies in anonymized data
  • To edit a non-sensitive letter to parents
  • To generate pros and cons for a hypothetical situation
Not OK
  • To analyze data containing student names
  • To draft a letter to students announcing something not yet public