General Guidance on the Safe Use of AI

1.0 Introduction and scope

AU recognises the importance of using generative AI in education and is committed to its responsible, ethical and appropriate use in teaching and learning, research, and administration.

The ethical use of AI tools can provide significant benefits, and their use will undoubtedly become more commonplace in the next few years. However, there are clear risks and security concerns in employing generative AI. The following guidance highlights these risks and is intended as a high-level guide to best practice when using generative AI tools in particular, across a range of activities within a higher education environment, such as academic research and administration.

It should also be noted that existing policies and procedures will apply to some of the situations described below. All AU IT systems users should familiarise themselves with this document and recognise that they are ultimately responsible for:

  • The output of any AI tool they choose to use;
  • Ensuring the quality of their own work in relation to AI use;
  • Educating themselves about how the AI tool is trained and about the appropriateness of their choice of tool.

2.0 Teaching and learning

For use of AI specifically in teaching and learning contexts, see:

  • Guide to best practice
  • Utilising AI in the Library
  • Where use of AI constitutes Unacceptable Academic Practice
  • LTEU Blog including AI updates

3.0 Risks to confidentiality

AI tools are trained on, and capture, vast amounts of pre-existing data, together with any data you may have provided when using the tool. If you share any personal or sensitive (‘special category’) data with a generative AI platform, it may be accessible to the service provider, its partners and sub-contractors, and it also has the potential to become publicly available. You may well be breaching data protection legislation in this or other ways. Do not share such data with AI tools, particularly when third-party generative AI services are being used. See 5.6 below for further details.

The University’s policies and procedures on data protection will apply in these cases.

4.0 Copyright, intellectual property and confidential business data

The University holds and processes a great deal of information which is business critical, commercially valuable, and highly sensitive. In some cases, this information comprises the intellectual property of the University, and such data or documentation should not be supplied to generative AI tools. Similarly, textual works may be the copyright of the University or of its staff and should not be input into third-party generative AI tools. You should also respect the copyright of third parties, such as authors whose works are held by the Library or form part of online subscriptions available to AU staff and students. Legal responsibility for copyright infringement is likely to lie with the user, not the generative AI tool.

The following policies apply here:

  • Copyright
  • Intellectual Property Policy

5.0 Good practice when using AI

5.1 Use of Copilot and ‘third-party’ tools

AU has access to Microsoft’s Copilot, which is safe and fully licensed for use. This should be the primary option for those wishing to use AI for University business.

Other third-party AI tools need to be approved for AU use via the established security clearance procedures set out in the Cloud Service Security Policy.

5.2 Transparency and responsibility

Wherever possible, the use of AI should be transparent and explainable. Be clear about when you have used AI and the reasons for its use, whether in developing teaching and assessment activities, in research or in administration. This can help to promote academic integrity and prevent misunderstandings or ethical issues.

Individuals and groups using AI technologies are responsible for their actions and must use AI in a responsible and ethical manner.

5.3 Read the ‘Terms and Conditions’

Always familiarise yourself with the Terms and Conditions of any AI tool in order to understand what it does with submitted data, conditions of usage and ownership, the limitations of the tool, potential biases, and any options which are offered to you (such as disallowing the retention of data).

5.4 Reporting

Be prepared to report any misuse of AI technology or any results from AI tools which might breach University rules or UK legislation.

5.5 Broader ethical issues

Be aware that there may be wider ethical implications of using certain AI tools.  Some are highlighted below.

  • Potential inherent biases arising from the training data and models used, including biases relating to gender, race, religion, sexuality, disability and geo-politics, among others.
  • AI trained on copyrighted images, texts, music and code, which are then remixed or reproduced without compensation to the original creators.
  • Use of AI trained on non-profit resources to produce commercial output.
  • Data labelling carried out by underpaid workers in poorer countries.
  • The environmental impact of AI data centres, which use significant amounts of power and are often based in areas where there is little green energy.

5.6 Avoid using certain data

Some organisations prohibit the use of the following data categories in any AI processes:

  1. Passwords and usernames.
  2. Personally identifiable information (PII) or other sensitive or confidential material, irrespective of any perceived lawful basis, including explicit consent. PII is any information that can be used to confirm or corroborate a person’s identity.
  3. Any data that has not been properly anonymised to ensure it is non-identifiable (an illustrative redaction sketch is given at the end of this section).
  4. Any data that is not fully consistent with the University’s policies on Data Protection, Data Processing, GDPR/DPA2018, Academic Integrity, Attribution and Ethics.
  5. Any data related to University Intellectual Property.
  6. Any data that is protected by copyright, unless it aligns with principles like fair use, educational exceptions, or if explicit permission has been granted.
  7. Any prompts or data whose responses might result in reputational damage to Aberystwyth University.
  8. Any non-PII data from third parties where the individual has not explicitly consented for their data to be used with AI, with the exception of data that is clearly already in the public domain.

Should you wish to use any of the above data in conjunction with AI, you must contact Information Services for advice beforehand.
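
As an illustration of point 3 above, the short Python sketch below shows one way of stripping obvious identifiers from text before it is submitted to a generative AI tool. It is a minimal, illustrative example only: the patterns shown (email addresses, UK-style phone numbers and postcodes) are assumed examples of identifiers, and simple pattern matching does not by itself constitute proper anonymisation. Any real processing must still satisfy the requirements listed above; if in doubt, contact Information Services first.

    import re

    # Illustrative sketch only: redact obvious identifiers before text is pasted
    # into a generative AI tool. Pattern-based redaction alone does NOT amount to
    # proper anonymisation: names, addresses and contextual details can still
    # identify a person. If in doubt, do not submit the data.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b(?:\+?44|0)\d[\d\s-]{8,12}\d\b"),
        "UK_POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),
    }

    def redact(text: str) -> str:
        """Replace obvious identifiers with placeholder tokens such as [EMAIL]."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    if __name__ == "__main__":
        sample = "A student (jane.doe@example.com, 01970 123456, SY23 3FL) asked about..."
        print(redact(sample))
        # Prints: "A student ([EMAIL], [PHONE], [UK_POSTCODE]) asked about..."
        # Note that free-text names and other contextual clues would remain.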

Approved Aug 2024. Due for review Sept 2025.