Artificial Intelligence Principles

Posted by:
Veronica Moran

May 26, 2020


Why do we need AI principles?

AI, applied correctly, has the potential to deliver positive, wide-ranging impact and improvement across an organisation. However, as the pervasiveness, complexity, and scale of these systems grow, any lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is a concern wherever AI agents make decisions or form part of the decision-making process.

The AI industry as a whole needs to take the design and application of these systems seriously: the new tools and capabilities cut across boundaries and disciplines, and should be considered holistically, not just technically.

It is therefore recommended that organisations apply a set of overarching principles when designing, developing, and operating AI for use within their organisation.

What are the recommended guiding principles?

1. Be socially beneficial.

  • AI systems should be designed with the best interests of our users in mind. They must operate in a way that is compatible with human dignity, integrity, freedom, privacy, cultural and gender diversity, and fundamental human rights.
  • If, in the future, AI is involved in making automated decisions and processing data through workflows, those decisions should, where applicable, be reviewed under the same policies that govern the organisation’s employees.

2. Be trustworthy and fair.

  • AI systems must be transparent, secure, and consistent in their behaviour. If AI is being used in a system, this must be obvious to that system’s users.
  • They must be free of bias that could cause harm, whether in the digital or physical world or both, to people and/or the University. In the design and maintenance of AI it is vital that the system is checked for negative or harmful human bias, and that any bias – be it gender, race, sexual orientation, age, etc. – is identified and not propagated by the system; a minimal sketch of such a check follows this list.
  • If AI systems are incorporated into decision making, the output of the system should advise a human and should never be the only input to the final decision.
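
As an illustration of the bias principle, here is a minimal sketch of a group-level bias check, assuming a system that makes binary yes/no decisions about people tagged with a protected attribute. The function names and the four-fifths threshold are illustrative assumptions, not a prescribed method:

```python
# Illustrative sketch: compare positive-decision rates across groups
# (demographic parity / the "four-fifths rule"). Names are hypothetical.

def selection_rates(decisions, groups):
    """Rate of positive decisions for each group."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def passes_four_fifths(decisions, groups, threshold=0.8):
    """True for a group if its selection rate is at least `threshold`
    times the highest group's rate; False flags it for human review."""
    rates = selection_rates(decisions, groups)
    highest = max(rates.values())
    return {g: r / highest >= threshold for g, r in rates.items()}

# Ten decisions made by an AI system, tagged with a protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(passes_four_fifths(decisions, groups))
# Group "b" fails the check here, so its outcomes warrant human review.
```

A check like this is a starting point, not proof of fairness: it catches only one narrow kind of disparity, which is why the human oversight described in principle 4 remains essential.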


3. Be built and tested for safety.

  • AI systems should be secure throughout their operational lifetime, and verifiably so where feasible.
  • They need to be robust, operating as intended without malfunctioning and without being compromised by malicious actors.

4. Be accountable to people.

  • A human ethical governance process must oversee AI systems to ensure they are not displaying bias.
  • There must be a clear escalation and governance process that offers recourse if users are unsatisfied.
  • AI system output should be recorded and auditable; a sketch of such a decision log follows this list.
  • For validation and certification of an AI system, transparency is important because it exposes the system’s processes for scrutiny.
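
To make "recorded and auditable" concrete, here is a minimal sketch of an append-only decision log, assuming a JSON-lines file; the field names, file name, and example values are illustrative assumptions:

```python
# Illustrative sketch: one immutable, timestamped record per AI output,
# so decisions can be traced, escalated, and audited later.
import json
import time
import uuid

AUDIT_LOG = "ai_decisions.log"  # hypothetical append-only log file

def record_decision(model_id, inputs, output, human_reviewer=None):
    """Append one audit entry and return its unique reference ID."""
    entry = {
        "id": str(uuid.uuid4()),           # reference for escalation/recourse
        "timestamp": time.time(),
        "model_id": model_id,              # which system/version advised
        "inputs": inputs,                  # what the system saw
        "output": output,                  # what the system advised
        "human_reviewer": human_reviewer,  # who made the final decision
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry["id"]

# Example: the AI advises; a named human remains accountable for the call.
ref = record_decision(
    model_id="loan-scorer-v2",
    inputs={"applicant_id": "redacted", "features": [0.2, 0.7]},
    output={"recommendation": "approve", "confidence": 0.91},
    human_reviewer="j.smith",
)
```

Recording the human reviewer alongside the system's output is what ties this log back to the escalation and governance process above.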


The same principles must apply to third-party systems that use AI. If an organisation uses a third-party system that incorporates AI, it is incumbent on the organisation to ensure that the AI also complies with the principles above.

