Kanteron endorses the Universal Guidelines for Artificial Intelligence
The Universal Guidelines for Artificial Intelligence were created by the Public Voice coalition, which was established in 1996 by the Electronic Privacy Information Center (EPIC) to promote public participation in decisions concerning the future of the Internet.
See also the Madrid Privacy Declaration, a substantial document that reaffirms international instruments for privacy protection, identifies new challenges, and calls for concrete actions.
Full text of the Universal Guidelines for Artificial Intelligence:
Universal Guidelines for Artificial Intelligence
3 October 2018 - Brussels, Belgium
New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them.
We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.
- Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.
- Right to Human Determination. All individuals have the right to a final determination made by a person.
- Identification Obligation. The institution responsible for an AI system must be made known to the public.
- Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
- Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.
- Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.
- Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms.
- Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.
- Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.
- Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.
- Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.
- Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.