@rvaneijk
Last active March 21, 2020 15:41
Ethical Algorithm

Q1: COMPLEXITY vs human-in-the-loop (HITL), human-on-the-loop (HOTL), and human-in-command (HIC)?

  • The book states (p. 192) that the solution to the problems introduced by algorithmic decision-making should itself be in large part algorithmic.
  • The reason is that approaches relying only on human oversight either entail largely giving up on algorithmic decision-making or will necessarily be outmatched by the scale of the problem, and will hence be insufficient.
  • So the question becomes:
    • How do we keep the human in the loop?
    • Could it be a mandatory design requirement that a human be able to explain to another human to what extent humans are in-the-loop (HITL), on-the-loop (HOTL), or in-command (HIC)? (A minimal sketch of how that could be recorded follows this list.)
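One way to make this concrete is to record the oversight mode as an explicit attribute of every algorithmic decision, so the degree of human involvement can always be stated in plain language afterwards. A minimal sketch, with hypothetical names throughout (not drawn from the book):

```python
# Minimal sketch (all names hypothetical): record the oversight mode as an
# explicit attribute of every algorithmic decision, so that the degree of
# human involvement can be explained afterwards.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class OversightMode(Enum):
    HITL = "human-in-the-loop"   # a human approves each individual decision
    HOTL = "human-on-the-loop"   # a human monitors and can intervene
    HIC = "human-in-command"     # a human decides whether and how the system is used at all


@dataclass
class DecisionRecord:
    case_id: str
    model_output: str
    oversight: OversightMode
    reviewed_by: Optional[str] = None  # filled in when a human confirms or overrides

    def explain_oversight(self) -> str:
        """Plain-language statement of how a human was involved in this decision."""
        who = f" (reviewed by {self.reviewed_by})" if self.reviewed_by else ""
        return f"This decision was made under a {self.oversight.value} regime{who}."


record = DecisionRecord("case-001", "approve", OversightMode.HITL, reviewed_by="expert_42")
print(record.explain_oversight())
```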

Q2: CONTEXT ANALYSIS?

To what extent could algorithmic CONTEXT ANALYSIS help us solve the problem?

Q3: TRUST?

We trust elevators. In fact, testing, safety inspections, certification, and accreditation are key to our trust in technology. Are you a proponent of such an approach for algorithmic decision-making?

Q4: ETHICAL COMPLIANCE vs SPEED OF INNOVATION?

How do we ensure ethical compliance while also ensuring that data subjects can benefit from new technological developments reasonably quickly?

Q5: DECISION SUPPORT vs REPLACEMENT?

This question relates to the requirements of technical robustness and safety, and of human agency and oversight. An AI solution can reduce human error, increase diagnostic quality compared to a human, and increase speed. The AI solution is positioned as a "decision support system" and not a "decision replacement system". In decision support situations, a subject-matter expert can interpret and confirm the AI result. To what extent should we require AI solutions NOT to replace human decisions?
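As an illustration of the decision-support positioning, the following minimal sketch (hypothetical names) keeps the model output as a recommendation only; nothing becomes a final decision until a named subject-matter expert confirms or overrides it:

```python
# Minimal sketch (hypothetical names): a decision-support wrapper in which the
# model only produces a recommendation; nothing becomes a final decision until
# a named subject-matter expert confirms or overrides it.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    case_id: str
    suggested_outcome: str
    confidence: float    # the model's own confidence estimate, between 0 and 1
    rationale: str       # short, non-technical explanation shown to the reviewer


@dataclass
class FinalDecision:
    case_id: str
    outcome: str
    decided_by: str      # the human expert is always the decision-maker of record
    followed_ai: bool    # useful for auditing over- and under-reliance on the model


def confirm(rec: Recommendation, expert: str,
            expert_outcome: Optional[str] = None) -> FinalDecision:
    """The expert either accepts the recommendation or substitutes their own outcome."""
    outcome = expert_outcome if expert_outcome is not None else rec.suggested_outcome
    return FinalDecision(rec.case_id, outcome, decided_by=expert,
                         followed_ai=(outcome == rec.suggested_outcome))


rec = Recommendation("scan-42", "benign", confidence=0.87,
                     rationale="No features typical of malignancy detected.")
accepted = confirm(rec, expert="radiologist_A")                                       # expert agrees
overridden = confirm(rec, expert="radiologist_B", expert_outcome="refer for biopsy")  # expert overrides
```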

BACKGROUND

How to build a woke AI, by Melissa Heikkilä, 3/17/20 (This article is part of a special report on artificial intelligence, The AI Issue.)

For a technology that increasingly governs human existence, the artificial intelligence in use today is worryingly bigoted. Algorithms have time and again been proven to DISCRIMINATE AGAINST WOMEN AND ETHNIC MINORITIES, from (1) facial recognition tools that misidentify black people, to (2) medical studies that ignore female bodies, to (3) recruitment tools that prove to be sexist. As Europe looks to bolster its tech sector, officials want to avoid such pitfalls by fostering AI that is fair and unbiased. In April 2019, the European Commission's High-Level Expert Group on AI released a set of seven requirements for ethical AI, namely that it should BE TRANSPARENT, TECHNICALLY ROBUST, ACCOUNTABLE, NONDISCRIMINATORY, PROTECTIVE OF PRIVACY, AND SUBJECT TO HUMAN OVERSIGHT, AND SHOULD IMPROVE SOCIETAL WELLBEING.

Now the challenge is to deliver that in practice.

One oft-cited way to address bias is to hire diverse teams of engineers to develop the underlying technologies. Another is to feed algorithms data that are representative of a society’s diversity — and not skewed toward a particular group. But these fixes are only part of any potential solution. The conversation around ethical AI has “completely missed a trick,” said Alison Powell, a professor at the London School of Economics who is leading a new interdisciplinary research network with the Ada Lovelace Institute called JUST AI. “At the moment, the discussion about ethics is all about an aspirational endpoint,” she added. Creating ethics checklists or principles “might be so high-level and so abstract, that in practice, they don’t mean anything at all, and they don’t change anything at all,” she said.
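The point about representative data can be made a little more concrete: before training, one can at least measure how skewed a dataset is across groups and reweight examples to counteract the imbalance. A minimal sketch with hypothetical data and column names (reweighting is only one of several possible mitigations):

```python
# Minimal sketch (hypothetical data and column names): measure how skewed a
# training set is across demographic groups and derive per-example weights
# that counteract the imbalance.
from collections import Counter

examples = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]

counts = Counter(ex["group"] for ex in examples)
n, k = len(examples), len(counts)

# Weight each example inversely to its group's share of the data, so that
# every group contributes equally to the training objective.
weights = [n / (k * counts[ex["group"]]) for ex in examples]

for ex, w in zip(examples, weights):
    print(ex["group"], round(w, 2))   # group A examples get 0.67, the group B example gets 2.0
```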

Difficult choices

According to Powell, A BETTER WAY TO ASSESS AI TOOLS would be to determine in advance whether their use is appropriate, and if the answer is no, not develop them at all. An examination of the use of facial recognition by police, for example, might find that high error rates in identifying minorities outweigh the tech's benefit. As the head of a separate project named VIRT-EU, Powell aims to help businesses build ethical considerations into their design process. "You have to make difficult decisions," she said. "And those difficult decisions include refusing things sometimes."

ONE CHALLENGE for businesses is deciding not to develop a certain technology if doing so can shut off potential profits. ANOTHER CHALLENGE has to do with the TRADE-OFF BETWEEN EMBEDDING LEGAL AND ETHICAL PRINCIPLES, LIKE PRIVACY, INTO ALGORITHMS AND THE RISK THAT DOING SO WILL REDUCE THEIR ACCURACY, according to computer scientists Michael Kearns and Aaron Roth in their book "The Ethical Algorithm: The Science of Socially Aware Algorithm Design." That, they argue, can sometimes be a price worth paying. Kearns and Roth call for algorithms to be developed with SPECIFIC DEFINITIONS OF THINGS LIKE "PRIVACY" AND "FAIRNESS." It would then be the job of researchers and developers to not only "identify these constraints and embed them into our algorithms, but also quantify the extent of these trade-offs and to design algorithms that make them as mild as possible," they write.

But even when developers decide to build ethical considerations into their algorithms, they still face a considerable problem: THERE IS NO RELIABLE LEGAL STANDARD OF "FAIRNESS," OR NON-DISCRIMINATION, IN EU LAW, FOR EXAMPLE. Depending on the local context and legal tradition, definitions of discrimination vary heavily from country to country, and judges rely heavily on intuition to make their decisions, according to a study from the Oxford Internet Institute published earlier this month.

The trouble for AI developers is that automated systems reflect the humans that create them, including their biases, conscious or not. For example, a recent U.N. study found that almost 90 percent of men and women are biased against women. "We are building things and introducing things into a society with a set of values that we bring along with us," Powell said. To build a woke AI, we'll first have to build a woke society.
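Kearns and Roth's call for specific definitions can be illustrated with differential privacy, one of the definitions discussed in their book: a single parameter, epsilon, fixes the privacy guarantee and simultaneously determines how much accuracy is given up. A minimal, illustrative sketch (not the authors' code):

```python
# Illustrative sketch: epsilon-differential privacy via the Laplace mechanism
# on a simple count query. The single parameter epsilon pins down the privacy
# guarantee and, at the same time, how much accuracy is lost.
import random


def private_count(records: list, epsilon: float) -> float:
    """Return a noisy count: the true count plus Laplace noise of scale 1/epsilon.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy.
    """
    true_count = sum(records)
    # The difference of two independent Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise


data = [1] * 80 + [0] * 20                  # 80 of 100 people have some sensitive attribute
print(private_count(data, epsilon=1.0))     # close to 80: weaker privacy, higher accuracy
print(private_count(data, epsilon=0.05))    # much noisier: stronger privacy, lower accuracy
```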

FATML PRINCIPLES

  • RESPONSIBILITY: Make available externally visible avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate an internal role for the person who is responsible for the timely remedy of such issues.
  • EXPLAINABILITY: Ensure that algorithmic decisions as well as any data driving those decisions can be explained to end-users and other stakeholders in non-technical terms.
  • ACCURACY: Identify, log, and articulate sources of error and uncertainty throughout the algorithm and its data sources so that expected and worst-case implications can be understood and inform mitigation procedures.
  • AUDITABILITY: Enable interested third parties to probe, understand, and review the behavior of the algorithm through disclosure of information that enables monitoring, checking, or criticism, including through provision of detailed documentation, technically suitable APIs, and permissive terms of use.
  • FAIRNESS: Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics (e.g. race, sex, etc).
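As an illustration of what checking the FAIRNESS principle can look like in practice, the sketch below (hypothetical data) compares approval rates across groups and computes a disparate-impact ratio, one common, but by no means the only, operationalisation of non-discrimination:

```python
# Minimal sketch (hypothetical data): compare positive-decision rates across
# demographic groups (demographic parity) and compute a disparate-impact ratio.
decisions = [
    {"group": "women", "approved": True},
    {"group": "women", "approved": False},
    {"group": "women", "approved": False},
    {"group": "men", "approved": True},
    {"group": "men", "approved": True},
    {"group": "men", "approved": False},
]


def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)


rates = {g: approval_rate(g) for g in ("women", "men")}
disparate_impact = min(rates.values()) / max(rates.values())

print(rates)                        # {'women': 0.33..., 'men': 0.66...}
print(round(disparate_impact, 2))   # 0.5: well below the conventional 0.8 ("80 percent rule") threshold
```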

EUROPEAN COMMISSION HIGH-LEVEL GROUP

  • ON TRANSPARENCY
    • The data, system and AI business models should be transparent.
    • Traceability mechanisms can help achieve this.
    • Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned.
    • Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • ON TECHNICAL ROBUSTNESS AND SAFETY
    • AI systems need to be resilient and secure.
    • They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible.
    • That is the only way to ensure that unintentional harm can be mitigated and prevented.
  • ON ACCOUNTABILITY
    • Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
    • Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications.
    • Moreover, adequate and accessible redress should be ensured.
  • ON DIVERSITY, NON-DISCRIMINATION AND FAIRNESS
    • Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups, to the exacerbation of prejudice and discrimination.
    • Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • ON PRIVACY AND DATA GOVERNANCE
    • Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • ON HUMAN AGENCY AND OVERSIGHT
    • AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights.
    • At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop (HITL), human-on-the-loop (HOTL), and human-in-command (HIC) approaches.
    • Respondents were asked to give scores on three elements of human agency and oversight as formulated by the High-Level Group:
    • Fundamental rights: AI systems could negatively affect fundamental rights. In situations where such risk exists, a fundamental rights impact assessment should be undertaken and include an evaluation of whether those risks can be reduced or justified as necessary in a democratic society, in order to respect the rights and freedoms of others.
    • Human agency: AI systems can sometimes be deployed to shape and influence human behaviour through mechanisms that may be difficult to detect, which may threaten individual autonomy.
    • The overall principle of user autonomy must be central to the system’s functionality.
    • Human oversight: human oversight helps to ensure that an AI system does not undermine human autonomy or cause other adverse effects.
    • When using AI, it must be ensured that public enforcers have the ability to exercise oversight in line with their mandate.
    • Oversight may be achieved through governance mechanisms such as HITL, HOTL, or HIC.
  • ON SOCIETAL AND ENVIRONMENTAL WELL-BEING
    • AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly.
    • Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.