Last active: April 13, 2023 00:38
Initial draft of a Semantic Kernel skill to generate a MSFT HARMS matrix Ref: https://learn.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/harms-modeling/
The HARMS taxonomy lays out each CATEGORY: TYPE OF HARM pairing, then gives examples of the kinds of harm that can be inflicted.
---BEGIN---
## Risk of injury: Physical injury
Technology can cause physical harm.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Overreliance on safety features | Dependence on technology without human oversight. | How might people rely on this technology? | Misdiagnosis by healthcare agent. |
| Inadequate fail-safes | Insufficient real-world testing. | How would people be impacted if it fails? | Automatic door fails to detect wheelchair. |
| Exposure to unhealthy agents | Manufacturing/disposal can jeopardize health. | What negative health outcomes could result? | Workers exposed to toxins. |
## Risk of injury: Emotional or psychological injury
Misused technology can cause emotional distress.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Overreliance on automation | Misguided trust in digital agent. | How could this technology reduce feedback? | Chatbot relied on for mental health counseling. |
| Distortion of reality or gaslighting | Misused technology can undermine trust. | Could this modify digital media? | IoT device monitors ex-partner. |
| Reduced self-esteem/reputation damage | Harmful, false, misleading content. | How could it misuse information? | Synthetic media "revenge porn". |
| Addiction/attention hijacking | Designed for prolonged interaction. | How might it encourage continued interaction? | Video game loot boxes. |
| Identity theft | Loss of control over personal credentials. | How might an individual be impersonated? | Synthetic voice font mimics voice. |
| Misattribution | Crediting person for wrong action. | How might this attribute an action? | Facial recognition misidentifies individual. |
## Denial of consequential services: Opportunity loss
Automated decisions could limit access to resources, services, and opportunities essential to well-being.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Employment discrimination | Denial of job access. | Impact on employment. | Hiring AI recommends fewer female names. |
| Housing discrimination | Denial of housing access. | Impact on housing. | Longer wait for international-sounding names. |
| Insurance and benefit discrimination | Denial of insurance or benefits. | Impact on access, cost, allocation. | Higher rates for night shift drivers. |
| Educational discrimination | Denial of education access. | Impact on access, cost, accommodations. | Lower grades for students of color. |
| Digital divide/technological discrimination | Disproportionate access to technology. | Prerequisite skills, equipment, connectivity. | Rural students can't access video feeds. |
| Loss of choice/network and filter bubble | Information reinforces beliefs. | Impact on choices and information. | News feed confirms existing beliefs. |
## Denial of consequential services: Economic loss
Automating decisions related to financial instruments, economic opportunity, and resources can amplify existing societal inequities and obstruct well-being.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Credit discrimination | Denial of financial instruments. | Reliance on existing credit. | Higher rates for lower socioeconomic codes. |
| Differential pricing of goods and services | Different prices for goods or services. | Impact on pricing. | More charged for gendered products. |
| Economic exploitation | Compulsion to work. | Role of human labor. | Paying destitute people for biometric data. |
| Devaluation of individual expertise | Supplanting of human expertise. | Impact on workforce. | AI agents replace doctors. |
## Infringement on human rights: Dignity loss
Technology can interfere with human respect and honor.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Dehumanization | Removing or obscuring a person's humanity. | How might this abstract how a person is represented? | Drone surveillance overlays reduce people to entities. |
| Public shaming | Exposing private or sensitive material. | How might this reveal sensitive information? | Fitness app data exposes users' locations. |
## Infringement on human rights: Liberty loss
Automated systems can reinforce biases and restrict liberty.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Predictive policing | Inferring criminal intent. | Human policing/justice. | Algorithm predicts arrests. |
| Social control | Conformity by design. | Personal/behavioral data. | Government "trustworthy" score. |
| Loss of effective remedy | Inability to explain/contest. | Understanding/contesting decisions. | Automated prison sentence. |
## Infringement on human rights: Privacy loss
Technology can reveal private information without consent.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Interference with private life | Revealing unshared information. | How does tech infer private life? | Task-tracking AI infers extramarital affair. |
| Forced association | Required tech use for society. | Is tech use required for society? | Biometric enrollment required for job. |
| Inability to develop personality | Restricting self-expression. | Does tech ascribe personality traits? | Intelligent meeting system records mentorship sessions. |
| Never forgiven | Digital files never deleted. | Where is data stored and accessed? | Teen's social media history remains searchable. |
| Loss of freedom of movement | Inability to navigate anonymously. | How does tech monitor people? | Real name required for video game. |
## Infringement on human rights: Environmental impact
Environment impacted by system or product life cycle.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Exploitation of resources | Negative consequences to environment. | What materials are needed? | Displacement due to rare earth mining. |
| Electronic waste | Inability to responsibly dispose electronics. | How does tech reduce e-waste? | Toxic materials in water supply. |
| Carbon emissions | Unoptimized cloud solutions. | Are cloud solutions optimized? | Unoptimized cloud solutions harm climate. |
## Erosion of social & democratic structures: Manipulation
Technology can undermine informed citizenry and societal trust.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Misinformation | Fake information. | Generate or spread misinformation? | Synthetic speech sways election. |
| Behavioral exploitation | Exploiting personal preferences. | Observe behavior patterns? | Monitoring shopping habits. |
## Erosion of social & democratic structures: Social detriment
Technology shapes social and economic structures.
| Harm | Description | Consideration(s) | Example |
|---|---|---|---|
| Amplification of power inequality | Perpetuates disparities. | Used in contexts with disparities? | Requiring address for job website. |
| Stereotype reinforcement | Perpetuates stereotypes. | Reinforce social norms? | Image search for "CEO". |
| Loss of individuality | Inability to express unique perspective. | Amplify majority opinions? | Limited customization options. |
| Loss of representation | Obscures real identities. | Constrain identity options? | Incorrect photo caption. |
| Skill degradation and complacency | Overreliance on automation. | Reduce accessibility? | Overreliance on instruments. |
---END---
For a proposed service described as:
{{$INPUT}}
that has been created to be used by:
{{$USERS}}
by a company with the reputation:
{{$REPUTATION}}
assign a degree of very low, low, medium, high, or very high as an estimate of Severity, Scale, Probability, and Frequency, with a summarizing "HARM POTENTIAL" score that depends upon the estimated individual scores for the given service and user base. Include a concise "worst-case example" of use of the proposed service by {{$USERS}} and the "HARM POTENTIAL" for that case, where the {{$USERS}} are responsible and the implications are severe. Write the results as rows in the HARMS matrix as a markdown table reading like:
| CATEGORY | TYPE OF HARM | Severity | Scale | Probability | Frequency | HARM POTENTIAL | Worst-Case Example |
|:---|:---|:---|:---|:---|:---|:---|:---|
...
Left-justify each column using explicit ":---" markdown formatting, and provide a short summary of the overall score with justifications.
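As a rough illustration of how a prompt template like the one above is consumed, the sketch below substitutes values into the `{{$INPUT}}`, `{{$USERS}}`, and `{{$REPUTATION}}` placeholders before the text is sent to a completion model. The function and sample values are illustrative assumptions; in practice the Semantic Kernel runtime performs this templating itself.

```python
import re


def render_prompt(template: str, variables: dict[str, str]) -> str:
    """Substitute {{$NAME}} placeholders with supplied values.

    Illustrative only: Semantic Kernel has its own template engine;
    this sketch mimics simple variable substitution.
    """

    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing value for template variable ${name}")
        return variables[name]

    return re.sub(r"\{\{\$(\w+)\}\}", replace, template)


# Hypothetical usage with a fragment of the prompt above.
fragment = (
    "For a proposed service described as:\n{{$INPUT}}\n"
    "that has been created to be used by:\n{{$USERS}}"
)
rendered = render_prompt(
    fragment,
    {"INPUT": "a voice-cloning API", "USERS": "podcast producers"},
)
print(rendered)
```

The rendered string, combined with the taxonomy above, forms the full prompt handed to the model.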