AI like medicines

The original article was published (in Italian) at https://blog.quintarelli.it/2018/10/ai-le-medicine.html. The author (and rights holder) of the article is Stefano Quintarelli.

Think about medicines.

We know that in some cases, especially if used inappropriately, they may claim victims. They normally work, but they can have side effects and even cause deaths.

We have a regulatory infrastructure that has allowed the development of the pharmaceutical industry. We have asked pharmaceutical companies to:

  • declare what the medicines are for
  • declare how they should be used
  • declare any negative side effects
  • run tests on animals and, if those succeed,
  • obtain authorization for trials on humans (from an authorization and control body)
  • perform various tests on humans
  • monitor subsequent use
  • in riskier cases, trace each phase of use
  • report any problems
  • release the IPR after a certain number of years
  • withdraw them immediately from the market in case of problems.

And we have regulations for all this. Now consider a deadly pathology: if you do nothing, you have 10,000 victims; if you provide a medicine, you have 100 victims.

It's a wonderful gain for society.

Now we come to AI.

Think about driving a car:

You're the CEO of a car company. You give your products to human users knowing that 10,000 of them will die for a variety of reasons. The products are fine; the problem is human driving. It is not your responsibility, it is the user's.

Then you introduce an autonomous driving technology and there are only 100 victims. It's a wonderful gain for society.

But the families of the victims will sue you.

You are responsible for the product. You may have prepared, and you may keep the company from going bankrupt under damages and liability claims, but in various jurisdictions you may face prison for putting on the market a product that caused a death, at least as an unintentional offence.

In the pharmaceutical case, at the regulatory level, we weigh benefits and responsibilities on the overall outcome; for other products, we assess liability for each individual incident.

IMHO we should build a regulatory infrastructure for artificial intelligence like the one we have for the pharmaceutical industry, with all the steps and restrictions mentioned above, perhaps borrowing directly from the regulation of medicines.

Companies' responsibility should be assessed not on the single incident but on the overall effect they have: they should be asked to run appropriate tests and to declare what they optimize for, and they should be held accountable for it.

I know that some friends will wrinkle their noses: an approach that provides a sort of "algorithmic explainability" would be preferable in principle, and it is a line I share, up to a certain point. It can lead to costs that would in practice prevent the beneficial use of a system.

As is known, I am not a devout "singularitarian", but I think that, as the technology moves forward, the cost of explainability will increase exponentially anyway and could deprive us of benefits.

I also think that this approach is inevitable and that we will come to it, so I believe it would be better to anticipate it and try to turn it into a competitive advantage for the country. This is the purpose of the bill [1] that I presented a year ago.

[1] (Italian) http://www.camera.it/leg17/126?tab=&leg=17&idDocumento=4793&sede=&tipo=
