Imagine that you train a neural network to perform a specific task, and you discover it has also learned to do another completely different task, which is very sensitive. Is this possible? What can you do to prevent this?
Let's assume you have a semi-private trained network performing a prediction task of some nature `pred1`, i.e. a network whose first layers are encrypted and whose last layers are visible in the clear. The pipeline of the network could be written like this: `x -> Private -> output(x) -> Public -> pred1(x)`. For example, `pred1(x)` could be the age estimated from a face picture input `x`, or the text translation of a speech recording.
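To make this setup concrete, here is a minimal PyTorch sketch of such a split pipeline. The module names (`PrivateEncoder`, `PublicHead`) and the layer sizes are illustrative assumptions, not the actual architecture; in the real setting the first block would run under encryption and only expose its intermediate output.

```python
import torch
import torch.nn as nn

# Sketch of the semi-private pipeline: x -> Private -> output(x) -> Public -> pred1(x).
# The "Private" part is an ordinary module here; in a real deployment it would run
# encrypted (e.g. on secret-shared weights) and only its intermediate output would
# be visible to the public head.

class PrivateEncoder(nn.Module):
    """First layers: encrypted / hidden in the real setting."""
    def __init__(self, in_dim=1024, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512),
            nn.ReLU(),
            nn.Linear(512, hidden_dim),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)      # output(x): the intermediate representation

class PublicHead(nn.Module):
    """Last layers: visible in the clear, produce pred1(x)."""
    def __init__(self, hidden_dim=128, n_outputs=1):
        super().__init__()
        self.net = nn.Linear(hidden_dim, n_outputs)

    def forward(self, h):
        return self.net(h)      # pred1(x), e.g. a predicted age

private = PrivateEncoder()
public = PublicHead()

x = torch.randn(8, 1024)        # a batch of flattened inputs
pred1 = public(private(x))      # full pipeline: x -> Private -> Public -> pred1(x)
```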