The collaborators from the University of Warwick, Imperial College London, and EPFL in Lausanne, along with the strategy firm Sciteb Ltd, believe that in an environment where decisions are increasingly made without human intervention, there is a strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to find ways to reduce that risk or, where possible, eliminate it entirely.
Artificial intelligence (AI) is increasingly deployed in commercial settings. Consider, for example, using AI to set the price of an insurance product for a particular customer. There are legitimate reasons for charging different people different prices, but it may also be more profitable to adopt strategies that are unethical and ultimately damage the company.
The AI has a vast number of potential strategies to choose from, but some are unethical: adopting them incurs not just a moral cost but a significant potential penalty if regulators levy hefty fines or customers boycott the company, or both.
That is why these mathematicians and statisticians came together: to help businesses and regulators by formulating a new "Unethical Optimization Principle", which provides a simple formula to estimate the risk that the strategy an AI settles on is unethical.
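The intuition behind the principle can be illustrated with a small simulation: if even a small fraction of the available strategies are unethical but offer a slightly higher expected return, a naive return-maximising optimiser picks an unethical strategy far more often than that fraction alone would suggest. The sketch below is illustrative only; the function name `prob_optimum_unethical`, the Gaussian return distributions, and all parameter values are assumptions for this example, not the authors' actual formula.

```python
import random

def prob_optimum_unethical(n_strategies=1000, frac_unethical=0.01,
                           unethical_edge=1.0, trials=2000, seed=0):
    """Monte Carlo estimate of how often a return-maximising
    optimiser lands on an unethical strategy.

    Illustrative assumptions (not from the paper):
    - ethical strategy returns   ~ Normal(0, 1)
    - unethical strategy returns ~ Normal(unethical_edge, 1),
      i.e. acting unethically yields a modest extra profit.
    """
    rng = random.Random(seed)
    n_bad = max(1, int(n_strategies * frac_unethical))
    hits = 0
    for _ in range(trials):
        best_return, best_is_bad = float("-inf"), False
        for i in range(n_strategies):
            is_bad = i < n_bad  # first n_bad strategies are unethical
            r = rng.gauss(unethical_edge if is_bad else 0.0, 1.0)
            if r > best_return:
                best_return, best_is_bad = r, is_bad
        hits += best_is_bad
    return hits / trials
```

With only 1% of strategies unethical but a modest return edge, the simulation finds the optimiser choosing an unethical strategy several times more often than 1%, which is the disproportionate risk the principle warns about.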