Like the discovery of electricity, the harnessing of nuclear energy and the invention of the steam engine, artificial intelligence (AI) is a technology that will affect all parts of society in the future, for better or for worse.
Intelligence of enormous capacity, ubiquitous and available to all, will change the way we work in every type of organisation. From banks to day-care centres, the change will be felt in every sector, and it will happen faster than most people currently expect. The change is already underway.
To ensure that society gets the most out of the technology and steers clear of hazards such as unintentional bias in algorithms, manipulative systems and a lack of transparency in decision-making, it is necessary to establish guidelines that individuals, businesses and public organisations should follow. Together with a leading group of AI-capable organisations in Denmark, we have co-authored the Danish AI Pledge:
AI-based solutions must be created with the best interest of society in mind
AI-based solutions must not be developed to create addiction, generate conflict, or suppress or manipulate behaviour or opinion. AI solutions must be designed and integrated to optimise long-term sustainability, where sustainability encompasses economic, social and environmental aspects.
Knowledge about AI should be made available to everyone
Developers, decision-makers, authorities, investors, designers and others who work with AI must take responsibility for spreading knowledge about artificial intelligence, so that everyone can understand the opportunities and challenges that the technology creates.
AI solutions must be transparent in both function and design
It must be possible for a third party to audit the recommendations and decisions that AI models generate and make. The models must be transparent, and it must be documented how the solutions work and how they generate their results.
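As one minimal sketch of what third-party auditability could look like in practice, a system might record every decision in a tamper-evident log entry. The field names and schema below are illustrative assumptions, not part of the pledge:

```python
import json
import hashlib
import datetime

def audit_record(model_version, inputs, decision, explanation):
    """Build one tamper-evident audit record for a single AI decision.

    The checksum lets a third-party auditor verify that the record
    has not been altered after the fact.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "explanation": explanation,
    }
    # Hash a canonical (sorted-key) serialisation of the record.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["checksum"] = hashlib.sha256(payload).hexdigest()
    return record
```

A real audit trail would also need access controls and retention policies; the point here is only that each decision, its inputs and its explanation are captured together in a verifiable form.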
AI solutions must be tried and tested to withstand systematic and well-informed attacks
As AI solutions spread across all parts of our society managing and optimising a multitude of processes, they will become an increasing target for hackers and other bad actors. Therefore, security around AI solutions must be high and must be considered at the time when the solutions are designed, developed, tested and maintained.
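Part of such security testing is checking that a model's decisions are stable under small, adversarially chosen input perturbations. The toy linear classifier and epsilon value below are illustrative assumptions, used only to show the shape of such a check:

```python
def classify(features, weights, threshold=0.5):
    """Toy linear classifier: 1 if the weighted sum reaches the threshold."""
    score = sum(f * w for f, w in zip(features, weights))
    return 1 if score >= threshold else 0

def robustness_check(features, weights, epsilon=0.05):
    """Return True if no single-feature perturbation of size epsilon
    flips the classifier's decision for this input."""
    base = classify(features, weights)
    for i in range(len(features)):
        for delta in (-epsilon, epsilon):
            perturbed = list(features)
            perturbed[i] += delta
            if classify(perturbed, weights) != base:
                return False
    return True
```

Inputs far from the decision boundary pass the check, while inputs near the boundary fail it, flagging cases where a well-informed attacker could flip the outcome with a tiny change.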
Bias in AI solutions must be documented
The bias contained in AI solutions, and in the data that models learn from, must be documented. This includes biases based on race, gender or socioeconomic status. The use of AI should work to eliminate all known forms of bias.
When AI is used to make decisions in life-and-death situations, the guidelines for these decisions must be discussed and documented in advance
AI solutions should focus on strengthening human judgement, and should not act on their own behalf when considering questions related to human life and death. This is relevant, for example, in situations where AI is used in weapons systems or medical services. In instances where an AI-based solution will have to make a life-and-death decision on its own, for example when implemented in a self-driving car, the lines along which it makes its decision should be discussed and documented in advance.
When designing and developing AI-based solutions, information asymmetry between the parties involved must be addressed
Companies and public authorities should be particularly vigilant when leveraging AI in situations characterised by asymmetric information, meaning situations where one party holds more data, and thereby more knowledge, than the other. Such situations carry an inherently increased risk of abuse of power.