Redefining algorithms: from statistics to engineering
Algorithms relate to data the way lungs relate to oxygen in the human body: they are intertwined on multiple levels. Like (big) data in its early stage, algorithms benefited from quickly diminishing technical limitations. The combination of virtually unlimited data-handling capacity and the explosive growth in data availability gave algorithms the momentum to become the supposedly leading driver of innovation in the 2010s.
However, whereas 99% of all global data only became available in recent years, algorithms have a much longer history. Because algorithms are, at their core, statistics, they go back to the early days when pioneers of mathematics started working on equations that optimize input data for a certain outcome. Given this historical background and our familiarity with math, we got used to the idea that today’s algorithms are ‘just advanced statistics’.
Even though this is not untrue in the purest sense, we must immediately stop treating algorithms as if they are ‘just advanced statistics’. That label gives algorithms an aura of innocence and nerdiness, whereas in today’s data-driven reality they are anything but.
‘Algorithms can result in mind-blowing new insights and breakthroughs for almost any market. But at the same time they can literally ruin a person’s life or cause severe damage to companies or democratic institutions.’
Algorithms require engineering
Redefining algorithms begins by admitting that they are in fact impact-creating machines that need to be engineered in great detail and in accordance with the context in which they are embedded. This more integrated approach means all elements that influence an algorithm’s impact must be taken into account, not just the ones that affect its technical performance from a statistics or math perspective. Only when technical performance is part of an integrated solution do algorithms become trustworthy and meaningful.
‘Just like the engine of a car or airplane, an algorithm should be engineered as part of an integrated solution, not as a stand-alone mathematical equation.’
Engineering algorithms from both an impact perspective and within their larger business context means that living up to even the highest technical performance standards is not enough. Trust-enhancing algorithms must also ensure that their outcome and impact are aligned with relevant legal frameworks and with generally accepted human and social standards in society. Let’s explore these three conditions for impact-oriented algorithms in more detail.
Next level technical performance
In the 2020s we must bring the technical performance of algorithms to the next level. Current state-of-the-art algorithms, based predominantly on associations and correlations, are great at automating certain tasks. For engineering impact-oriented algorithms, however, we demand more and better: their technical performance is only as good as the impact the algorithm creates in its larger business context.
Our point of reference is the way new products or services are designed, produced and used in, for instance, pharmaceuticals, transportation or consumer electronics. Best practices in these markets reveal near-scientific methods of integrated engineering: peer review by outside specialists, extensive usage trials in real-world settings, testing on different data sets, periodic updates to avoid self-fulfilling data bias, and open innovation agendas that address current technical or industry pitfalls, to name just a few.
To fight the trust crisis we must apply these methods to how we engineer impact-oriented algorithms. This will improve not only their technical performance, but also the extent to which they create impact and can be embedded within a specific business context.
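One of the engineering practices mentioned above, testing an algorithm on different data sets, can be made concrete with a minimal sketch. The code below evaluates a model’s accuracy per subgroup rather than only in aggregate; the records, group labels and predictions are hypothetical illustrations, not real data:

```python
# Minimal sketch: checking a model's accuracy per subgroup, one way to
# "test on different data sets". All records below are hypothetical.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(pred == label for pred, label in pairs) / len(pairs)

def subgroup_report(records):
    """records: list of (group, prediction, label) tuples.
    Returns a dict mapping each group to its accuracy."""
    by_group = {}
    for group, pred, label in records:
        by_group.setdefault(group, []).append((pred, label))
    return {g: accuracy(pairs) for g, pairs in by_group.items()}

# Hypothetical evaluation set with two subgroups, A and B.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]
report = subgroup_report(records)   # {"A": 0.75, "B": 0.5}
gap = max(report.values()) - min(report.values())
```

A large gap between subgroup accuracies flags a potential bias problem that a single aggregate metric would hide, exactly the kind of finding that periodic updates and external peer review are meant to surface.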
‘For the technical performance of impact-oriented algorithms, accountability and reliability are trust-enforcing assets that take time and require dedicated financial investments.’
Compliance with rules and regulations
Trustworthy algorithms simply don’t go along with facilitating discrimination, enabling unfair competition, neglecting privacy or violating ownership and IP-related rights. In the 2020s we must therefore ensure that an algorithm’s outcome and impact comply with relevant rules and regulations. Again, think about (new) medication, cars or your latest phone: their producers incorporate the relevant parts of legislative systems throughout the entire design-produce-use cycle. For high-impact algorithms we demand nothing less.
‘Engineering algorithms in accordance with relevant regulatory criteria enlarges the extent to which we can trust their performance and accept the impact they have.’
However, this is sometimes easier said than done. In fact, there is a growing number of cases in which we sadly have to acknowledge that existing regulatory frameworks are (quickly becoming) outdated. In these situations, taking legal shortcuts is a no-go: even though it might be tempting as a way to boost an algorithm’s performance, it is unacceptable. Building trust means behaving in accordance with legal principles even when they are insufficient or don’t yet exist.
Alignment with human and social standards
Engineering in accordance with what we consider acceptable human or social (behavioural) standards in society is the third way to make impact-oriented algorithms more trustworthy. Integrated algorithms should be designed and applied according to the same criteria, and carry the same social obligations, that apply to us, humans of flesh and blood. Only when we accept our human responsibility can we ensure that their outcome and impact are socially acceptable.
‘To make algorithms more trustworthy, the basic idea is to recognize that they are instruments engineered by humans who, in doing so, apply human values and use human intelligence.’
So when almost every organization today, start-up or corporate, private or public, claims to be socially responsible and driven by values and purpose, this should also apply to how the algorithms within a company’s corporate digital twin are designed, built and used. How does that work? First, we need to be open and explicit about the impact objectives and about the values underlying the human behavioural rules that are translated into code. Second, we must be open and responsive about how an algorithm’s performance translates into outcome and impact.
‘If we can’t understand or explain how an algorithm works, we shouldn’t use it in situations where it can have a serious impact on our personal lives or on society at large.’
But saying goodbye to the outdated black-box principle and having algorithms audited by an external authority or expert court is not enough. Behaving in accordance with accepted human or social standards also includes investing in education and training programs that increase general knowledge of algorithms. As we have only just started to use high-impact algorithms, we should be honest about their (current) limitations and the next-step investments they require.