When the Government of the United Kingdom announced ambitious plans for digital identity last year, it marked a transition under way in countries around the world as they adapt to an increasingly digital and global society. By 2018, The World Bank was reporting that over 80 percent of countries with a national identity program had a digital element: digital identity as a foundational public service has evolved well beyond the initial phases, in which trailblazing examples such as Estonia and India pioneered new possibilities. Initially driven by the requirements of private enterprise looking to manage rights and access to systems, storefronts and platforms, digital identity solutions now serve as a function of national infrastructure. The implications are significant.
The goals and scope of systems serving public sector ambitions are broad: they facilitate services and social benefits, and citizens’ rights and obligations. They also underpin economic development, improved social inclusion and prosperity. As such, the costs of a system’s failure or misuse are growing, alongside the incentives to commit fraud against, breach or manipulate these systems in ways that could lead to that failure.
Understanding the risks
There are many recognized risks with data-oriented systems. Those specific to national identity systems, however, have yet to be well defined. Systems must not only live up to requirements for resilience but must also motivate widespread adoption to achieve their aims, and, most importantly, should not introduce risks for, or limit, the individuals and communities they are being set up to serve. It follows that an emphasis on understanding the social impact of the transition to digital technologies should inform the design of the systems that go into public service.
Such an emphasis speaks to the varied economic and social contexts in which digital identity systems are now being deployed. There is growing scrutiny of differing and evolving cultural attitudes toward privacy, for example, and many ethical debates around the technical possibilities for the management and use of identity data, the linking and analysis of records, and the potential to exacerbate inequalities or introduce complexities that present barriers for people.
In the last 12 months, Covid-19 has both demonstrated and intensified the global reliance on digital identity systems to underpin economies and social support. It has also increasingly shaped the public conversation. As testing, contact tracing and now Covid-19 vaccinations continue to play a significant role in managing the pandemic, they become a growing source of sensitive data that has the potential to be linked with an individual. How such data is managed, and when it is linked to identity and put to use, requires careful consideration. The release of such information, or its inadvertent association with an individual, must be balanced against the threat to their privacy and the attendant risks of discrimination and exclusion. The design of the identity systems themselves plays a role in the exposure to, and impact of, such risks. A simple but important question to consider is: how much information should an identity system collect? Eligibility, permissions and rights of access do not require the release of data: they require a trusted confirmation of a claim, as the sketch below illustrates.
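To make that distinction concrete, the following is a minimal sketch in Python, using the cryptography library, of an eligibility check that rests on a signed claim rather than on the underlying record. The issuer, the claim format and the helper names (attest, verify_claim) are illustrative assumptions, not features of any particular national system.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical issuer (e.g., a health authority): it alone holds the
# signing key; relying services hold only the corresponding public key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

def attest(claim: str) -> bytes:
    """Issuer signs a minimal yes/no claim, not the record behind it."""
    return issuer_key.sign(claim.encode())

def verify_claim(claim: str, signature: bytes) -> bool:
    """Relying service confirms the claim came from the issuer; no data is released."""
    try:
        issuer_public_key.verify(signature, claim.encode())
        return True
    except InvalidSignature:
        return False

# The service learns only that the subject is eligible; the test result,
# date of birth or address that informed the decision never leave the issuer.
token = attest("subject:4711 eligible:true")
assert verify_claim("subject:4711 eligible:true", token)
assert not verify_claim("subject:4711 eligible:false", token)
```

A production credential scheme would also bind the claim to its holder, add expiry and replay protection, and could use zero-knowledge techniques to prove a predicate without revealing even the issuer’s exact statement; the sketch shows only the shape of the trust relationship.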
Openness and transparency
Today, numerous approaches to foundational national digital identity systems are supported by complex ecosystems of service providers, data stores, networks and interfaces with services, the latter expanding with government ambitions to underpin businesses as well as public services. As these systems facilitate new opportunities for managing efficiencies and governing for public good, they are prompting growing demands to also facilitate transparency of purpose, access and use of data, and user control. Currently the individual users and organizations relying on these systems must make trust assumptions that reflect a reliance on all parties, including governments, to be lawful, competent, honest and transparent in their management.
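One way demands for transparency of access could be met in practice is with an append-only, tamper-evident log of access events, in the spirit of certificate-transparency logs. The sketch below, a simplified hash chain in Python, is an illustrative assumption about how access to identity data might be made auditable; it is not drawn from any deployed system.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class LogEntry:
    """One auditable access event, chained to everything before it."""
    event: dict      # e.g. {"actor": "...", "record": "...", "purpose": "..."}
    prev_hash: str   # hash of the previous entry; "0" * 64 for the first
    entry_hash: str

def append(log: list[LogEntry], event: dict) -> None:
    """Add an event, binding it to the hash of the entry before it."""
    prev = log[-1].entry_hash if log else "0" * 64
    digest = hashlib.sha256(
        (prev + json.dumps(event, sort_keys=True)).encode()
    ).hexdigest()
    log.append(LogEntry(event, prev, digest))

def verify_chain(log: list[LogEntry]) -> bool:
    """Anyone holding the log can detect deletion or alteration of entries."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            (prev + json.dumps(entry.event, sort_keys=True)).encode()
        ).hexdigest()
        if entry.prev_hash != prev or entry.entry_hash != expected:
            return False
        prev = entry.entry_hash
    return True

log: list[LogEntry] = []
append(log, {"actor": "benefits-service", "record": "id-4711", "purpose": "eligibility"})
append(log, {"actor": "tax-service", "record": "id-4711", "purpose": "audit"})
assert verify_chain(log)
```

Publishing the hash of the latest entry commits the operator to the log’s history: any later deletion or rewriting of an event would be detectable by anyone re-running verify_chain.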
The collective body of research in this field is advancing rapidly, creating new opportunities to reduce the reliance on such trust assumptions. Data science, for example, is advancing new techniques for encryption and privacy-enhancing data management, and refining algorithms that can monitor, learn from and react to anomalies. There is also growing influence from open-source software projects such as MOSIP, a foundational identity platform, which works to advance interoperability and transparency around the functionality and purpose being developed within these systems.
As these technologies advance, they are helping to inform politically sensitive debates, including fundamental questions around whether a system’s architecture should be centralized, federated through individual services or decentralized. Many advocate that decentralized systems are the most secure and that individuals would be best protected by curating their identity data within ‘self-sovereign’ identity systems. Assumptions are also being challenged: data vulnerabilities can be introduced through the devices used to access self-sovereign systems, and a central system can be designed to behave as a decentralized one, and vice versa. Projects on widely used open collaborative development platforms, such as GitHub, are often regarded as inherently insecure, but they can attract the attention of many eyes, offering unrivalled levels of both expert and public scrutiny. By contrast, strategies to protect the protocols and methods of a system by limiting access within closed or proprietary environments are now regularly challenged by researchers, bug bounty hunters and the hackers that continue to breach them. Making security protocols and systems openly available for scrutiny and improvement can mitigate the likelihood of failure and may well come to be regarded as vital.
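The point that topology and trust can be separated can be made concrete: where verification depends only on the issuer’s signature, a relying party’s code is identical whether the credential sits in a central registry or in a wallet on the user’s device. The sketch below reuses the hypothetical verify_claim helper and token from the earlier example; the registry and wallet classes are stand-ins assumed purely for illustration.

```python
from typing import Protocol

class CredentialSource(Protocol):
    """Anything that can produce a (claim, signature) pair."""
    def fetch(self, subject: str) -> tuple[str, bytes]: ...

class CentralRegistry:
    """Centralized topology: credentials live in an operator's database."""
    def __init__(self, db: dict[str, tuple[str, bytes]]):
        self._db = db
    def fetch(self, subject: str) -> tuple[str, bytes]:
        return self._db[subject]

class DeviceWallet:
    """Decentralized topology: the credential lives with the user."""
    def __init__(self, claim: str, signature: bytes):
        self._credential = (claim, signature)
    def fetch(self, subject: str) -> tuple[str, bytes]:
        return self._credential

def is_eligible(source: CredentialSource, subject: str) -> bool:
    # Trust is anchored in the issuer's signature (verify_claim, defined
    # earlier), not in which topology stored or transported the credential.
    claim, signature = source.fetch(subject)
    return verify_claim(claim, signature) and claim.endswith("eligible:true")

# Usage: both sources verify through the same code path.
registry = CentralRegistry({"subject:4711": ("subject:4711 eligible:true", token)})
wallet = DeviceWallet("subject:4711 eligible:true", token)
assert is_eligible(registry, "subject:4711")
assert is_eligible(wallet, "subject:4711")
```

The real differences between the architectures then surface elsewhere, in key management, availability and device integrity, which is exactly where the vulnerabilities noted above re-enter.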
As nations embrace digital identity, these systems will have to be thoughtfully designed to be trustworthy, informed not just by an appreciation of the risks to the systems themselves, but also of the risks to the individuals and communities that rely on them. Often, people speak of the need to develop trustworthy identity systems with a focus on the technical security of the systems and data. The considerations are far more complex. Engendering trust in systems that underpin so much of society calls for a solid understanding of the impact their introduction will have on an individual’s access to the life-sustaining resources and services they support. It’s a challenge that is inspiring a vibrant area of research and development into the technical choices and technologies that will underpin countries and economies well into the future.
Professor Mark Briers, Director of Defence and Security program, The Alan Turing Institute