5 ways that Digital Twins could destroy society

Ian G
Jul 30, 2020

2020 has taught us to re-evaluate both ‘normality’ and our expectations of the future. To ask whether it is safe, sustainable, or even fair.

Our expectations of a faster, cheaper, more gratifying future have long belied the trade-offs that we make in terms of our time, our autonomy, and our moral calculus. Disruption and innovation cannot exist in isolation; they need both purpose and ethics.

As we consider the potential of the Digital Twin to alter how we ‘normally’ experience, use, and benefit from the infrastructure and the built environment around us, so we must consider the long-term consequences and unseen changes that this collection of technologies will manifest in our lives and our society. Let’s take this moment to think about what might go wrong, whilst we still have time to troubleshoot.

Part 0: PreFace-book

Let’s start with Facebook. Like some of you, I signed up for Facebook back when you needed an academic email to register and its users numbered in the single digit millions, not billions. My original account was deleted in ~2007 because I was marketing my alt-folk club night a bit too aggressively. But I quickly created a new account, how else was I going to keep in touch with my old high school friends?

Me, on Facebook in 2008 (yes, I was always this cool).

At the time we all had plenty of reasons to be optimistic about this new form of social media. Sure, I missed the chaos of Myspace (remember when you could edit HTML and CSS on social media?), but the structure that Facebook brought to building and curating your network and interests meant that, come what may, we’d always have someone to connect to. Perhaps it was the end of loneliness? Or the start of a new global, digital consensus?

Sat in a Dubai hotel room on a work trip in the winter of 2011, watching the beginnings of the Arab Spring on Al Jazeera, it felt like Facebook was the beating heart of liberty, fraternity, equality…

You don’t need me to tell you what happened next, but I will summarise anyway. Over the last decade Facebook has successfully monetised and amplified the worst of human nature. It has manipulated our attention, our psychology, and our insecurities, leaving us needy, vulnerable, and more alone than ever. Even as it has undermined our self-confidence, it has provided a platform where extremists can prey on our worst inclinations, rogue states can manipulate our elections, and populations can be turned against minorities (see the Rohingya genocide). I can’t remember the last time I logged on to enjoy the ‘social’ part of social media, but I know I’m happier without the app on my phone. You would be too.

Mark Zuckerberg never designed Facebook with this outcome in mind, but it was nonetheless the natural outcome of the platform that he helped create. As Adam Greenfield writes in ‘Radical Technologies: the design of everyday life’:

It doesn’t matter whether some technology was intended by its designer to enslave or to liberate, to preserve or to destroy. All that matters is what it is observed to do, and we ought to evaluate it on that basis alone.

You: Are you getting to the Digital Twin thing, Ian?

Me: Yeah, hold up, I’m nearly there.

It is obvious in retrospect, particularly when technology becomes as transparently harmful and amoral as Facebook, but we rarely think about the consequences of the technology that we unleash upon the world until long after the warning signs appear. In the moment, techno-utopianism is the default. Every company is a tech company now, and everyone’s got something revolutionary to sell.

It feels presumptuous, perhaps foolhardy, to try to predict what could go wrong in the early days of a technology. After all, the tech is nascent and failure is common; few people are worried about the long-term consequences of the Segway or the Juicero because, frankly, there won’t be any.

But logically, it’s best to think about things early in their development, so that we can try to address some of the negative consequences of our innovations while it is still cheap and easy to do so! Especially if it becomes apparent that a concept has legs. Digital Twins, in infrastructure at least, may come to naught. Perhaps in a few years the whole thing will feel like a naive, misguided idea. But perhaps not. Perhaps they will become more important than we could ever have imagined. So let’s indulge in a bit of a pre-mortem, while we still can, for the Digital Twin. If, at some point, we learn to regret ever having created these connected digital representations of the human world and how it operates, why will that be?

You: But wait, I thought Digital Twin was just fancy BIM, how can that be dangerous?

Me: The key difference is the focus on the operation of the infrastructure, not its design. That is to say, not how the infrastructure exists in pristine isolation, but how it interacts with our human world…

Part 1: building the panopticon

At the centre of the panopticon, everything and everyone is visible at all times.

For today’s purposes a Digital Twin is a digital representation of a physical thing that one can query (a definition that I will admit to having stolen from the Digital Twin Fan Club podcast guests Dan Rossiter and Erwin Frank-Schulz).
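That working definition (a digital representation of a physical thing that one can query) can be made concrete with a toy sketch. Everything below is hypothetical for illustration; it is not any vendor’s API:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """A toy 'digital representation of a physical thing that one can query'."""
    asset_id: str
    state: dict = field(default_factory=dict)

    def update(self, sensor_reading: dict) -> None:
        # Mirror the latest sensor readings from the physical asset.
        self.state.update(sensor_reading)

    def query(self, attribute: str):
        # Ask the twin a question about the asset's current state.
        return self.state.get(attribute)

# Usage: a twin of a (hypothetical) valve on the water network.
valve = DigitalTwin("valve-117")
valve.update({"position": "open", "flow_lps": 42.0})
print(valve.query("flow_lps"))  # prints 42.0
```

The interesting part is not the class itself but the contract: the twin is only as truthful as the sensor feed behind it, and every answer to `query` inherits that feed’s gaps and biases.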

A National Digital Twin is, to quote the CDBB:

[An] ecosystem of connected digital twins which can enable system optimisation and planning across sectors and organisations.

This means that the remit of Digital Twins stretches from a model of component assets that are part of a system or network (for example, a substation on the electricity grid, a bridge on a transport network, or a valve on the water network) all the way up to a model of the network of networks (e.g. how the electricity grid powers the transport network that water engineers use to get to site). For the purposes of societal impact, Digital Twins are most interesting when they operate at the scale of society. This is obviously the most advanced form of Digital Twin, and requires twins of all the component parts to function properly. It is also the manifestation of Digital Twins that demands a maximalist approach to inter-connectivity. As Digital Twins evolve along this continuum, from asset to network to network of networks, they will start to cross the threshold where they materially impact the lives of normal, everyday human beings like you and me. Moreover, as inter-connectivity increases, so does the likelihood of bugs or biases in one Digital Twin influencing the behaviour (and failure modes) of other twins.

As the CDBB write in their white paper ‘Flourishing Systems’

Infrastructure and society are becoming more connected at an ever-faster pace so risks of failure can cascade faster and wider than ever before.
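The cascade risk the CDBB describes can be illustrated with a toy model: a fault in one twin propagates along declared dependencies to everything downstream. The assets and links here are invented for illustration:

```python
# Hypothetical dependency graph between connected twins:
# a key maps to the twins that depend on it.
DEPENDENCIES = {
    "electricity-substation": ["rail-signalling", "water-pumping"],
    "rail-signalling": ["commuter-services"],
    "water-pumping": [],
    "commuter-services": [],
}

def cascade(initial_failure: str) -> set:
    """Return every twin reachable from the initial failure."""
    failed, frontier = set(), [initial_failure]
    while frontier:
        node = frontier.pop()
        if node not in failed:
            failed.add(node)
            frontier.extend(DEPENDENCIES.get(node, []))
    return failed

print(cascade("electricity-substation"))
```

A failure at the substation reaches all four twins, while a failure at the pumping station stays local: the more densely connected the graph, the further and faster a single bug or bias travels.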

As humans, our relationship to Digital Twins is mediated by the infrastructure that we interact with, and the organisation(s) that manage that infrastructure.

As members of the general public, we may never interact directly with a Digital Twin. Instead, Digital Twins will be used to regulate our experience of the world around us. In the background, infrastructure owners will use Digital Twins to inform their decisions (e.g. where to invest, what to build, how to build, when to build, how to maintain their assets, etc.) but things will only get truly interesting once those decisions start to impact us as normal human beings (the centre of the Venn diagram above). This might be something as simple as whether or not our train is delayed in the morning, or something as pivotal as what infrastructure we have access to, and how we are treated in the event of calamity or disruption.

Crucially, we will not be aware of the decisions that the Digital Twin is making, or of their implications for our welfare. We may not even be aware that decisions are being made; it is easy to perceive things like train schedules and road closures as simply the cold, heartless functions of an inhuman ‘system’. Nor, given the complexities of the interaction between infrastructure and society, will it always (or perhaps ever) be possible to prove that the outcome of the Digital Twin unduly helps or hinders a particular group. At the scale that the Digital Twin operates, there will rarely be a control group.

As Adam Greenfield writes:

Human discretion is no longer adequate to the challenges of complexity presented to us by a world that seems to have absconded from our understanding.

For this reason it is crucial that the ethics of networked Digital Twins are discussed in advance of their creation and in the abstract, because once in operation much of their logic will be invisible to their creators. Without considerable effort to the contrary, it is most likely that a world run by Digital Twins is one that perpetuates rather than dismantles existing power structures and patterns of inequality. We have, in many ways, already used the internet to build a panopticon; but with IoT and the Digital Twin we give the prison guards of the panopticon unprecedented agency over the physical world.

Part 2: reinforcing power through obfuscation

Digital Twins have become possible by virtue of the availability of cheap, connected, scalable computing (e.g. The Cloud). 'Cloud' is of course a metaphor, one that brings connotations of a light omnipresence, something gracefully drifting above the messy world of legacy IT. However, the metaphor of the cloud belies the very real application of capital required to build one. It is no accident that over the last decade we have seen a concentration of computing power in the hands of a few huge corporations (Amazon, Google, Microsoft), one that takes us back to the early days of computing and the mainframe competition between Remington Rand and IBM. Concentration of power, as concentration of computation, cannot help but reinforce existing power structures within society. As James Bridle writes in ‘New Dark Age’:

The cloud shapes itself to geographies of power and influence, and it serves to reinforce them. The cloud is a power relationship, and most people are not on top of it.

As we create Digital Twins we need to consider the value function that we are seeking to optimise. Any sort of real-world optimisation will have some winners and some losers (either relatively or absolutely). If we optimise for private-sector profitability we will have a different outcome compared to optimising for perceived social good. As we have seen with Facebook, a platform that seeks to maximise engagement to drive advertising revenues will surface very different content (and drive very different behaviours) compared to a platform with a different source of revenue (for example, government funding).

In his book ‘Platform Capitalism’ Nick Srnicek identifies five different types of platform, namely:

  1. advertising platforms (e.g. Google, Facebook)
  2. cloud platforms (e.g. Salesforce, AWS, Microsoft Azure, GCP, IBM)
  3. industrial platforms (e.g. GE, Siemens)
  4. product platforms (e.g. Rolls Royce, Spotify, Netflix)
  5. lean platforms (e.g. Uber, Airbnb)

At present it is an open question which type of platform will deliver the most effective basis for Digital Twins. It only seems logical in retrospect that our search engines are powered through advertising, our music and TV is a subscription product, and our holiday rentals are lean platforms built on millions of independent home-owners.

Whichever type of platform comes to dominate (and network effects suggest that one eventually will), the incentives that come with that platform type will shape how the twins function and what they seek to optimise for.

At the moment, the fully realised Digital Twin could potentially fit with each of Srnicek’s platform types. It could conceivably be:

  1. An advertising platform: perhaps as an incremental expansion of apps like Waze or Citymapper where, by sheer volume of users and sophistication of their Digital Twins, they can drive the administration of the nation’s infrastructure to meet their needs (and the needs of their users).
  2. A cloud platform: a fully-configured SaaS solution sold to infrastructure organisations that plugs into their existing ERP and EAM systems, runs on AWS and uses commodity machine learning services.
  3. An industrial platform: an extension or re-imagining of the huge SCADA control systems already used by infrastructure owners as Digital Twins augmented by ever greater quantities of IoT sensors connected to neural networks.
  4. A product platform: where infrastructure is purchased as a service, and the manufacturer of the roads or rails takes responsibility for keeping it running smoothly using a Digital Twin (in many ways this is just a smarter version of a PFI contract, or a rolling stock lease).
  5. A lean platform: mobility-as-a-service, where the focus of the sector moves from private vehicle ownership and public asset ownership, to a more hybrid model where the infrastructure and vehicles are owned by a hodgepodge of private companies who then rent it back by the mile to users and use Digital Twins to optimise their service.

Whilst these scenarios are all very different in structure, they all rely on the owners of vast quantities of capital investing in the development of Digital Twins to improve the efficient function of the overall system. This will, inevitably, deliver substantial power and profits to the owner of the platform (and thus the natural monopoly). Whether any of these models can be expected to operate in the public interest is questionable. This in turn raises the question of how and when the government should intervene to ensure that Digital Twins are used to further the public interest.

Part 3: Right now, in Xinjiang

Arguably the most sophisticated Digital Twin operating in the world today is the vast network of connected technology and physical infrastructure that the Chinese Communist Party has put in place to repress the Uighur minority. This is a government-funded Digital Twin optimised for the projection of power. The New York Times investigation revealed the Party’s application of big data, facial recognition software, and compulsory smartphone apps to track and imprison ~11 million Uighurs within ‘virtual fences’ (and at least 1 million in physical camps). As the Times writes, the government is investing billions in technology that:

“taps into networks of neighborhood informants; tracks individuals and analyzes their behavior; tries to anticipate potential crime, protest or violence; and then recommends which security forces to deploy.”

The government has created a Digital Twin, unprecedented in scope and function, where communities, cities, and even geography itself are distorted by technologies to become a digital panopticon. This is a reality for millions of people in our present day, where acts as simple as going to pray, switching off your phone, giving up smoking and drinking, growing a beard, or leaving your house by the back door are considered suspicious, subversive, and warranting ‘re-education’. It is also a reality that is only possible using connected technology. In the past this level of inhumanity has required the very visible creation and maintenance of massive prison systems, ones that are quite difficult to obfuscate or hide from international scrutiny. And while re-education camps are being built at scale in Xinjiang, their footprint pales in comparison to the creation of a virtual prison eight times the size of the United Kingdom.

Much of the technology that underpins these horrors was created in democratic countries, by well-meaning programmers, academics, and vendors. The horrors of Xinjiang teach us that we cannot simply build Digital Twins for use by the (relatively) benign institutions in our own countries. We must actively work to ensure that the same tools that help our transport operators to reduce delays, and our utility companies to improve levels of service, cannot be appropriated by others to deny fundamental freedoms to people on the other side of the world (or, indeed, to subtly profile and impair the opportunities of vulnerable populations in our own country). This is no easy task. But too often we assume that technology is inherently ethical simply because we admire (some of) the people who create it, or because we can think of admirable applications. In the real world this has proven, time and time again, to be a naive and dangerous conflation.

Part 4: the trolley problem ad nauseam

The intention behind Digital Twins isn’t just to understand the world, but to influence it and change outcomes. This is no academic exercise; it’s an attempt to build the world’s most complete control system, to create levers that fundamentally alter how our infrastructure behaves on a systemic basis. Inter-connectivity and control are fundamental features of the sales pitch for infrastructure Digital Twins, as the CDBB writes:

A system of digital twins that can communicate with each other securely will increase the performance and resilience of the built environment.

Creating these levers that allow infrastructure owners to influence the systemic behaviour of infrastructure is key to the value proposition underpinning investment in Digital Twins. But with levers come agency, and with agency comes accountability.

There’s a prominent lever in the well-known moral dilemma the ‘trolley problem’, best illustrated by The Good Place, where the choice to intervene (or not) in the behaviour of infrastructure is a matter of life and death. You can pull it, or not, and the consequences of your decision materially affect the outcome experienced by other human beings.

Is the Digital Twin improving matters here?

We have witnessed the outcomes of real-life trolley problems a lot recently. One pertinent example: the concentration of air pollution around densely packed, heavily used inner-city roads mirrors the concentration of vulnerable and minority populations in those same areas, with the implication that those populations will suffer more at the hands of COVID. These are the sort of long-tail outcomes where urban planners and local authorities would, in the past, have claimed plausible deniability (e.g. the world is a complicated place and we only control a small aspect of it; we didn’t set out to create inequality).

There is no plausible deniability in the Digital Twin world. You know the outcomes of your decisions as you make them.

Arguably this presents us with an opportunity. It’s better, after all, to know that you are playing the trolley game, than not. It is better to recognise the consequences of your actions than not. The proponents of Digital Twins (and I usually number amongst them) will argue that they will allow us to optimise the function of our infrastructure to maximise 'value’. The problem is that we as a society have never agreed what 'optimum' looks like, and the powers that wield Digital Twins are likely to weigh the variables very differently from vulnerable and disadvantaged groups. A Digital Twin of an urban transport network could quite easily optimise for traffic flow over the wellbeing of marginalised populations living next to increasingly optimised (which is to say, busy) thoroughfares.

In the abstract, I am arguing that to be safe a Digital Twin must reject utilitarianism; it must follow Rawls in rejecting:

“the justification of inequalities on the grounds that the disadvantages of those in one position are outweighed by the greater advantage of those in another position.”

We don’t want Digital Twins to deliver the ‘greatest good’ for the greatest number of people, we want Digital Twins that improve the outcomes for the least advantaged members of society. Is this feasible? Not in the way that we currently configure our infrastructure organisations. The KPIs of organisations such as Network Rail or Highways England are inherently utilitarian (e.g. minimise delay, maximise availability). A powerful Digital Twin calibrated with these KPIs as its reward function would be a dangerous thing indeed. Digital Twins, as a manifestation of AI, will follow the same cold logic that all algorithms follow. Applied to infrastructure that is designed to safely carry human life, these algorithms might interpret the old truism 'the trains would always be on time, were it not for the passengers’ a bit too literally.
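The difference between a utilitarian reward function and a Rawlsian (maximin) one can be sketched numerically. The options, population groups, and scores below are invented; the point is only that the two objectives can rank the very same choices differently:

```python
# Hypothetical service-level scores (0-100) for three population groups
# under two candidate configurations of a transport network.
OPTIONS = {
    "optimise-throughput": {"commuters": 95, "residents": 75, "vulnerable": 10},
    "protect-worst-off":   {"commuters": 65, "residents": 60, "vulnerable": 50},
}

def utilitarian(outcomes: dict) -> float:
    # 'Greatest good for the greatest number': maximise the average score.
    return sum(outcomes.values()) / len(outcomes)

def rawlsian(outcomes: dict) -> float:
    # Maximin: judge each option by its worst-off group.
    return min(outcomes.values())

best_util = max(OPTIONS, key=lambda k: utilitarian(OPTIONS[k]))
best_rawls = max(OPTIONS, key=lambda k: rawlsian(OPTIONS[k]))
print(best_util, best_rawls)  # prints: optimise-throughput protect-worst-off
```

A twin rewarded on the average (like a delay-minimisation KPI) picks the first option; a twin rewarded on the worst-off group picks the second. The choice of reward function, not the sophistication of the model, is what decides who bears the cost.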

Part 5: QAnon and the fear of the controlling mind

Thus far we have established that:

  1. Large parts of our internet are fundamentally broken, and harmful to us both individually and collectively.
  2. The part of the internet of things that we call the Digital Twin is a potentially transformative technology, but one that will often manifest as a black box that may tend to benefit entrenched interests and perpetuate existing societal imbalances in power, opportunity, and ultimately agency.
  3. The inter-connectivity demanded by a network of Digital Twins means that the failure and biases of individual components can cascade throughout that network.
  4. Related technology platforms are already being used to perpetrate terrible crimes in other nations, and it’s not hard to imagine similar malicious applications of Digital Twins.
  5. Even well-intentioned and morally justifiable use cases are likely to result in difficult choices and unintended repercussions for the people and organisations with their hands on the new levers that Digital Twins make available.

This is a lot to consider, even for those of us who try to apply some degree of rationality to how we consider the benefits and risks of technology. But few of us are truly rational at all times, and large parts of the populace have abandoned rationality in its entirety.

To be fair, having a general sense of unease about an increasingly interconnected world isn’t entirely irrational.

Digital Twins are probably a bit too far from the popular consciousness (for now at least) to yet become fodder for conspiracy theories. But don’t expect that to continue. Both science fiction and modern urban development have already primed us to accept that our built environment isn’t always passive, but can actually actively threaten us. The lessons from Grenfell show us that negligence and lack of care can be just as insidious as malice. If the outcomes of using Digital Twins to optimise infrastructure and the built environment are perceived (rightly or wrongly) as unfair or actively threatening to large parts of the population, then this may prompt a reaction to their use. This seems particularly likely with technology that potentially reduces the perceived ‘freedoms’ of individual people. If it’s hard enough to persuade people to wear masks in supermarkets, what chance do we stand of persuading them to let a black box decide how and when they can interact with the built environment?

Paranoid and conspiratorial thinking seeks to apply simple explanations to complicated phenomena. We live in a world where millions of people believe that vaccines cause autism, or that 5G networks cause COVID. We need to be able to proactively demonstrate the ethics and clear purpose of our Digital Twins if we want to avoid their becoming the next scapegoat.

Solutionising

Despite the occasional bleakness of this article, I believe in the opportunity for society presented by Digital Twins. The very fact of my worrying about the implications of Digital Twins is testament to the power that I expect this technology to wield.

So how do we create Digital Twins that do not perpetuate and exacerbate inequality?

It feels like we have two advantages that will help us to avoid repeating history:

Firstly, the benefit of hindsight. We have seen both the dramatic successes and failures of the internet, and understand the Digital Twin as an extension of this story (as beautifully expressed by Neil Thompson). We know now that the internet isn’t broken because someone set out to break it. Instead, it is a reflection of the embedded interests that bankroll it. We see in hindsight that the network effect was always going to drive the internet towards monolithic platforms, and that business models based around advertising would then give those platforms the incentive to monetise their only real product… us. We know that Digital Twins will likely face the same tendency towards economies of scale, and the same need to deliver 'profit’ (or at least ‘value’), and yet we have the chance now to set the rules of the game so that infrastructure does not become a product controlled by a few monolithic, inscrutable tech companies.

Secondly, in the infrastructure sector we have a nascent community of practice that isn’t yet hopelessly tied to platform capitalism. We will need the constant advocacy of bodies like the CDBB to ensure that we build open, collaborative, 'flourishing systems’. In their recent White Paper, ‘Flourishing Systems - Re-envisioning infrastructure as a platform for human flourishing’, the CDBB argues for creating a built environment with:

a focus on outcomes and human flourishing, because infrastructure provides essential services on which people and society depend.

This human-centric approach requires us to find a way to define, measure, and optimise for societal wellbeing (preferably using Rawls’ definitions). This means not using ubiquity or adoption as shorthand for societal good, but understanding the effect that the technology has on the whole populace, including those unaware of or unable to influence the decisions that Digital Twins are making on their behalf. The paper also recognises the need to balance connectivity against the risk of systemic failure, and sustainability against the desire for ever more development and consumption, both important trade-offs that are all too often glossed over in the rush to techno-utopianism.

The Digital Twin can and will be what we make of it. Infrastructure and the built environment, so often a technological backwater, can take advantage of the lessons from other sectors. Few people join this industry to get stinking rich; most of us believe in something bigger than ourselves. Let’s make sure that our Digital Twins do too.
