A Human-Centric Trust Model for the Internet of Things

Intertrust Technologies Corporation

- Last Updated: November 25, 2024


For IoT security to be successful, humans must trust the security, safety, and privacy of this massive transformation of the world. Most importantly, “ordinary people,” whether they are consumers or workers, must be able to safely, reliably, and intuitively interact with vast, complex, interconnected systems of IoT devices.

It can be overwhelming to think about all the ways individuals and society can be damaged by the haphazard engineering of systems that merge the physical and digital worlds.

Technologists have done a terrible job with security technology so far, yet now we are about to impose those failures onto the physical world on a scale that only ubiquitous, pervasive computing and connectivity can accomplish.

Continuing the status quo is unsustainable.

The Internet of Things can be thought of as a hyper-connected, hyper-distributed collection of resources. The complex ecosystem surrounding IoT devices means trusting them will not be intuitive. These connected devices can potentially be controlled and observed by others anywhere on the planet.

For example, before IoT, it was always easy to physically check the locks on your doors and decide to trust those who had the keys. Now, with Internet-connected "smart locks," you can check or alter their state from anywhere.

How can an “ordinary person” track who has the electronic key and discern that the software controlling the lock is secure and resistant to hacker attacks? A February 2017 survey of IoT consumers showed that 72% were not sure how to check if their devices had been compromised.

Whether for home automation devices or industrial devices, technologists have a responsibility to provide people with intuitive, simple methods to accurately discern which devices and services can be relied on, and which threats they should rationally worry about.

This poses the question, “How can we get back to a place of relative simplicity of function, where the average user has a reasonable understanding of the integrity of their connected devices?”

The Need for a Human-Centric IoT Trust Model

There currently isn't an effective and widely adopted trust model to guide IoT device designers and service providers. It is fair to say that designers today haphazardly add device connectivity, remote control, and other IoT features to devices, while leaving the user with risks that are hard to understand and manage.

An effective trust model will clarify device providers' and service providers' responsibilities and point to ways in which we can ensure that people can use IoT devices with little worry.

Currently, there isn't a reliable and complete inventory of threats for the Internet of Things, nor have the threats that have been identified been properly prioritized.

As an example, a relatively new threat, ransomware, has burst onto the scene over the past few years. In the context of IoT, it should be fairly high in priority. A new trust model that takes this into account is needed to underpin the means for mitigating the associated risks.

What is a Trust Model and How Can it be “Human-Centric?”

The word "trust" in this context means reliance. A trust model shows how each entity in an ecosystem relies (or could rely) on another. And "human-centric" in this context means a trust model aimed at placing effective administration of security in the hands not of computing professionals, but of average users.

With such a trust model one can ask questions like:

  • How can IoT devices be relied on to defend against viruses?
  • If I delegate access to my home sensor information to my power utility, what can they do with the information and how is it protected?

A human-centric trust model can help developers determine things such as:

  • Who and what can I rely on for protection?
  • When I give others access to my devices or information from their sensors, how can I rely on them?
  • How can I limit the ability of others to use those devices?

Scaling a Human-Centric IoT Trust Model

What are the components of this new IoT trust model? The most obvious challenge here is scale. We need to address billions of devices, each containing multiple sensors and controls (sometimes dozens or more per device).

Two things come to mind when dealing with such massive scale:

  1. A scalable trust model needs to place a lot of responsibility on device and application self-defense and provide for distributed security administration.
  2. We cannot rely on network security techniques since they subject an ecosystem to weak-link vulnerabilities. Once any network is penetrated, the attack can work its way to multiple networks by exploiting devices that overlap with other networks.

Another property of an IoT model that helps deal with massive scale is the use of services and distributed applications that help individuals visualize and easily administer security for devices.

For example, a homeowner or factory manager could subscribe to specialized, cloud-based services that scan sensors in their networks for anomalies or behavior signatures that indicate illicit behavior. It would also be necessary to consider how to make this information accessible and comprehensible to the average user or worker.
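
As a rough illustration of what such a service might surface, here is a minimal sketch in Python. The sensor name, the traffic figures, and the three-standard-deviation threshold are invented for this example; a real service would use far richer behavioral models.

```python
# Hypothetical sketch: a cloud service flagging anomalous sensor behavior
# and explaining it in language an ordinary user can act on.
from statistics import mean, stdev

def check_sensor(name: str, recent_rates: list[float],
                 baseline_rates: list[float]) -> str | None:
    """Flag the sensor if its recent traffic is far outside its own baseline."""
    baseline_avg = mean(baseline_rates)
    baseline_sd = stdev(baseline_rates)
    recent_avg = mean(recent_rates)
    if abs(recent_avg - baseline_avg) > 3 * baseline_sd:
        return (f"Your '{name}' is sending {recent_avg:.0f} messages/hour, "
                f"far above its usual {baseline_avg:.0f}. "
                f"This can indicate a compromise; consider taking it offline.")
    return None

# Example: a porch camera that suddenly starts chattering.
alert = check_sensor("porch camera",
                     recent_rates=[950, 1020, 980],
                     baseline_rates=[60, 55, 58, 62, 59, 61])
if alert:
    print(alert)
```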

If a device is "IoT enabled" by merely adding a generic computation and communications stack with a generic operating system that enables arbitrary applications and device interactions, then you are at risk of security problems, even with so-called simple devices.

However, if the system design is guided by a trust model for governing interactions and functionality, then designers can more easily keep things simple and limit risks.

The trust model can also call for new features to be safely added when a need is identified, rather than loading a device up front with potentially exploitable features. In addition, devices can be asked to implement a relatively simple reference monitor that accepts commands only from devices on a very limited network or from a limited number of other devices.

More generally, IoT device designers should keep functionality limited and explicitly enable new features only after fully vetting the inherent security risks.

What Would an IoT Trust Model Look Like?

This article won’t prescribe a detailed plan for a trust model. But, it makes sense to enumerate some of the components of a trust model that address some of the unique challenges for the IoT. Below are 7 points that will help identify various components of such a model.

1. Devices and Hosted Applications

When I bring an IoT device into my environment, what aspects can I rely on for security, safety, and privacy? What are the intrinsic properties and capabilities of the device that make it trustworthy?

2. Resources

An IoT device can have various resources made available to a number of entities through the Internet. They might consist of device controls and state information, as well as streams of information from connected sensors and computation capabilities.

How do I know what those resources are and who has access to them? How do I govern access to the device?
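
As a concrete (and purely illustrative) sketch, a device could keep a small registry of its resources and of the principals allowed to observe or actuate each one. The resource names, the "homeowner" principal, and the permission scheme below are assumptions made for this example, not any existing IoT standard.

```python
# Hypothetical sketch: a device-side registry of resources and who may use them.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str                                       # e.g. "lock.bolt"
    kind: str                                       # "control", "state", or "sensor-stream"
    readers: set = field(default_factory=set)       # principals allowed to observe
    controllers: set = field(default_factory=set)   # principals allowed to actuate

class DeviceResourceRegistry:
    def __init__(self, owner: str):
        self.owner = owner
        self.resources: dict[str, Resource] = {}

    def register(self, resource: Resource) -> None:
        # The owner can always read and control every resource on the device.
        resource.readers.add(self.owner)
        resource.controllers.add(self.owner)
        self.resources[resource.name] = resource

    def who_has_access(self, name: str) -> dict:
        # Answers "what resources are there and who can reach them?"
        r = self.resources[name]
        return {"readers": sorted(r.readers), "controllers": sorted(r.controllers)}

# Example: a smart lock exposing its bolt control and a door sensor stream.
registry = DeviceResourceRegistry(owner="homeowner")
registry.register(Resource("lock.bolt", "control"))
registry.register(Resource("door.open-events", "sensor-stream"))
print(registry.who_has_access("lock.bolt"))
```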

3. Trusted Attributes

Consider this context: if I give a youngster access to some home automation capabilities, I might want to be reminded that this access includes a hot water temperature control that the developer does not consider child safe.

Sensor data might also carry attributes. Some data may be sensitive (such as motion data with time-stamped GPS coordinates), and derivatives of that data might be claimed to be anonymized.

How can such data be reliably labeled? How can proper usage of labels be ensured?

Classification and labeling can be complex and carry liability implications, but they must be addressed as part of an IoT trust model.
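
One way this could work, sketched below in illustrative Python, is to attach machine-readable labels to data as it leaves a sensor and to check a recipient's clearances against those labels before release. The label and clearance names are invented for the example.

```python
# Hypothetical sketch: tagging sensor data with trusted attributes and
# refusing to release it to a recipient whose clearances don't match.
from dataclasses import dataclass

@dataclass(frozen=True)
class LabeledData:
    payload: dict
    labels: frozenset          # e.g. {"location", "sensitive"}

def release(data: LabeledData, recipient_clearances: set) -> dict:
    """Only hand over data whose every label the recipient is cleared for."""
    missing = data.labels - frozenset(recipient_clearances)
    if missing:
        raise PermissionError(f"recipient lacks clearance for labels: {sorted(missing)}")
    return data.payload

# A time-stamped GPS trace is labeled as sensitive location data.
gps_trace = LabeledData(
    payload={"lat": 37.77, "lon": -122.42, "timestamp": "2017-02-01T08:30:00Z"},
    labels=frozenset({"location", "sensitive"}),
)

# A service cleared only for anonymized derivatives is refused the raw trace.
try:
    release(gps_trace, recipient_clearances={"anonymized-derivative"})
except PermissionError as err:
    print(err)
```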

4. Delegating Trust

When I bring a device home, I claim it as mine, perhaps with some straightforward gesture. Only I can control it and be privy to the data it collects.

But, if I want to give others access to it, how can that be done reliably and with full understanding of the implications?

5. Virtual Composite Devices

These human-centered difficulties need to be considered in IoT trust models because physical devices can be virtualized and/or become parts of virtual composite devices, whose components may interact.

In home automation, such composite devices may be called “scenes” where multiple devices cooperate to perform a certain household task. In an industrial or metropolitan context, composite virtual devices will be arbitrarily complex.
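
To make the idea concrete, here is a minimal, hypothetical sketch of a "scene" as a virtual composite device: granting someone the scene implicitly grants every command it contains, so the composite's trust footprint is the union of its components'. The device and command names are invented for this example.

```python
# Hypothetical sketch: a home-automation "scene" as a virtual composite device.

class Scene:
    def __init__(self, name: str):
        self.name = name
        self.steps: list[tuple[str, str]] = []   # (device, command) pairs

    def add_step(self, device: str, command: str) -> None:
        self.steps.append((device, command))

    def required_permissions(self) -> set:
        # The composite's trust footprint is the union of its components'.
        return {f"{device}:{command}" for device, command in self.steps}

movie_night = Scene("movie night")
movie_night.add_step("living-room-lights", "dim-to-20-percent")
movie_night.add_step("thermostat", "set-21C")
movie_night.add_step("front-door-lock", "lock")

# Before delegating the scene to a guest, surface everything it can do.
print(sorted(movie_night.required_permissions()))
```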

6. Automated Performance Aids

These are systems that can help us understand the implications of actions such as including something as a component of a virtual device or system, or delegating trust to some entity.

These will be an important part of a human-centric trust model that addresses both the scale and complexity of the evolving IoT.

7. Identity Management Systems

For these automated performance aids, as well as other IoT related systems, to properly function, the right device or group of devices and the right entities who are to be trusted need to be identified. This will require identity management systems that are vastly larger in scale and much more intuitive.

Here again, it is fair to say that the current inventory of identity management systems (such as username/password pairs, and X.509 and SAML certs) is woefully inadequate and rarely addresses many of the already known applications for identity.

While advances are being made in some aspects of identity management (notably biosensors), the territory that must be covered here is vast.

The Role of Security Associations and Reference Monitors

Trust models will have various layers. One layer will address the secure actuation of a trusted process. This layer will use the concept of a security association and will need to be made both reliable and intuitive.

Return to the smart lock example: suppose I want to give a friend access to my front door. One way (of many) this might be actuated is by causing an electronic key to be securely transmitted to both the lock and my friend's mobile phone. The lock will keep a security association between those keys and a permission to open the door.

Now my security association with the lock gives me the right to modify the security association table, but my friend’s security association with the lock does not. That is, I have delegation rights and she does not.

A reference monitor is typically implemented as a core (or kernel) process that checks each command against a list of security associations for permission to take an action or access a resource. Now, when my friend wants to open the door, the lock's reference monitor will evaluate her command, her use of the electronic key I gave her, and perhaps the identity of the device she used, if that is part of the security association. Much of this will usually be hidden from the user in a trust model layer.
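
A minimal sketch of that flow might look like the following. It models the lock's security association table as a mapping from key identifiers to permissions; the key names and the "open"/"delegate" permissions are assumptions made for illustration, not a real smart lock API.

```python
# Hypothetical sketch: a lock's reference monitor checking every command
# against its table of security associations, with delegation rights.

class LockReferenceMonitor:
    def __init__(self):
        # key id -> set of permissions in the security association
        self.associations: dict[str, set] = {}

    def add_association(self, requester_key: str, new_key: str, permissions: set) -> None:
        # Only a key holding the "delegate" permission may modify the table.
        if "delegate" not in self.associations.get(requester_key, set()):
            raise PermissionError("requester has no delegation rights")
        self.associations[new_key] = permissions

    def evaluate(self, key: str, command: str) -> bool:
        # Every command is checked against the key's security association.
        return command in self.associations.get(key, set())

lock = LockReferenceMonitor()
lock.associations["owner-key"] = {"open", "delegate"}       # installed when I claim the lock
lock.add_association("owner-key", "friend-key", {"open"})   # my friend's key: open only

print(lock.evaluate("friend-key", "open"))                   # True: she can open the door
try:
    lock.add_association("friend-key", "sitter-key", {"open"})
except PermissionError as err:
    print(err)                                               # she cannot delegate further
```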

Yet another part of an IoT trust model will be the concept of a secure update process. This is an area that has seen some success, at least in some contexts. That’s good, because the need to fix things that can potentially go wrong will surely be great as we integrate the physical world with the cyberworld. Again, the scale of IoT and its multitude of contexts will be challenging.

Communications security hasn't been covered in this article, and, as alluded to above, we may not want to include comsec processes as an intrinsic aspect of a trust model.

Sometimes they will be part of the security actuation layer. But given the overall context of IoT and the myriad communications processes that may be both intrinsic and extrinsic to devices and systems of devices, an effective trust model will generally have to be actuated at the device and application layer, without requiring isolation from communication processes.

The Inherent Limitations of Models

The final point to be made regarding IoT trust models is that a model is not reality, nor is it even virtual reality. But humans can use the models for both the design and use of IoT devices and systems, and for understanding how they can be projected usefully into everyday contexts.

There is a lot to do to scale the modeling process and properly connect it to the human experience. This may include standard names and references that people can understand unambiguously, and universal design paradigms that allow people with different capabilities to interact with the IoT conveniently and safely.

For now at least, technology communities can begin working together to model how the attributes of safety, security, and privacy can be assured without placing an undue burden on people. We need to make it simple for humans of all capabilities to properly implement IoT security.

If not, we run the risk that the infrastructure of simple things we increasingly rely on will continue to fail on an ever-expanding scale.

Written by David P. Maher and adapted from his original post on Oreilly.com.
