As adoption of Internet of Things (IoT) devices increases, so does the number of insecure IoT devices on the network. These devices represent an ever-increasing pool of computing and communications capacity open to misuse. They can be hijacked to spread malware, recruited to form botnets to attack other Internet users, and even used to attack critical national infrastructure or the structural functions of the Internet itself (we list several examples from recent headlines at the end of this post).
The problem this poses is what to do about IoT as a source of risk. This blog post includes reflections on events that came to light in recent weeks, sets out some thoughts about technical mitigations, and sketches out the boundaries of what we think can be done technically. Beyond those boundaries lie the realms of policy measures, which – while relevant to the big picture – are not the topic of this post.
Why are we exploring this issue now? Partly because of our current campaign to improve trust in consumer IoT devices.
And partly because of recent reports that, as a step towards mitigating this risk, connected devices will be subjected to active probing to detect whether they can still be accessed using default user IDs and passwords. Here is one such report, from the IEEE.
We believe active probing raises practical, privacy, and security risks that should either rule it out as an approach or, at the very least, mean that other, less risky options are always considered first.
Remote devices: control, ownership, and responsibility
Much of the power of a distributed denial-of-service (DDoS) attack comes from the ability to recruit devices all over the planet, regardless of the physical location of the attacker or, indeed, the target. One countermeasure is to make it harder for a malicious actor to gain remote control of an IoT device.
Gaining control of a device involves (or should involve) authenticating to it as an authorized user. IoT devices that either have no access control, or have access control based on a default password, have little or no protection against such a take-over. It is therefore often suggested that an early step towards securing connected devices is to ensure that users replace the default password with one that is hard to guess.
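To make concrete why an unchanged default password offers little or no protection, the sketch below shows the kind of automated default-credential check that botnet malware and security scanners alike perform. The credential pairs and the try_login() helper are hypothetical placeholders, not taken from any particular tool; real attacks do the same thing over Telnet, SSH, or a web login form.

```python
# Illustrative only: an automated check of a device against a short dictionary
# of factory-default credentials. The credential pairs and try_login() are
# hypothetical placeholders standing in for a protocol-specific login attempt.

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
    ("user", "user"),
]

def try_login(host: str, port: int, username: str, password: str) -> bool:
    """Placeholder: attempt to authenticate to the device and report success."""
    raise NotImplementedError("protocol-specific login attempt goes here")

def find_default_credentials(host: str, port: int = 23):
    """Return the first default credential pair the device still accepts, if any."""
    for username, password in DEFAULT_CREDENTIALS:
        if try_login(host, port, username, password):
            return (username, password)
    return None
```

A device that accepts any entry from such a list can be taken over with no skill required, which is why replacing the default password is so often the first recommendation.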
This step, though, is not without obstacles. Users are notoriously bad at choosing and changing passwords: they frequently pick trivial ones if they set a password at all, and sometimes do not even realize that they should set one in the first place.
Consumers’ behavior might also be based on an assumption that their devices are safe. They might assume, by default, that their Internet service provider (ISP) or connected-home solution provider is not supplying a device that puts them at risk through poor security – just as they might expect the device not to catch fire in normal use.
Multiple stakeholders, expectations and requirements
As you can see, we already have a problem whose solution may require action by more than one stakeholder:
- Device manufacturers need to design their products to require some form of access control, and to prompt the user to enable it
- Users need to have the awareness and discipline to use the access control mechanism and, where passwords are involved, to remember them and replace them when needed
- Users may, under some circumstances, assume that “someone else” is taking care of keeping their devices safe and secure
And all this has to be done in ways that reconcile the triangle of requirements, from which, traditionally, you can “pick any two.” The resulting control must be:
- Secure (otherwise it has missed the point)
- Usable (if it is too hard to understand or inconvenient, users will ignore it)
- Manageable (it must be possible to repair, replace or update the control without compromising the usability of the device, or the security and privacy of the user)
In the IoT context, two further issues must be addressed.
First, whatever the solution, it must be affordable. Otherwise, “secure but expensive” products will tend to lose market share in favor of “insecure but cheap” competitors, and the risk represented by insecure IoT devices will continue to grow.
Second, the process as laid out above has a flaw, namely, “that’s not where we’re starting from.” Connected devices with poor security are already widely available and deployed in vast numbers. In those cases, it’s too late for manufacturers to design security into the product, so we need to look for alternative means to mitigate the risk of IoT devices as a threat vector.
Choosing the appropriate intervention
If the device has simply been designed without appropriate security mechanisms, and without the means to add them once deployed, and if it presents a significant risk to people’s security or well-being, there’s little to be done other than try to withdraw it from the market. (For instance, in 2017 German authorities issued a ban against a connected doll, on grounds that it was a de facto surveillance device, and could also put children at risk.)
If already-deployed devices can be secured by user action, the question becomes one of deciding how this can best be achieved. We think there will be a range of options, some more appropriate to different kinds of connected device than others.
General public-awareness campaigns, aimed at informing consumers about the importance of good password practice, may be ineffective, or too broadly targeted to reach the users who are actually at risk; but how do we make such messages more precise without intruding on users’ privacy?
Is it acceptable to target the buyers of specific kinds of device, or specific brands? Should ISPs have the means (or a duty) to scan their networks for those devices and alert their subscribers to the potential risks? Should they even test devices on their networks to see if the default password has been changed? As a last resort, and given the potential threat IoT presents to critical national infrastructure, do even governments have a responsibility in such cases, and is it desirable for them to intervene, either directly or through the ISP?
As the IEEE article notes, in comments from the Information Technology Center of the University of Tokyo, a large-scale initiative like this increases the number of stakeholders who must play a role. It will probably involve the government, an approved technical institute, and ISPs. It may mean governments have to reconcile conflicts between the actions they wish to take and laws relating to personal privacy, consent, or unauthorized computer access. Those decisions are, as we said above, beyond the scope of this post, except to observe that they increase the difficulty of ensuring that the “active probe” approach is manageable, legal, and safe.
Conclusions and Recommendations
We recognize that circumstances will vary, and different situations may call for different approaches. Here is an indication of the range of interventions we think can apply. This is not an exhaustive list, but it serves to show that many options are available, and several may be needed.
- Security by design. If all IoT devices were well designed in the first place, the risk they pose would be greatly reduced.
- Secure lifecycle management. Good design includes the ability to manage deployed devices over their whole lifecycle, including secure updates to firmware/software, and secure decommissioning. (This could imply that some processes and protocols need to include a “consent” step.)
- Lab testing of devices. Assess new devices against quality criteria for security and lifecycle management, and provide feedback to manufacturers. This could extend to include certification and trust-marks.
- General awareness-raising campaigns (e.g., encouraging users to change default passwords).
- Targeted awareness raising/call to action (this might be based on the results of lab testing, in the form of a manufacturer’s “recall” notice for unsafe products).
- “Passive” device targeting (e.g., an ISP detects traffic that indicates an unsafe device and sends an out-of-band alert to the user suggesting remedial action; see the sketch after this list).
- “Active” device targeting (e.g., an entity scans for device types known to have a security flaw, and notifies the user with suggested actions).
- “Active probe” (e.g., an entity probes devices remotely to identify those that still have default passwords).
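As an illustration of the “passive” option above, here is a minimal sketch of how an ISP might flag devices whose outbound traffic matches a well-known compromise signature (Mirai-style scanning of Telnet ports) without ever contacting the device itself. The flow-record fields, port list, and threshold are assumptions made for this example, not a description of any real ISP system.

```python
# A minimal sketch of "passive" device targeting: the ISP inspects flow records
# it already collects and flags subscriber devices whose outbound traffic looks
# like Mirai-style Telnet scanning. Field names, ports, and the threshold are
# illustrative assumptions.

from collections import Counter
from typing import Iterable, Mapping

SUSPICIOUS_PORTS = {23, 2323}   # ports commonly scanned by IoT botnet malware
SCAN_THRESHOLD = 500            # outbound flows before a device is flagged

def flag_suspected_devices(flows: Iterable[Mapping]) -> list:
    """Return source IPs whose outbound flow pattern suggests scanning activity."""
    counts = Counter()
    for flow in flows:
        if flow["proto"] == "tcp" and flow["dst_port"] in SUSPICIOUS_PORTS:
            counts[flow["src_ip"]] += 1
    return [ip for ip, n in counts.items() if n >= SCAN_THRESHOLD]

# Synthetic example: one subscriber device generating 600 outbound Telnet flows.
sample_flows = [{"src_ip": "203.0.113.10", "dst_port": 23, "proto": "tcp"}] * 600
print(flag_suspected_devices(sample_flows))  # -> ['203.0.113.10']
```

The important property, for this discussion, is that detection relies solely on traffic the device is already sending; no one attempts to log in to it.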
As this rough list suggests, many alternatives can be considered before embarking on something as potentially contentious as an active probe – and of the options listed, active probing would require the most effort in terms of governance, management, privacy/ethical impact assessment, and safety measures. Here are just some of our concerns with the “active probe” approach:
- Doing this (or even attempting to) without the knowledge and express permission of the device owner, irrespective of the motivation, is a technical attack on that device.
- The device owner has no way to distinguish a malicious attack from an “authorized,” legitimate one, and might therefore react inappropriately to a legitimate probe, or fail to react appropriately to a malicious one. This may give rise to unintended and undesirable outcomes. For instance, if users are warned via a general announcement that “legitimate probes will be conducted overnight on Thursday of next week”, hackers might interpret that as an opportunity to launch their own attacks, in the knowledge that householders are less likely to react.
- It could result in the creation of a large database of vulnerable devices, which would be both a target and an asset for potential attackers. Creation of such an asset should not be done without caution and forethought.
- It is even possible that an active probe could infringe the sovereignty of another nation: for instance, is it acceptable for a country to probe the connected devices of foreign embassies on its soil, as part of an initiative such as this?
Overall, our view is that the active probe approach carries the highest risk of undermining users’ trust in the Internet, particularly by breaching device owners’ and users’ normal expectations concerning privacy, ownership, and control. We conclude that actively testing device security by attempting to log in using well-known default passwords should be a last resort, undertaken only in light of a specific, identified threat, and only when other alternatives are not available or practical.
In deciding which of the interventions is appropriate (and successful intervention may need a combination of measures), we recommend applying established principles from other, related disciplines of IT governance:
- Necessity: is there a less risky, less intrusive way to achieve the same ends?
- Proportionality: is the desired outcome sufficient to justify the potential safety and privacy impact of the intervention?
- Consent: has the individual’s informed consent been sought and knowingly, freely given?
- Transparency: is it clear to all stakeholders what is being done and why?
- Accountability: are the outcomes measurable? Is accountability for the outcomes clear – including negative outcomes if something goes wrong?
We recognize that insecure connected devices represent a substantial and growing threat, and one that needs an effective response. However, we also believe that the response can and should be graduated, based on evaluation of a full range of options and application of established principles of good governance.
Recent examples of IoT as an attack vector
- Mirai botnet (and related/similar)
- D-Link routers and webcams (and subsequent US Federal Trade Commission action)
- Casino fish tank thermometer
- Hackable cardiac devices
- Webcams
- Jeep (automobile) hack
Other resources
- Internet Society resources on IoT security and privacy, including our IoT Trust Framework and our IoT Security for Policymakers policy brief
- Internet Society resources on Collaborative Security
- The Global Commission on the Stability of Cyberspace (GCSC) (the Internet Society is a supporter of the GCSC, and our Chief Internet Technology Officer Olaf Kolkman is a GCSC Commissioner)
- Olaf Kolkman’s December 2017 blog post, Reflections from the Global Commission on the Stability of Cyberspace
- Call to Protect the Public Core of the Internet
- GCSC Norm Package Singapore (Dec. 2018)