Technologies embody inherent risk and fallibility
Users of any computerized system should remember that there is always the potential that a technology-based system can be hacked.
Hackers are increasingly able to compromise cars, smartphones and medical devices, owing to the ubiquity of wireless connectivity and open development environments.
Avi Rubin, a professor of computer science and director of the Health and Medical Security Lab at Johns Hopkins University, warned of the dangers of an increasingly “hack-able” world during a TED talk a couple years ago.
However, today’s informed consumer hardly needs a TED talk to know that the technological dangers surrounding property crime, privacy breaches, identity theft and hacking are real. Recently, a new $5 device that easily unlocks car doors for thieves has begun to appear around North America. In the United Arab Emirates, an estimated half of all smartphones have been hacked. Even former U.S. Vice President Dick Cheney feared that his pacemaker might be remotely disabled by terrorists in an assassination attempt.
These anecdotes demonstrate that the technologies we use every day carry endemic risks. While technologies have the capability to improve our lives, we must remember that technology is in a constant state of evolution and subject to error, and thus carries “inherent risk.”
In short, with increased technology adoption, we can expect increased problems. In the telecom sector, for example, a recent Trend Micro report shows that malware in the Android ecosystem has grown exponentially, with approximately 720,000 malicious apps identified since Android’s debut.
Risk is especially associated with new, emerging technologies such as biometrics. A few years ago, the National Research Council in the United States released a report that found that biometric identification technologies are “inherently fallible” across all technical modalities (fingerprint, palm prints, face and voice recognition) and require a greater amount of research across all levels of design and deployment.
The report noted that: “For nearly 50 years, the promise of biometrics has outpaced the application of the technology. While some biometric systems can be effective for specific tasks, they are not nearly as infallible as their depiction in popular culture might suggest. Bolstering the science is essential to gain a complete understanding of the strengths and limitations of these systems.”
Biometric systems are increasingly used to control access to facilities, information, and other rights or benefits, but questions persist about their effectiveness as security or surveillance mechanisms. Such systems provide “probabilistic results,” meaning that confidence in results must be tempered by an understanding of the inherent uncertainty in any given system, the report says.
The report notes that when impostors are rare, even systems with very accurate sensors and matching capabilities can have a high false-alarm rate. This could become costly or even dangerous in systems designed to provide heightened security, as operators could become lax about dealing with potential threats.
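This base-rate effect can be made concrete with a small calculation. The sketch below is a hypothetical illustration, not drawn from the report: the function name and the example rates (a system that flags 99% of impostors while wrongly flagging only 1% of legitimate users, against impostors appearing in 1 of every 10,000 attempts) are assumptions chosen for the sake of the arithmetic.

```python
def false_alarm_fraction(impostor_rate, true_positive_rate, false_positive_rate):
    """Fraction of all alarms that are false alarms.

    An "alarm" fires when the system flags a presented identity as an
    impostor. All three arguments are probabilities in [0, 1]; the
    specific values used below are illustrative assumptions.
    """
    alarms_true = impostor_rate * true_positive_rate            # impostors correctly flagged
    alarms_false = (1 - impostor_rate) * false_positive_rate    # legitimate users wrongly flagged
    return alarms_false / (alarms_true + alarms_false)

# Hypothetical: 99% detection, 1% false-positive rate, impostors in
# 1 of 10,000 attempts. Most alarms are still false alarms, because
# legitimate users vastly outnumber impostors.
print(false_alarm_fraction(0.0001, 0.99, 0.01))
```

Even with these optimistic accuracy figures, roughly 99% of the alarms raised are false, which is exactly the condition under which operators learn to ignore them.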
As a consequence, such systems must not rely on technological efficiency alone, but must also depend heavily on societal oversight. Since all systems are prone to risk and technological failure, all systems must actively include and apply the “human factor.”