Scanning the signals
As technology grows more complex, the ways in which it can be misused multiply. And as people rely more on technology in daily activities, they are increasingly exposed to unintended, even hostile, consequences. Add in a high level of automation, which takes humans ‘out of the loop’ and makes decisions at machine speed, and the potential for things to go wrong escalates rapidly.
‘Hostile’ tech by our definition can encompass not just criminal tech such as malware and hacking tools, but also use cases like advertising and customer targeting. Whether a technology is hostile can be a matter of perspective. Some people don’t find internet advertising, tracking cookies or social media influencing campaigns intrusive, and are happy to trade their data for what they perceive as personalized offers or special value. Others install ad blocking software in their browsers and eschew Facebook completely. Consenting to tracking or the collection of personal data is for some basically automatic; for others, a carefully considered choice.
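Respecting that choice can start with something as simple as reading the browser's opt-out signal. Below is a minimal sketch, assuming a Python Flask service (the section names no particular stack), that checks the Sec-GPC header sent by browsers with Global Privacy Control enabled before setting a hypothetical tracking cookie; the cookie name and value are invented for illustration.

```python
# Minimal sketch: honor an explicit opt-out signal before tracking.
# Flask is an assumption; the Sec-GPC header is the signal sent by
# browsers with Global Privacy Control enabled.

from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Hello")
    # Treat Sec-GPC: 1 as a do-not-track/do-not-sell preference and
    # skip the (hypothetical) tracking cookie entirely.
    if request.headers.get("Sec-GPC") != "1":
        resp.set_cookie("tracking_id", "abc123", samesite="Lax")
    return resp
```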
What’s more, not all hostile behavior is malicious or intentional. One example is bias in algorithms or machine learning systems. These may exhibit hostile tendencies towards certain customer groups without ever having been compromised or deliberately designed that way. Signals of this shift include:
- The increasing ubiquity of technology and the concurrent expansion of the potential threat surface. One simple example is the sheer number of connections: IDC predicts the number of active Internet of Things (IoT) devices will grow to 55.7 billion by 2025, each a potential point of vulnerability that attackers could exploit
- Evolving consumer sentiment and behavior towards ad and marketing tech, and increasing bifurcation between those who accept broad uses of their data and those who are more concerned about privacy
- Rising anxiety about the use and impact of social media in political campaigns, and how social media channels are shaping political and other societal debates
- Unintended consequences arising from the increased use of AI and machine learning, such as bias in algorithms (a minimal illustration follows this list). Concerns about hostile impacts are prompting attempts to control the use of AI in processes like hiring
- Increased regulation around data collection, retention and use, such as the European General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and equivalents in other jurisdictions
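The signals above don't prescribe a specific technique, but a first-pass bias check is straightforward to sketch. The snippet below is a hypothetical illustration in plain Python: it compares favorable-outcome rates across two groups of applicants and applies the "four-fifths" disparate-impact rule of thumb used in US hiring guidance. All names and data are invented; real fairness audits use richer metrics and statistical tests.

```python
# Minimal disparate-impact check, assuming binary decisions
# (1 = favorable) and a single protected attribute. The 0.8
# threshold follows the "four-fifths" rule of thumb from US
# hiring guidance; the sample data below is made up.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Favorable-outcome rate per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        favorable[g] += d
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical screening decisions for two applicant groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "A", "A",
              "B", "B", "B", "B", "B", "B"]
    ratio, rates = disparate_impact_ratio(decisions, groups)
    print(f"Selection rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}"
          + (" (below the 0.8 rule of thumb)" if ratio < 0.8 else ""))
```

On this toy data the ratio is 0.50, well below the 0.8 threshold, which is the kind of result that would trigger a closer look at the underlying model or process.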
The Opportunity
Protection against deliberate hacking and malware is increasingly important. Companies must invest in defending a wider range of touchpoints against well-funded and organized adversaries. Yet as the potential for danger rises, other dimensions of hostile tech must also be considered. We believe that respecting customer wishes, avoiding ‘spooky’ targeting, and rooting out bias in algorithmic systems are not only inherently the right things to do, but also conducive to trust, positive public perception and, ultimately, the health of the business.
According to IBM, the average global cost of a data breach in 2020 was US$3.86 million. In the first half of 2020 alone, European supervisory authorities issued fines totaling over €50 million for GDPR violations. With consumers placing a higher value on their privacy, robust privacy practices have become a strong differentiator for some companies. A recent McKinsey survey found that a clear majority of consumers will not do business with a company if they have concerns about its security practices or believe it shares sensitive data without permission.