
Lens six: The expanding impact of hostile tech 

‘Hostile’ technology is commonly associated with criminal activity such as ransomware, breaking into systems to steal data or creating computer viruses, but this misses the complete picture. The landscape is evolving in ways that suggest the definition of hostile tech should be broadened to include legal, even widely accepted, practices that ultimately threaten societal well-being. 


Scanning the signals 


As technology grows more complex, the ways in which it can be misused multiply. And as people rely more on technology in daily activities, they are increasingly subjected to unintended — even hostile — consequences. Add in a high level of automation — taking humans ‘out of the loop’ and making decisions at machine speed — and the possibility for things to go wrong rapidly escalates.


‘Hostile’ tech by our definition can encompass not just criminal tech such as malware and hacking tools, but also use cases like advertising and customer targeting. Whether a technology is hostile can be a matter of perspective. Some people don’t find internet advertising, tracking cookies or social media influencing campaigns intrusive, and are happy to trade their data for what they perceive as personalized offers or special value. Others install ad blocking software in their browsers and eschew Facebook completely. For some, consenting to tracking or the collection of personal data is practically automatic; for others, it’s a carefully considered choice.


What’s more, not all hostile behavior is malicious or intended. One example is bias in algorithms or machine learning systems. These may exhibit hostile tendencies towards certain customer groups without ever having been compromised or deliberately designed that way. Signals of this shift include: 


  • The increasing ubiquity of technology and concurrent expansion of the potential threat surface. One simple example is the sheer number of connections: IDC predicts the number of active Internet of Things (IoT) devices will grow to 55.7 billion by 2025. Each of these brings potential security vulnerabilities that could be exploited
  • Evolving consumer sentiment and behavior towards ad and marketing tech, and increasing bifurcation between those who accept broad uses of their data and those who are more concerned about privacy
  • Rising anxiety about the use and impact of social media in political campaigns, and how social media channels are shaping political and other societal debates
  • Unintended consequences arising from increased use of AI and machine learning, such as bias in algorithms. Concerns about hostile impacts are prompting attempts to control the use of AI in processes like hiring
  • Increased regulation around data collection, retention and use, such as the European General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and equivalents in other jurisdictions


The Opportunity


Protection against deliberate hacking and malware is increasingly important. Companies must invest in defending a wider range of touchpoints against well-funded and organized adversaries. Yet as the potential for danger rises, other dimensions of hostile tech also have to be considered. We believe that respecting customer wishes, avoiding ‘spooky’ targeting, and rooting out bias within algorithmic systems are not only inherently the right things to do, but also conducive to trust, positive public perceptions, and ultimately the health of the business.


According to IBM, the average global cost of a data breach in 2020 was US$3.86 million. In the first half of 2020 alone, European supervisory authorities issued fines totaling over €50 million for GDPR violations. With consumers placing a higher value on their privacy, robust privacy practices have become a strong differentiator for some companies. A recent survey by McKinsey found a clear majority of consumers will not do business with a company if they have concerns about its security practices, or believe it gives out sensitive data without permission. 


What we’ve seen

In a seven-year partnership, we set out to help the UK government transform the way it interacted with and delivered public services to citizens, making trust and security a priority from the very beginning. The project united disparate government websites into a single robust and user-friendly platform, enhancing citizen experience and substantially accelerating deployment cycles. Importantly, the platform was backed by an online identification assurance system that allowed citizens to submit applications for services while meeting all necessary data protection requirements and respecting individuals’ privacy rights. Minimizing the potential for negative outcomes and fostering confidence in the platform encouraged its rapid adoption.

Trends to watch: Top Three




Secure software delivery. Treat delivery pipelines as the high-risk production systems that they are, since by design they are used to deploy software to your production environments. Understand security implications for data in-flight and at rest. Generate audit trails, and learn about and integrate anomaly detection solutions to help detect security incidents. Stay abreast of compliance laws that affect your region and the effect they have on your systems.
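As a minimal sketch of the audit-trail idea (function and field names here are illustrative, not from any particular pipeline tool), a deployment step might verify an artifact's integrity before release and append a tamper-evident audit entry:

```python
import hashlib
from datetime import datetime, timezone

def sha256_of(data: bytes) -> str:
    """Hex digest used to fingerprint a build artifact."""
    return hashlib.sha256(data).hexdigest()

def deploy_step(artifact: bytes, expected_sha256: str, audit_log: list) -> bool:
    """Verify artifact integrity against the expected checksum and
    record an audit entry either way, so failures are also traceable."""
    actual = sha256_of(artifact)
    ok = actual == expected_sha256
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": "deploy",
        "artifact_sha256": actual,
        "verified": ok,
    })
    return ok
```

A real pipeline would ship such entries to an append-only store and feed them to anomaly detection, flagging, say, deploys from unexpected sources or outside normal release windows.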




Modern AuthZ. A combination of increased cyber-threats and liability, coupled with decentralized microservices architectures, has stretched traditional authorization (AuthZ) solutions. With the boundaries of network perimeter trust fading, authorization based on network location loses its effectiveness. Consider adopting approaches like Zero Trust, BeyondCorp, BeyondProd and Vectors of Trust to modernize your AuthZ processes and incorporate a broader spectrum of factors in authorization decisions.
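To make the shift concrete, here is a hedged sketch (the attribute names are hypothetical) of an attribute-based authorization check in the spirit of Zero Trust, where device posture and MFA status, rather than network location, drive the decision:

```python
from dataclasses import dataclass

@dataclass
class AuthContext:
    user_role: str            # e.g. "admin", "viewer"
    device_trusted: bool      # device posture attested by management tooling
    mfa_verified: bool        # multi-factor authentication completed
    resource_sensitivity: str # "low" or "high"

def authorize(ctx: AuthContext) -> bool:
    """Decide access from identity and device attributes only;
    note that no IP address or network segment is consulted."""
    if ctx.resource_sensitivity == "high":
        return ctx.user_role == "admin" and ctx.device_trusted and ctx.mfa_verified
    return ctx.device_trusted or ctx.mfa_verified
```

Production systems typically externalize such rules into a policy engine so they can evolve without code changes.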



Quantum computing. Quantum computing is a proven concept, but hasn’t scaled and may take a long time to approach maturity. Though the full scope of its potential applications is not yet clear, it bears close watching as there is a threat that the encryption of many systems, and indeed the entire internet, could be easily broken using quantum algorithms such as Shor’s.
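The threat hinges on period finding: Shor's algorithm finds the period of a^x mod n exponentially faster than any known classical method, and once the period is known, a factor of n (and hence, for RSA, the private key) follows from simple arithmetic. A toy classical sketch of that post-processing step:

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n). This is the step a quantum
    computer performs exponentially faster; classically it is brute force."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> int:
    """Derive a nontrivial factor of n from the period of a mod n.
    Works when r is even and a^(r/2) is not ≡ -1 (mod n)."""
    r = find_period(a, n)
    return gcd(pow(a, r // 2) - 1, n)
```

For n = 15 and a = 7 the period is 4, and gcd(7² − 1, 15) = 3 recovers a factor; the quantum speedup lies entirely in computing the period for cryptographically sized n.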

Trends to watch: The complete matrix

Technologies that are here today and are being leveraged within the industry
  • Secure software delivery
  • Decentralized security
  • DevSecOps
  • Automated compliance
  • Adaptive, automated security
  • Testing ML algorithms and applications
  • Privacy by Design
Technologies that are beginning to gain traction, depending on industry and use-case
  • Biometrics
  • AI in cybersecurity
  • Smart contracts
  • Facial / Expression Recognition
  • Blockchain Technologies
  • Differential privacy
  • Explainable AI (XAI)
  • AutoML
  • Automated workforce
  • Privacy-respecting computation
  • Decentralized data platforms
  • Zero knowledge proofs
Still lacking in maturity, these technologies could have an impact in a few years
  • “Security forward” businesses
  • Surveillance tech
  • Addictive tech
  • Autonomous drones / drone as a platform
  • Tech for the honest corporation™
  • Sovereignty as a force in cyberspace
  • Increased regulation
  • Code of ethics for software
  • Production immune system
  • Quantum computing
  • Privacy aware communication
  • UX of consumer data privacy and security
  • Ethical frameworks
  • Deep fakes
  • Death of passwords

Advice for adopters


  • Make security ‘everyone’s problem’. Security is fast-moving and can’t be the responsibility of just one person or department; people throughout the organization need to make it a priority. Similarly, you can’t simply buy a security solution, install it, and consider yourself protected. Security considerations should be built into your product lifecycle from idea to production. Combine audits with monitoring so you can proactively discover breaches and respond quickly when they occur.

  • Promote your positive privacy stance. Establish and communicate clear policies, such as promising customer data will never leave a device, as a business differentiator. Make sure these policies are fully understood by your employees and customer base.

  • Capture only the data needed to provide service to your consumers. Rather than simply collecting everything possible, gather and store only what the business requires. Data that is not business-critical only adds to the organization’s technology and compliance burden, and creates a bigger target for hackers or other bad actors.

  • Create an explicit framework outlining your policies for detecting and avoiding bias in your systems. Such a framework should also promote ethical technology practices; the Brookings Institution’s Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms is one example.
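As one concrete, hedged example of a check such a framework might operationalize (the threshold and group encoding here are illustrative), a routine audit could compute the disparate impact ratio between two groups' selection rates and flag values below the widely used four-fifths mark:

```python
def selection_rate(outcomes: list) -> float:
    """Fraction of positive outcomes (e.g. hires, loan approvals) in a group,
    where each outcome is 1 for selected and 0 for not selected."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one; the common
    'four-fifths rule' flags ratios below 0.8 for human review."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

def needs_review(group_a: list, group_b: list, threshold: float = 0.8) -> bool:
    """Flag a model or process output for bias review."""
    return disparate_impact_ratio(group_a, group_b) < threshold
```

A single ratio is a starting point, not a verdict; a full framework would pair metrics like this with documentation, human review and remediation steps.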

By 2022, businesses will…

… consider a wider range of negative implications than privacy and security breaches when developing and deploying customer-facing systems and products, and understand that robust action to minimize unintended ‘hostile’ outcomes can be a source of competitive advantage. For forward-looking companies, security and ethics will be not just a set of policies but a practice evident in everything teams do.
Dr. Rebecca Parsons
Chief technology officer, Thoughtworks

Download the full report