ADOPT

  • We’ve decided to bring consumer-driven contract testing back from the archive for this edition, after allowing it to fade in the past. The concept isn’t new, but with the mainstream acceptance of microservices we need to remind people that consumer-driven contracts are an essential part of a mature microservice testing portfolio, enabling independent service deployments. We also want to point out that consumer-driven contract testing is a technique and an attitude that requires no special tool to implement. We love frameworks like Pact because they make proper contract tests easier to implement in certain contexts, but we have noticed a tendency for teams to focus on the framework rather than on the general practice. Writing Pact tests is no guarantee that you are creating consumer-driven contracts; likewise, in many situations you should be creating good consumer-driven contracts even where no pre-built testing tool exists.
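
To make the point concrete, here is a minimal sketch of a consumer-driven contract test written with nothing but a standard test framework and an HTTP client. The endpoint, fields and team names are hypothetical; the essence is that the consumer asserts only on the parts of the response it actually depends on.

```python
import unittest

import requests


class OrderServiceConsumerContract(unittest.TestCase):
    """Contract owned by the checkout team (the consumer): it pins down
    only the parts of the provider's response that this consumer uses."""

    PROVIDER_URL = "http://localhost:8080"  # hypothetical provider endpoint

    def test_order_resource_satisfies_consumer_needs(self):
        response = requests.get(f"{self.PROVIDER_URL}/orders/42")
        self.assertEqual(response.status_code, 200)

        order = response.json()
        # The consumer reads only these three fields; anything else the
        # provider returns is free to change without breaking us.
        self.assertIsInstance(order["id"], str)
        self.assertIsInstance(order["total"], (int, float))
        self.assertIn(order["status"], {"pending", "paid", "shipped"})


if __name__ == "__main__":
    unittest.main()
```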

  • Teams are pushing for automation across their environments, including their development infrastructure. Pipelines as code means defining the deployment pipeline in code, rather than configuring it through a running CI/CD tool’s interface. LambdaCD, Drone, GoCD and Concourse are examples of tools that support this technique. Configuration automation tools for CI/CD systems, such as GoMatic, can also be used to treat the deployment pipeline as code: versioned and tested.
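
As an illustration, here is a sketch of what this can look like with GoMatic, based on its documented Python API; the server address, pipeline and repository names are made up.

```python
# A sketch of configuring a GoCD pipeline through code with GoMatic, so
# the pipeline definition can be versioned and tested like any other
# code. Names and URLs are illustrative.
from gomatic import ExecTask, GoCdConfigurator, HostRestClient

configurator = GoCdConfigurator(HostRestClient("gocd.example.com:8153"))

pipeline = (configurator
            .ensure_pipeline_group("web")
            .ensure_replacement_of_pipeline("checkout-service")
            .set_git_url("https://git.example.com/checkout-service.git"))

job = pipeline.ensure_stage("build").ensure_job("test-and-package")
job.add_task(ExecTask(["./go", "test"]))

configurator.save_updated_config()
```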

  • With the number of high-profile security breaches in recent months, software development teams no longer need convincing that they must place an emphasis on writing secure software and dealing with their users' data in a responsible way. The teams face a steep learning curve, though, and the vast number of potential threats—ranging from organized crime and government spying to teenagers who attack systems "for the lulz"—can be overwhelming. Threat Modeling provides a set of techniques that help you identify and classify potential threats early in the development process. It is important to understand that it is only one part of a strategy to stay ahead of threats. When used in conjunction with other techniques, such as establishing cross-functional security requirements to address common risks in a project's technologies and running automated security scanners, threat modeling can be a powerful asset.
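
One common starting point is STRIDE, which walks each element of a system through six categories of threat. The sketch below shows the shape of such a checklist in code; the components listed are hypothetical.

```python
# A minimal sketch of a STRIDE-style threat-modeling checklist: for each
# element of the system, prompt for the six STRIDE threat categories.
# The components are hypothetical.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

components = ["browser -> web app (HTTPS)", "web app -> orders DB", "orders DB"]

for component in components:
    print(f"Threats to consider for: {component}")
    for threat in STRIDE:
        print(f"  [ ] {threat}")
```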

TRIAL

  • Businesses have wholeheartedly embraced APIs as a way to expose business capabilities to both external and internal developers. APIs promise the ability to experiment quickly with new business ideas by recombining core capabilities. But what differentiates an API from an ordinary enterprise integration service? One difference lies in treating APIs as a product, even when the consumer is an internal system. Teams that build APIs should understand the needs of their customers and make the product compelling to them. Products are improved, maintained and supported over the long term; they should have an owner who advocates for the customer and strives for continual improvement, and they should be easy to find and easy to use. In our experience, a product orientation is the missing ingredient that makes the difference between ordinary enterprise integration and an agile business built on a platform of APIs.

  • The use of bug bounties continues to grow in popularity among many organizations, including enterprises and notable government bodies. A bug-bounty program encourages participants to identify potentially damaging vulnerabilities in return for reward or recognition. Companies like HackerOne and Bugcrowd offer services to help organizations manage this process more easily, and we're seeing these services gain adoption.

  • A Data Lake is an immutable data store of largely unprocessed "raw" data, acting as a source for data analytics. While the technique can clearly be misused, we have used it successfully with clients, which motivates its move to Trial. We continue to recommend other approaches for operational collaboration, limiting the use of the data lake to reporting, analytics and feeding data into data marts.
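
The core idea can be sketched in a few lines: raw events are written once, partitioned by arrival date, and never updated in place. The paths and event shape below are illustrative.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

LAKE_ROOT = Path("/data/lake/raw/clickstream")  # illustrative location


def store_raw_event(event: dict) -> Path:
    """Append-only ingestion: raw data is written once, partitioned by
    arrival date, and never mutated; downstream analytics and data
    marts read from here."""
    day = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    target = LAKE_ROOT / day / f"{uuid.uuid4()}.json"
    target.parent.mkdir(parents=True, exist_ok=True)
    # "x" mode refuses to overwrite an existing file, preserving immutability.
    with target.open("x") as f:
        json.dump(event, f)
    return target


store_raw_event({"user": "u-123", "action": "add_to_cart",
                 "ts": "2016-11-07T12:00:00Z"})
```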

  • In a number of countries, we see government agencies seeking broad access to private, personally identifiable information (PII). The increased use of public cloud solutions makes it more difficult for organizations to protect the data entrusted to them by their users while also respecting all relevant laws. The European Union has some of the most progressive privacy laws, and all the major cloud providers—Amazon, Google and Microsoft—offer multiple data centers and regions within the European Union. Therefore, we recommend that companies, especially those with a global user base, assess the feasibility of a safe haven for their users' data by hosting PII in the EU. Since we wrote about this technique in the last Radar, we have rolled out a new internal system that handles sensitive information relating to all our employees, and we have chosen to host it in a data center located in the European Union.
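
As an example of what this looks like in practice, here is a sketch that pins an S3 data store to an EU region using boto3; the bucket name is hypothetical, and the other major cloud providers offer equivalent region controls.

```python
# A sketch of pinning a PII data store to an EU region, using AWS S3
# via boto3 as an example. The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")  # Frankfurt
s3.create_bucket(
    Bucket="example-corp-employee-pii",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```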

  • Although much documentation can be replaced with highly readable code and tests, in a world of evolutionary architecture it's important to record certain design decisions for the benefit of future team members and for external oversight. Lightweight Architecture Decision Records is a technique for capturing important architectural decisions along with their context and consequences. Although these items are often stored in a wiki or collaboration tool, we generally prefer storing them in source control with simple markup.
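
A typical record, following the widely used format popularized by Michael Nygard, fits in a short file that lives next to the code; the decision below is invented for illustration.

```markdown
# 7. Use PostgreSQL for the orders service

Status: Accepted

## Context
The orders service needs transactional guarantees across order lines,
and the team already operates PostgreSQL for two other services.

## Decision
We will use PostgreSQL rather than introducing a new document store.

## Consequences
One less technology to operate; reporting can reuse existing tooling.
Revisit if write volume grows beyond a single primary's capacity.
```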

  • We see continued adoption and success of reactive architectures, with reactive language extensions and reactive frameworks being very popular (we added several such blips in this edition of the Radar). User interfaces, in particular, benefit greatly from a reactive style of programming. Our caveats last time still hold true: Architectures based on asynchronous message passing introduce complexity and make the overall system harder to understand—it's no longer possible to simply read the program code and understand what the system does. We recommend assessing the performance and scalability needs of your system before committing to this architectural style.
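
For a flavor of the style, here is a small sketch using RxPY (assuming its version 3 API): the code declares how to react to a stream of events rather than polling for state. The event values are made up.

```python
# A small sketch of the reactive style with RxPY (assuming the RxPY 3
# API): we declare how to react to a stream of UI events instead of
# pulling state imperatively.
import rx
from rx import operators as ops

clicks = rx.of("save", "save", "delete", "save")  # stand-in for a UI event stream

clicks.pipe(
    ops.filter(lambda action: action == "save"),
    ops.map(lambda action: f"persisting document after '{action}'"),
).subscribe(on_next=print)
```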

  • Serverless architecture is an approach that replaces long-running virtual machines with ephemeral compute power that comes into existence on request and disappears immediately after use. Since the last Radar, we have had several teams put applications into production using a "serverless" style. Our teams like the approach; it’s working well for them, and we consider it a valid architectural choice. Note that serverless doesn’t have to be an all-or-nothing approach: some of our teams have deployed a new chunk of their systems using serverless while sticking to a traditional architectural approach for other pieces.
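
For illustration, here is a minimal unit of deployment in this style: an AWS Lambda handler in Python, assuming API Gateway's proxy event shape. The payload fields are illustrative.

```python
# A minimal serverless function for AWS Lambda: no long-running server,
# just a handler that exists for the duration of each request. The
# event shape assumes an API Gateway proxy integration.
import json


def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }
```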

ASSESS

  • Although many problems that people encounter with RESTful approaches to APIs can be attributed to the anemic REST antipattern, some use cases warrant exploration of other approaches. In particular, organizations that have to support a long tail of client applications (and thus a likely proliferation of API versions even if they employ consumer-driven contracts)—and have a large portion of their APIs supporting the endless-list style of activity feeds—may hit some limits in RESTful architectures. These can sometimes be mitigated by employing the client-directed query approach to client-server interaction. We see this approach being successfully used in both GraphQL and Falcor, where clients have more control over both the contents and the granularity of the data returned to them. This does put more responsibility onto the service layer and can still lead to tight coupling to the underlying data model, but the benefits may be worth exploring if well-modeled RESTful APIs aren’t working for you.
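
As a sketch of the client-directed style, the example below uses the graphql-core Python library (assuming its version 3 API); the schema and data are invented. The key point is that the client, not the server, chooses which fields come back.

```python
# A sketch of client-directed queries with the graphql-core library:
# the client decides which fields are returned. Schema and data are
# illustrative.
from graphql import build_schema, graphql_sync

schema = build_schema("""
    type User { name: String, avatarUrl: String, bio: String }
    type Query { user(id: ID!): User }
""")

root = {"user": lambda info, id: {"name": "Ada", "avatarUrl": "/a.png",
                                  "bio": "..."}}

# A mobile client asks only for the two fields it can display.
result = graphql_sync(schema, "{ user(id: 1) { name avatarUrl } }", root)
print(result.data)  # {'user': {'name': 'Ada', 'avatarUrl': '/a.png'}}
```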

  • The container revolution instigated by Docker has massively reduced the friction in moving applications between environments but at the same time has blown a rather large hole in the traditional controls over what can go to production. The technique of container security scanning is a necessary response to this threat vector. Docker now provides its own security scanning tools, as does CoreOS, and we’ve also had success with the CIS Security Benchmarks. Whichever approach you take, we believe the topic of automated container security validation is of high value and a necessary part of PaaS thinking.

  • We are finding Content Security Policies to be a helpful addition to our security toolkit when dealing with websites that pull assets from mixed contexts. The policy defines a set of rules about where assets can come from (and whether to allow inline script tags). The browser then refuses to load or execute JavaScript, CSS or images that violate those rules. When used in conjunction with good practices, such as output encoding, it provides good mitigation for XSS attacks. Interestingly, the optional endpoint for posting JSON reports of violations is how Twitter discovered that ISPs were injecting HTML or JavaScript into their pages.
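
As an illustration, here is a sketch of attaching such a policy to every response, using Flask as an example framework; the directive values are illustrative, and report-uri enables the violation reports mentioned above.

```python
# A sketch of attaching a Content Security Policy to every response,
# using Flask as an example web framework. Directive values are
# illustrative.
from flask import Flask

app = Flask(__name__)


@app.after_request
def set_csp(response):
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://cdn.example.com; "
        "report-uri /csp-violation-report"
    )
    return response
```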

  • It has long been known that "anonymized" bulk data sets can reveal information about individuals, especially when multiple data sets are cross-referenced together. With increasing concern over personal privacy, some companies—including Apple and Google—are turning to differential privacy techniques in order to improve individual privacy while retaining the ability to perform useful analytics on large numbers of users. Differential privacy is a mathematical technique that attempts to maximize the accuracy of statistical queries from a database while minimizing the chances of identifying its records. These results can be achieved by introducing a small amount of statistical "noise" into the data, but it’s important to note that this is an ongoing research area. Apple has announced plans to incorporate differential privacy into its products—and we wholeheartedly applaud its commitment to customers' privacy—but the usual Apple secrecy has left some security experts scratching their heads. We continue to recommend Datensparsamkeit as an alternative approach: simply storing the minimum data you actually need will achieve better privacy results in most cases.
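
To give a flavor of the mechanics, here is a sketch of the classic Laplace mechanism: noise scaled to a query's sensitivity and a privacy budget epsilon masks any individual's contribution. The numbers are illustrative.

```python
# A sketch of the Laplace mechanism, one standard way of achieving
# differential privacy: add noise scaled to the query's sensitivity
# and the privacy budget epsilon. Data and parameters are illustrative.
import numpy as np


def private_count(true_count: int, epsilon: float,
                  sensitivity: float = 1.0) -> float:
    """One person joining or leaving changes a count by at most 1 (the
    sensitivity), so noise drawn from Laplace(0, sensitivity / epsilon)
    masks any individual's contribution."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)


print(private_count(true_count=10000, epsilon=0.1))  # e.g. 10009.3
```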

  • We've seen significant benefit from introducing microservice architectures, which have allowed teams to scale delivery of independently deployed and maintained services. However, teams have often struggled to avoid the creation of front-end monoliths—large and sprawling browser applications that are as difficult to maintain and evolve as the monolithic server-side applications we've abandoned. We're seeing an approach emerge that our teams call micro frontends. In this approach, a web application is broken up by its pages and features, with each feature being owned end-to-end by a single team. Multiple techniques exist to bring the application features—some old and some new—together as a cohesive user experience, but the goal remains to allow each feature to be developed, tested and deployed independently from others. The Backend for Frontends (BFF) approach works well here, with each team developing a BFF to support its set of application features.
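
One of several possible composition techniques is server-side inclusion, sketched below; the feature-team endpoints are hypothetical.

```python
# A sketch of one composition technique for micro frontends: a thin
# page shell fetches HTML fragments from feature-team services at
# request time. The endpoints are hypothetical.
import requests

FRAGMENTS = {
    "search": "http://search-team.internal/fragment",    # owned by the search team
    "basket": "http://checkout-team.internal/fragment",  # owned by the checkout team
}


def render_page() -> str:
    parts = []
    for name, url in FRAGMENTS.items():
        try:
            response = requests.get(url, timeout=1)
            parts.append(response.text)
        except requests.RequestException:
            # Graceful degradation: one failing feature shouldn't take
            # down the whole page.
            parts.append(f"<!-- {name} unavailable -->")
    return "<main>\n" + "\n".join(parts) + "\n</main>"
```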

  • As more development teams incorporate security earlier in the development life cycle, figuring out requirements to limit security risks can seem like a daunting task. Few people have the extensive technical knowledge needed to identify all the risks that an application might face, and teams might struggle just trying to decide where to begin. Relying on frameworks such as OWASP's ASVS (Application Security Verification Standard) can help make this easier. Although somewhat lengthy, it contains a thorough list of requirements categorized by functions such as authentication, access control, and error handling and logging, which can be reviewed as needed. It is also helpful as a resource for testers when it comes time to verify software.

  • As the Docker-led container model continues its rise to dominance, we think it's worth calling attention to the rapid development under way in the unikernel space. Unikernels are single-purpose library operating systems that can be compiled down from high-level languages to run directly on the hypervisors used by commodity cloud platforms. They promise a number of advantages over containers, not least their very fast startup times and very small attack surface. Many are still at the research-project phase—Drawbridge from Microsoft Research, MirageOS and HaLVM amongst others—but we think the ideas are very interesting and combine nicely with the technique of serverless architecture.

  • The idea of virtual reality has been around for more than 50 years, and with successive improvements in computing technology many ideas have been hyped and explored. We believe that we're reaching a tipping point now. Modern graphics cards provide sufficient compute power to render detailed, realistic scenes at high resolutions, and at the same time at least two consumer-oriented VR headsets (the HTC Vive and Facebook's Oculus Rift) are coming to market. These headsets are affordable, they have high-resolution displays, and they eliminate the perceivable motion-tracking lag that previously caused issues such as headaches and nausea. The headsets are mainly targeted at enthusiast video gaming, but we are convinced that they will open many possibilities for VR beyond gaming, particularly as lo-fi approaches such as Google Cardboard drive greater awareness.

HOLD

  • It might seem easier to manage a single CI (Continuous Integration) instance for all teams, because it provides a single point of configuration and monitoring. But a bloated instance shared by every team in an organization can cause a lot of damage: we have found that problems such as build timeouts, configuration conflicts and gigantic build queues appear more frequently, and this single point of failure can interrupt the work of many teams. Carefully consider the trade-off between these pitfalls and the convenience of a single point of configuration. In organizations with multiple teams, we recommend distributing CI instances by team, with enterprise decisions focused not on a single CI installation but on guidelines for selecting and configuring those instances.

  • With the increasing popularity of the Backend for Frontends (BFF) pattern and the use of one-way data-binding frameworks like React.js, we’ve noticed a backlash against REST-style architectures. Critics accuse REST of causing chatty, inefficient interactions among systems and of failing to adapt as client needs evolve. They offer frameworks such as GraphQL or Falcor as alternative data-fetch mechanisms that let the client specify the format of the data returned. But in our experience, it isn’t REST that causes these problems; rather, they stem from a failure to properly model the domain as a set of resources. Naively developing services that simply expose static, hierarchical data models via templated URLs results in an anemic REST implementation. In a richly modeled domain, REST should enable more than simple repetitive data fetching. In a fully evolved RESTful architecture, business events and abstract concepts are also modeled as resources, and the implementation should make effective use of hypertext, link relations and media types to maximize decoupling between services. This antipattern is closely related to the Anemic Domain Model pattern and results in services that rank low on the Richardson Maturity Model. We have more advice for designing effective REST APIs in our Insights article.
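
To sketch the difference, a non-anemic representation models business operations as resources and carries hypermedia controls, so clients follow link relations rather than hard-coding URLs. The fields and URLs below are illustrative, using HAL-style conventions.

```python
# A sketch of a non-anemic REST response: the order carries hypermedia
# controls (HAL-style "_links"), so clients follow link relations
# instead of hard-coding URL structures. Fields are illustrative.
order_representation = {
    "status": "awaiting_payment",
    "total": {"amount": 42.50, "currency": "EUR"},
    "_links": {
        "self":    {"href": "/orders/1234"},
        # Business operations exposed as resources, not just raw data:
        "payment": {"href": "/orders/1234/payments"},
        "cancel":  {"href": "/orders/1234/cancellation"},
    },
}
```
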
  • We continue to see organizations chasing "cool" technologies, taking on unnecessary complexity and risk when a simpler choice would be better. One particular theme is using distributed, Big Data systems for relatively small data sets. This behavior prompts us to put Big Data envy on hold once more, with some additional data points from our recent experience. The Apache Cassandra database promises massive scalability on commodity hardware, but we have seen teams overwhelmed by its architectural and operational complexity. Unless you have data volumes that require a 100+ node cluster, we recommend against using Cassandra; the operational effort needed to keep it running just isn’t worth it. While creating this edition of the Radar, we discussed several new database technologies, many offering "10x" performance improvements over existing systems. We’re always skeptical until new technology—especially something as critical as a database—has been properly proven. Jepsen provides rigorous analyses of how databases behave under difficult conditions, such as network partitions, and has found numerous bugs in various NoSQL databases. We recommend maintaining a healthy dose of skepticism and keeping an eye on sites such as Jepsen when you evaluate database tech.

  • As more organizations are choosing to deploy applications in the cloud, we're regularly finding IT groups that are wastefully trying to replicate their existing data center management and security approaches in the cloud. This often comes in the form of firewalls, load balancers, network proxies, access control, security appliances and services that are extended into the cloud with minimal rethinking. We've seen organizations build their own orchestration APIs in front of the cloud providers to constrain the services that can be utilized by teams. In most cases these layers serve only to cripple the capability, taking away most of the intended benefits of moving to the cloud. In this edition of the Radar, we've chosen to rehighlight cloud lift and shift as a technique to avoid. Organizations should instead look more deeply at the intent of their existing security and operational controls, and look for alternative controls that work in the cloud without creating unnecessary constraints. Many of those controls will already exist for mature cloud providers, and teams that adopt the cloud can use native APIs for self-serve provisioning and operations.
