
ADOPT

  • Teams are pushing for automation across their environments, including their development infrastructure. Pipelines as code defines the deployment pipeline through code instead of configuring it in a running CI/CD tool. LambdaCD, Drone, GoCD and Concourse are examples of tools that support this technique. Configuration automation tools for CI/CD systems, such as GoMatic, can also be used to treat the deployment pipeline as code—versioned and tested (see the sketch below).
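
A minimal sketch of this technique using GoMatic's Python API to define a GoCD pipeline in code; the server address, repository URL and pipeline names are hypothetical, and the exact API may vary between GoMatic versions.

```python
from gomatic import GoCdConfigurator, HostRestClient, ExecTask

# Connect to a (hypothetical) GoCD server and define a pipeline in code,
# so the definition can be versioned and tested like any other source file.
configurator = GoCdConfigurator(HostRestClient("gocd.example.com:8153"))

pipeline = (configurator
            .ensure_pipeline_group("my-team")
            .ensure_replacement_of_pipeline("my-app")
            .set_git_url("https://github.com/example/my-app.git"))

# A single build stage with one job that runs the build script.
job = pipeline.ensure_stage("build").ensure_job("compile")
job.add_task(ExecTask(["./gradlew", "build"]))

# Push the updated configuration to the server.
configurator.save_updated_config()
```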

TRIAL

  • Companies have wholeheartedly embraced APIs as a way to expose business capabilities to both external and internal developers. APIs promise the ability to experiment quickly with new business ideas by recombining core capabilities. But what differentiates an API from an ordinary enterprise integration service? One difference lies in treating APIs as a product, even when the consumer is an internal system or a fellow developer. Teams that build APIs should understand the needs of their customers and make the product compelling to them. Usability testing and UX research can improve the design, reveal API usage patterns and help bring a product mindset to APIs. APIs, like products, should be actively maintained and supported and should be easy to use. They should have an owner who advocates for the customer and strives for continual improvement. In our experience, product orientation is the missing ingredient that makes the difference between ordinary enterprise integration and an agile business built on a platform of APIs.

  • In previous Radar issues we mentioned tools such as git-crypt and Blackbox that allow us to keep secrets safe inside the source code. Decoupling secret management from source code is our way of reminding technologists that there are other options for storing secrets: HashiCorp Vault, CI servers and configuration management tools all provide mechanisms for storing secrets that are not tied to the source code of an application. Both approaches are viable, and we recommend that you use at least one of them in your projects. A sketch of the decoupled approach follows.
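
A minimal sketch of reading a secret from HashiCorp Vault over its HTTP API at application startup, so the secret never lives in source control; the address, token source and secret path are hypothetical.

```python
import os
import requests

# Fetch a secret from Vault at runtime instead of committing it to the repo.
# Address and path are hypothetical; the token arrives via the environment.
VAULT_ADDR = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]  # injected by the platform, never committed

response = requests.get(
    f"{VAULT_ADDR}/v1/secret/myapp/database",
    headers={"X-Vault-Token": VAULT_TOKEN},
)
response.raise_for_status()

db_password = response.json()["data"]["password"]
```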

  • In a number of countries, we see government agencies seeking broad access to private, personally identifiable information (PII). The increased use of public cloud solutions makes it more difficult for organizations to protect the data entrusted to them by their users while also respecting all relevant laws. The European Union has some of the most progressive privacy laws, and all the major cloud providers—Amazon, Google and Microsoft—offer multiple data centers and regions within the European Union. Therefore, we recommend that companies, especially those with a global user base, assess the feasibility of a safe haven for their users' data by hosting PII data in the EU. Since we wrote about this technique in the last Radar, we have rolled out a new internal system that handles sensitive information relating to all our employees, and we have chosen to host it in a data center located in the European Union.

  • Working with legacy code, especially large monoliths, is one of the most unsatisfying, high-friction experiences for developers. Although we caution against extending and actively maintaining legacy monoliths, they continue to be dependencies in our environments, and developers often underestimate the cost and time required to develop against these dependencies. To help reduce the friction, developers have used virtual machine images or Docker container images to create immutable images of legacy systems and their configurations. The intent is to contain the legacy in a box that developers can run locally, removing the need to rebuild, reconfigure or share environments (see the sketch below). In an ideal scenario, teams that own legacy systems generate the corresponding boxed legacy images through their build pipelines, and developers can then run and orchestrate these images in their allocated sandbox more reliably. Although this approach has reduced the overall time spent by each developer, it has had limited success when the teams owning the downstream dependencies have been reluctant to create container images for others to use.
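
As a rough sketch, a developer might pull and run such a boxed legacy image locally with the Docker SDK for Python; the image name, port mapping and configuration are hypothetical.

```python
import docker

# Run a (hypothetical) boxed legacy system locally as an immutable image,
# avoiding the need to rebuild or reconfigure a shared environment.
client = docker.from_env()

legacy = client.containers.run(
    "registry.example.com/legacy-billing:2017-03",  # image published by the owning team's pipeline
    detach=True,
    ports={"8080/tcp": 8080},          # expose the legacy service to local code under test
    environment={"DB_MODE": "in-memory"},  # hypothetical config baked in for local use
)

print(legacy.status)
```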

  • Although much documentation can be replaced with highly readable code and tests, in a world of evolutionary architecture it's important to record certain design decisions for the benefit of future team members and for external oversight. Lightweight Architecture Decision Records is a technique for capturing important architectural decisions along with their context and consequences. Although these items are often stored in a wiki or collaboration tool, we generally prefer storing them in source control with simple markup; a sketch of one common format follows.
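
A minimal sketch of the format popularized by Michael Nygard: a short, numbered file that lives in source control next to the code. The decision shown is illustrative.

```
# 12. Use PostgreSQL for the orders service

Status: Accepted

Context: The orders service needs transactional guarantees, and the team
already operates PostgreSQL for two other services.

Decision: We will use PostgreSQL rather than introducing a new datastore.

Consequences: One less technology to operate; reporting queries that need
cross-service joins will have to go through the data pipeline instead.
```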

  • The rise of Progressive Web Apps (PWAs) is the latest attempt to bring back the mobile web in response to users' "app fatigue". Originally proposed by Google in 2015, PWAs are web applications that take advantage of the latest technologies to combine the best of web and native mobile applications. Using a set of open standard technologies such as service workers, the app manifest, and the cache and push APIs, we can create applications that are platform independent and deliver app-like user experiences. This brings parity to web and native applications and helps mobile developers reach users beyond the walled gardens of the app stores. Think of PWAs as websites that act and feel like native apps.

  • The combined use of InVision and Sketch has changed the way some people approach web application development. Although these are tools, it is really the technique of prototyping with InVision and Sketch that makes this blip significant. Creating rich, clickable prototypes as the starting point for implementing front-end and back-end behavior helps speed up development and eliminates churn in the implementation details. The combined use of these tools strikes the right balance between premature elaboration of visual detail and capturing early user feedback on the interactive experience.

  • A serverless architecture approach replaces long-running virtual machines with ephemeral compute power that comes into existence on request and disappears immediately after use. Our teams like the serverless approach; it's working well for us and we consider it a valid architectural choice. Note that serverless doesn't have to be an all-or-nothing approach: some of our teams have deployed a new chunk of their systems using serverless while sticking to a traditional architectural approach for other pieces. Although AWS Lambda is almost synonymous with serverless, the other major cloud providers all have similar offerings, and we also recommend assessing niche players such as webtask. A minimal sketch of a serverless function follows.
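
As a minimal sketch, an AWS Lambda function in Python is a single handler with no long-running server process; the event shape below assumes an API Gateway proxy integration, and the function's purpose is hypothetical.

```python
import json

def handler(event, context):
    """Entry point invoked by AWS Lambda; compute exists only for this call."""
    # Hypothetical payload: an API Gateway proxy event carrying a JSON body.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```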

ASSESS

  • Although many problems that people encounter with RESTful approaches to APIs can be attributed to the anemic REST antipattern, some use cases warrant exploration of other approaches. In particular, organizations that have to support a long tail of client applications (and thus a likely proliferation of API versions, even with consumer-driven contracts)—and whose APIs largely serve the endless-list style of activity feeds—may hit limits in RESTful architectures. These can sometimes be mitigated by employing the client-directed query approach to client-server interaction. We see this approach being used successfully in both GraphQL and Falcor, where clients have more control over both the contents and the granularity of the data returned to them (see the sketch below). This does put more responsibility onto the service layer and can still lead to tight coupling to the underlying data model, but the benefits may be worth exploring if well-modeled RESTful APIs aren't working for you.
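
As a rough sketch, a client-directed query lets the caller name exactly the fields it wants; the endpoint and schema below are hypothetical, using GraphQL's standard query syntax and HTTP transport.

```python
import requests

# The client, not the server, decides which fields come back.
# Endpoint and schema are hypothetical.
query = """
{
  journeys(from: "Manchester", to: "Glasgow") {
    departureTime
    platform
  }
}
"""

response = requests.post(
    "https://api.example.com/graphql",
    json={"query": query},
)
response.raise_for_status()
print(response.json()["data"]["journeys"])
```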

  • The container revolution instigated by Docker has massively reduced the friction in moving applications between environments but at the same time has blown a rather large hole in the traditional controls over what can go to production. The technique of container security scanning is a necessary response to this threat vector. Docker now provides its own security scanning tools, as does CoreOS, and we've also had success with the CIS Security Benchmarks. Whichever approach you take, we believe the topic of automated container security validation is of high value and a necessary part of PaaS thinking.

  • Technologies such as Amazon Alexa, Google Voice and Siri have dramatically lowered the bar for voice-based interaction with software. However, a more conversational style of input (voice or text) can be hard to build on top of many existing APIs, especially when it comes to a more stateful style of interaction where a follow-up interaction needs to be aware of the overall conversational context. In this style of interaction, for example, we'd like to inquire about trains from Manchester to Glasgow and then ask "What time is the first departure?" without having to restate the context of the conversation. Normally this context would be present in the initial response we send back to a browser, but in the case of voice interfaces we need to handle it elsewhere. Conversationally aware APIs are an example of the backend for frontends pattern where the front end is a voice or chat platform. This type of API handles the specifics of this style of interaction by managing conversation state while calling underlying services on behalf of the voice front end (see the sketch below).
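
A minimal sketch of a conversationally aware API as a BFF for a hypothetical journey-planning service: it keeps per-conversation state so a follow-up question can omit the original context. All names, routes and message shapes are illustrative.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Per-conversation state keyed by an id from the voice/chat front end.
# In production this would live in a shared store, not process memory.
conversations = {}

@app.route("/conversations/<conv_id>/query", methods=["POST"])
def query(conv_id):
    message = request.get_json()
    state = conversations.setdefault(conv_id, {})

    # First turn establishes the context ("trains from Manchester to Glasgow").
    if "from" in message and "to" in message:
        state["from"], state["to"] = message["from"], message["to"]
        return jsonify(reply=f"Looking at trains from {state['from']} to {state['to']}.")

    # Follow-up turn ("What time is the first departure?") relies on stored context.
    if "first departure" in message.get("text", "").lower() and "from" in state:
        departure = first_departure(state["from"], state["to"])
        return jsonify(reply=f"The first train leaves at {departure}.")

    return jsonify(reply="Sorry, I don't have enough context for that."), 400

def first_departure(origin, destination):
    # Placeholder for a call to the underlying timetable service.
    return "06:12"
```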

  • It has long been known that "anonymized" bulk data sets can reveal information about individuals, especially when multiple data sets are cross-referenced. With increasing concern over personal privacy, some companies—including Apple and Google—are turning to differential privacy techniques to improve individual privacy while retaining the ability to perform useful analytics on large numbers of users. Differential privacy is a mathematical technique that attempts to maximize the accuracy of statistical queries against a database while minimizing the chances of identifying individual records. These results can be achieved by introducing a small amount of statistical "noise" into the data (see the sketch below), though it's important to note that this is an ongoing research area. Apple has announced plans to incorporate differential privacy into its products—and we wholeheartedly applaud its commitment to customers' privacy—but the usual Apple secrecy has left some security experts scratching their heads. We continue to recommend Datensparsamkeit as an alternative approach: simply storing the minimum data you actually need will achieve better privacy results in most cases.
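
A minimal sketch of one standard mechanism (the Laplace mechanism) for a counting query: noise calibrated to the query's sensitivity and a privacy budget epsilon is added to the true answer. The numbers here are illustrative.

```python
import numpy as np

def private_count(true_count, epsilon, sensitivity=1.0):
    """Return a differentially private answer to a counting query.

    A single individual can change a count by at most 1 (the sensitivity),
    so Laplace noise with scale sensitivity/epsilon masks any one record.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: release how many of 10,000 users enabled a feature,
# with a (hypothetical) privacy budget of epsilon = 0.5.
print(private_count(4213, epsilon=0.5))
```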

  • We've seen significant benefit from introducing microservice architectures, which have allowed teams to scale delivery of independently deployed and maintained services. However, teams have often struggled to avoid the creation of front-end monoliths—large and sprawling browser applications that are as difficult to maintain and evolve as the monolithic server-side applications we've abandoned. We're seeing an approach emerge that our teams call micro frontends. In this approach, a web application is broken up by its pages and features, with each feature owned end to end by a single team. Multiple techniques exist to bring the application features—some old and some new—together as a cohesive user experience, but the goal remains to allow each feature to be developed, tested and deployed independently of the others. The backend for frontends (BFF) approach works well here, with each team developing a BFF to support its set of application features.

  • The adoption of cloud and DevOps, while increasing the productivity of teams that can now move more quickly with reduced dependency on centralized operations teams and infrastructure, has also constrained teams that lack the skills to self-manage a full application and operations stack. Some organizations have tackled this challenge by creating platform engineering product teams. These teams operate an internal platform that enables delivery teams to self-service deploy and operate systems with reduced lead time and stack complexity. The emphasis here is on API-driven self-service and supporting tools, with delivery teams still responsible for supporting what they deploy onto the platform. Organizations that consider establishing such a platform team should be very cautious not to accidentally create a separate DevOps team, nor should they simply relabel their existing hosting and operations structure as a platform.

  • Social code analysis enriches our understanding of code quality by overlaying developers' behavior onto a structural analysis of the code. It uses data from version control systems, such as the frequency and timing of changes and who made them. You can write your own scripts to analyze such data (see the sketch below) or use tools such as CodeScene. CodeScene can help you gain a better understanding of your software systems by identifying hotspots and complex, hard-to-maintain subsystems, revealing coupling between distributed subsystems through temporal coupling, and even offering a view of Conway's law in your organization. We believe that with technology trends such as distributed systems, microservices and distributed teams, the social dimension of our code is vital to a holistic understanding of our systems' health.
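
As a rough sketch of writing your own script, the following counts how often each file has changed in a Git repository—a simple proxy for hotspots, especially when combined with a complexity measure. The output limit is arbitrary.

```python
import subprocess
from collections import Counter

# List every file touched by every commit; frequently changed files
# are candidate hotspots, especially when they are also complex.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

changes = Counter(line for line in log.splitlines() if line.strip())

for path, count in changes.most_common(10):
    print(f"{count:5d}  {path}")
```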

  • The idea of virtual reality has been around for more than 50 years, and with successive advances in computing technology many ideas have been hyped and explored. We believe that we've now reached a tipping point: reasonably affordable consumer-oriented VR headsets reached the market last year, and modern graphics cards provide sufficient power to create immersive experiences with them. The headsets are mainly targeted at video game enthusiasts, but we're convinced that they'll open the door to many possibilities for VR beyond gaming. Teams without experience in building video games should not underestimate the effort and skill required to create good 3-D models and convincing textures.

HOLD

  • We're compelled to caution, again, against creating a single CI instance for all teams. While it's a nice idea in theory to consolidate and centralize continuous integration (CI) infrastructure, in reality we do not see enough maturity in the tools and products in this space to achieve the desired outcome. Software delivery teams that must use a centralized CI offering regularly experience long delays waiting for a central team to perform minor configuration tasks or to troubleshoot problems in the shared infrastructure and tooling. At this stage, we continue to recommend that organizations limit their centralized investment to establishing patterns, guidelines and support for delivery teams to operate their own CI infrastructure.

  • With the increasing popularity of the backend for frontends (BFF) pattern and of one-way data-binding frameworks like React.js, we've noticed a backlash against REST-style architectures. Critics accuse REST of causing chatty, inefficient interactions among systems and of failing to adapt as client needs evolve. They offer frameworks such as GraphQL or Falcor as alternative data-fetch mechanisms that let the client specify the format of the data returned. But in our experience, it isn't REST that causes these problems; rather, they stem from a failure to properly model the domain as a set of resources. Naively developing services that simply expose static, hierarchical data models via templated URLs results in an anemic REST implementation. In a richly modeled domain, REST should enable more than simple repetitive data fetching. In a fully evolved RESTful architecture, business events and abstract concepts are also modeled as resources, and the implementation should make effective use of hypertext, link relations and media types to maximize decoupling between services (see the sketch below). This antipattern is closely related to the Anemic Domain Model pattern and results in services that rank low on the Richardson Maturity Model. We have more advice for designing effective REST APIs in our Insights article.
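
As a rough illustration of the richer style, a resource representation can carry link relations that tell the client what it can do next, instead of forcing it to construct URLs. The resource, routes and links below are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders/<order_id>")
def get_order(order_id):
    # Rather than a bare data record, the representation carries hypertext:
    # link relations expose the next business actions, decoupling clients
    # from the server's URL structure.
    return jsonify({
        "id": order_id,
        "status": "placed",
        "total": "42.00",
        "_links": {
            "self": {"href": f"/orders/{order_id}"},
            "cancel": {"href": f"/orders/{order_id}/cancellation"},
            "payment": {"href": f"/orders/{order_id}/payment"},
        },
    })
```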

  • We continue to see organizations chasing "cool" technologies, taking on unnecessary complexity and risk when a simpler choice would be better. One particular theme is the use of distributed Big Data systems for relatively small data sets. This behavior prompts us to put Big Data envy on hold once more, with some additional data points from our recent experience. The Apache Cassandra database promises massive scalability on commodity hardware, but we have seen teams overwhelmed by its architectural and operational complexity. Unless you have data volumes that require a 100+ node cluster, we recommend against using Cassandra; the operational team you'll need to keep it running just isn't worth it. While creating this edition of the Radar, we discussed several new database technologies, many offering "10x" performance improvements over existing systems. We're always skeptical until new technology—especially something as critical as a database—has been properly proven. Jepsen provides rigorous analyses of how databases behave under difficult conditions such as network partitions and has found numerous bugs in various NoSQL databases. We recommend maintaining a healthy dose of skepticism and keeping an eye on sites such as Jepsen when you evaluate database tech.

  • We've long been advocates of continuous integration (CI), and we were pioneers in building CI server programs to automatically build projects on check-in. Used well, these programs run as a daemon process against a shared project mainline to which developers commit daily. The CI server builds the project and runs comprehensive tests to ensure the whole software system is integrated and in an always-releasable state, thus satisfying the principles of continuous delivery. Sadly, many developers simply set up a CI server and falsely assume they are "doing CI" when in reality they miss out on all the benefits. Common failure modes include running CI against a shared mainline but with infrequent commits, so integration isn't really continuous; running a build with poor test coverage; allowing the build to stay red for long periods; and running CI against feature branches, which results in continuous isolation. The ensuing "CI theatre" might make people feel good, but it would fail any credible CI certification test.

  • When enterprise-wide quarterly or monthly releases were considered best practice, it was necessary to maintain a complete environment for running testing cycles prior to deployment to production. These enterprise-wide integration test environments (often referred to as SIT or Staging) are a common bottleneck for continuous delivery today. The environments themselves are fragile and expensive to maintain, often with components that need manual configuration by a separate environment management team. Testing in the staging environment provides unreliable and slow feedback, and testing effort is duplicated with what can be performed on components in isolation. We recommend that organizations incrementally create an independent path to production for key components. Important techniques include contract testing, decoupling deployment from release, focusing on mean time to recovery and testing in production.

  • Back in the days when SOAP held sway in the enterprise software industry, the practice of generating client code from WSDL specs was an accepted—even encouraged—practice. Unfortunately, the resulting code was often complex, untestable, difficult to modify and frequently didn't work across implementation platforms. With the advent of REST, we found it better to evolve API clients that use the tolerant reader pattern to extract and process only the fields they need (see the sketch below). Recently we have observed a disturbing return to old habits, with developers generating code from API specifications written in Swagger or RAML—a practice that we refer to as spec-based codegen. Although such tools are very useful for driving the design of APIs and for extracting documentation, we caution against the tempting shortcut of simply generating client code directly from these specifications. The chances are that such code will be difficult to test and maintain.
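
A minimal sketch of a hand-rolled tolerant reader against a hypothetical customer API: the client reads only the fields it needs and ignores everything else, so the payload can grow without breaking it.

```python
import requests

def fetch_customer(customer_id):
    """Tolerant reader: depend only on the fields this client actually uses."""
    # Hypothetical endpoint; the response may contain many more fields,
    # and new ones may appear over time without breaking this code.
    payload = requests.get(
        f"https://api.example.com/customers/{customer_id}"
    ).json()

    return {
        "name": payload.get("name", ""),
        "email": payload.get("email"),  # optional in older API versions
    }
```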
