Software Testing | Technology

DSLs for functional testing

Chad Wathington

Published: Jun 25, 2010

Most software professionals believe that testing software is essential to quality. Where people inside the industry differ is in how to accomplish that testing: strategies vary by the level of the application tested, the tools, the methodology, the amount of automation, and who performs the testing itself. The aspiration is to ship high-quality, bug-free software no matter how one gets there. However, the devil is in the details.

The promise of quality software

Central to creating a testing strategy that works are four essential questions:

  • How can tests be executed early and often enough to mitigate risk?
  • How can tests be maintained as long-lived, evolving, and reusable assets?
  • How can testing involve all the stakeholders in the software development process such that requirements are fully understood and tested?
  • How can the highest-value, most essential testing be identified?

One solution to the risk question is automation. With an automated test suite, an application can hypothetically be tested as frequently and rapidly as hardware constraints allow. And if the automation is built up as the application evolves, the test suite grows with the system under test (SUT). To date, however, automation has created additional problems, particularly in addressing the asset and stakeholder questions.

Automated tests are often difficult to maintain as an asset because they can be brittle. As an application changes over time, between releases or during the development cycle, the tests must change accordingly. Commercial tools offer little support for changing automated tests frequently and uniformly over time, so when tests break because of application changes, they are difficult to repair. Moreover, if automated tests at the GUI level are created by the typical record-and-playback mechanisms, they must be re-recorded, especially if the GUI changes substantially.

In terms of stakeholder inclusion, automated testing via traditional means typically falls short as well. Automated tests have a representation problem: tests are generally represented in ways that make sense to computers, not to subject matter experts or business analysts. In some cases, they have no useful representation or specification for non-technical users at all. Some commercial tool vendors try to address this problem by aiming their wares at business users, claiming to offer wizard-driven, "no code required" solutions. These tools suffer from the same issues as many rules engines: they are not intuitive for the technical users who also need the tests in order to troubleshoot, and business users often need substantial help to use them effectively.

For all its promise, automated testing isn't currently delivering the value it could, because it makes the asset and stakeholder questions harder to answer. Yet over the life of a software application, an automated test suite at the functional level can:

  • Reduce the cost and time of regression testing
  • Increase quality by freeing testers to focus on value-added activities
  • Reduce the defects that make it to production by providing frequent regression checks
  • Make requirements clearer by giving developers programmatic, strictly defined acceptance tests

Finding a way to maintain automated functional tests easily over time, while involving all levels of stakeholders, is key to the future of functional testing.

From brittleness to refactoring

Over the last several years, software developers have used a discipline called refactoring to improve code design and structure over time. Refactoring allows developers to make changes to software while minimizing the likelihood of introducing errors. Essentially, when developers refactor, they change the underlying structure of a piece of code without changing its external behavior. In other words, the intent of the code, what it does, stays the same, while the underlying implementation changes.
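
To make the idea concrete, here is a minimal sketch, not from the original article, using a hypothetical invoice calculation. Before and after an "extract method" refactoring, the external behavior is identical; only the structure changes:

```java
import java.util.List;

// Hypothetical domain types, for illustration only.
interface LineItem { double price(); int quantity(); }
interface Invoice { List<LineItem> items(); }

class BeforeRefactoring {
    // The intent (subtotal plus tax) is buried in one long method.
    double total(Invoice invoice) {
        double sum = 0;
        for (LineItem item : invoice.items()) {
            sum += item.price() * item.quantity();
        }
        return sum * 1.19; // tax rate hard-coded mid-calculation
    }
}

class AfterRefactoring {
    // Same external behavior after an "extract method" refactoring:
    // each step has a name, and the tax rule lives in exactly one place.
    double total(Invoice invoice) {
        return withTax(subtotal(invoice));
    }

    private double subtotal(Invoice invoice) {
        double sum = 0;
        for (LineItem item : invoice.items()) {
            sum += item.price() * item.quantity();
        }
        return sum;
    }

    private double withTax(double subtotal) {
        return subtotal * 1.19;
    }
}
```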

Refactoring has some interesting implications for automated functional testing. When an automated test changes, often the domain concepts involved do not. If the domain concepts remain the same, the intent of the test remains as well. For example, in a personal banking web application, we want to test that funds were transferred via the website. Although the GUI may change around the funds transfer activity, the fundamental steps remain the same. Testing intent is ultimately the external behavior of a test. In the same way that refactoring helps developers manage changes while maintaining intent, refactoring can help testers manage automated tests; after all, automated tests are code. At a high level, applying the same refactoring concepts to testing intent allows testers to substantially reduce the brittleness of automated testing.
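
A sketch of what this separation of intent from mechanics might look like, using a hypothetical Browser abstraction in place of a real GUI-automation library; every identifier here is illustrative:

```java
// Hypothetical driver abstraction standing in for whatever GUI
// automation library the team actually uses.
interface Browser {
    void open(String url);
    void type(String fieldId, String text);
    void click(String elementId);
    String textOf(String elementId);
}

// The test expresses intent only: transfer funds, then check the balance.
class FundsTransferTest {
    void transferredFundsShowUpInTargetAccount(Browser browser) {
        BankingActions banking = new BankingActions(browser);
        banking.login("alice", "secret");
        banking.transfer("checking", "savings", "100.00");
        assert banking.balanceOf("savings").equals("1,100.00");
    }
}

// GUI mechanics live here. When the screens change, only this class
// changes; the test above keeps expressing the same domain intent.
class BankingActions {
    private final Browser browser;

    BankingActions(Browser browser) { this.browser = browser; }

    void login(String user, String password) {
        browser.open("https://bank.example/login");
        browser.type("username", user);
        browser.type("password", password);
        browser.click("sign-in");
    }

    void transfer(String from, String to, String amount) {
        browser.click("transfer-tab");
        browser.type("from-account", from);
        browser.type("to-account", to);
        browser.type("amount", amount);
        browser.click("confirm-transfer");
    }

    String balanceOf(String account) {
        return browser.textOf("balance-" + account);
    }
}
```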

For example, let's assume we have an application where we change the login procedure from a standard username-and-password system to OpenID. The domain concept of logging in remains the same; only how we do it changes. If the automated tests were created via record and playback or simple scripting, we would have to fix a substantial portion of the test suite to accommodate this change, assuming a user needs to log in to use the system. However, if our test code is well structured and we have the right tools, we can refactor the suite and make the change in one place. The trick to creating a test suite that can be refactored and changed modularly is tool support. Refactoring was a niche practice until Integrated Development Environments (IDEs) gave developers convenient, automated mechanisms for it. Most current commercial testing tools barely support robust search and replace, let alone refactoring. Refactoring support, and specifically refactoring designed for automated testing, is fundamental to fixing brittleness.
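
Continuing the hypothetical sketch above, the OpenID switch would be absorbed by rewriting a single method; every test that expresses the logging-in intent is untouched:

```java
// Inside the hypothetical BankingActions class from the earlier sketch:
// switching to OpenID means rewriting only this one method.
void login(String user, String password) {
    // password is no longer used by the site; the signature is kept
    // so that none of the calling tests have to change.
    browser.open("https://bank.example/login");
    browser.click("sign-in-with-openid");
    browser.type("openid-identifier", "https://openid.example/" + user);
    // ...drive the OpenID provider's own approval pages here...
    browser.click("approve");
}
```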

Domain Specific Languages (DSLs)

"I believe that the hardest part of software projects, the most common source of project failure, is communication with the customers and users of that software. By providing a clear yet precise language to deal with domains, a DSL can help improve this communication." - Martin Fowler, Domain Specific Languages (working title)

Involving all the stakeholders in the testing process requires that subject matter experts, customers, testers, and developers communicate consistently and effectively. In the past, many software project teams relied on extensive requirements documentation to communicate across these groups. However, requirements in this context require a fair amount of translation. Subject matter experts (SMEs) and analysts must translate customer desires and needs into requirements. Developers must translate these requirements into code, and testers must independently translate the same requirements into tests. Finally, each of these translations must match the customers' original expectations, which may change over time. It's like the telephone game children play in primary school, except that the starting phrase is a moving target.

We could effect a substantial improvement in the translation problem if customers could express and test their own requirements, or acceptance criteria, themselves. At a minimum, customers should be able to review any translations that occur in the process and verify that they match their needs.

One means of ensuring that code across a project team is readable by everyone is a DSL. Domain-specific languages are small computer languages designed around a very specific purpose or domain. They are not general-purpose programming languages like Java; they are tailored to a narrow task. Depending on the implementation, they can also closely resemble written human language. Applied to automated functional testing, a DSL that expresses testing intent solves several problems.

A DSL for an SUT allows a team to standardize on a particular vocabulary for describing the domain. If a standard vocabulary is already in use, then using a DSL to express acceptance criteria minimizes translation errors. Furthermore, DSLs let non-technical stakeholders interact with the testing effort on more level terms. A properly designed DSL will, at a minimum, allow these stakeholders to read tests. Under the right circumstances, and with the appropriate skill level, domain experts and non-technical users can write tests as well.
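
As one hedged illustration, an internal DSL can be built in plain Java as a fluent API; the interfaces and vocabulary below are invented for the banking example, not taken from any particular tool:

```java
// Hypothetical fluent interfaces; a real implementation would drive
// the system under test behind these calls.
interface Banking {
    AccountSetup givenAccount(String name);
    Transfer transfer(String amount);
    BalanceCheck expectBalance(String account);
}
interface AccountSetup { void withBalance(String amount); }
interface Transfer { Transfer from(String account); void to(String account); }
interface BalanceCheck { void toBe(String amount); }

// The test reads close to the team's domain vocabulary, so SMEs and
// analysts can review it, while it remains ordinary executable code.
class TransferScenario {
    void fundsTransferUpdatesBothBalances(Banking banking) {
        banking.givenAccount("checking").withBalance("500.00");
        banking.givenAccount("savings").withBalance("1,000.00");

        banking.transfer("100.00").from("checking").to("savings");

        banking.expectBalance("checking").toBe("400.00");
        banking.expectBalance("savings").toBe("1,100.00");
    }
}
```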

Tool Support

Conceptually, we can address many of the problems with automated testing by creating DSLs that fit application domains and by refactoring our tests as the application changes. In practice, this is easier said than done. Many refactorings are impractical to perform manually, and creating a DSL requires a fair amount of developer time.

However, it is possible to build an integrated testing environment, an IDE for testers, that makes testing-oriented refactoring practical. It is also possible to simplify the DSL creation process along a few dimensions. In particular, if testers had enough tool support to create their own DSLs without substantive help from developers, testing would become more accessible to the whole team.

From Tool Support to Collaboration

Beyond sitting in a cubicle writing a DSL individually or refactoring one's own tests, the best way to deal with the increasingly complex task of testing modern software is to make it a group effort. Fundamentally, to deal with ambiguity, uncertainty, and translation errors, testing should encompass perspectives from the entire team.

Taking team involvement a step further, collaboration helps answer the question of which testing is essential. Determining the essential points of a testing strategy isn't necessarily straightforward, and frequent information sharing within a team makes these issues clearer. For instance, the choke points of an SUT today may not be the same tomorrow. The parts of the application that seem most complicated to end users, SMEs, analysts, or customers may be trivial for the developers. Similarly, pieces of the application with lower business risk may carry higher technical risk. When testing isn't just a function of the QA department, the differing perspectives on what is essential add up to a comprehensive picture of the application.

The best way to facilitate this collaboration is to build it right into the testing tool support itself. Thus, when the essential parts of the SUT change, everyone on the team can readily see the changes and adjust the testing strategy accordingly.

Announcing Twist

ThoughtWorks has released a testing tool, Twist, that addresses many of the issues discussed above. It provides facilities for creating DSLs, refactoring, and collaborating.
