This year XConf Europe is back in-person, in three cities, to host our annual technology event created by technologists for technologists.
Join us to hear insightful keynotes from local thought leaders and participate in a robust agenda of talks from Thoughtworks technologists. They’ll share first-hand experiences with emerging technology, insights on the latest trends and how, at Thoughtworks, we are making tech better, together.
Breakout sessions and networking will round out the event.
Tickets are available below. Get yours today.
Keynotes & tickets
Join us in Stuttgart, where Emily Gorcenski, Principal Data Scientist & Head of Data at Thoughtworks, will speak about how to get from data to decision.
Asim Hussain, Green Cloud Advocacy Lead at Microsoft and Chairperson at the Green Software Foundation, will keynote the Manchester event, sharing insights on the carbon score of your software.
Our Madrid event will be led by a keynote from Thoughtworks’ own Paulo Caroli, principal consultant, author, speaker and facilitator. He’ll discuss the journey from vision to successful teams and products.
Stuttgart: Emily Gorcenski
From Data to Decision
The average Fortune 500 company makes approximately 400 million decisions per day. While many of these are likely to be micro-decisions, a large number are business-critical: Which products should we launch? What markets should we go into and how? What are the right hiring and marketing strategies? If businesses are decision-making engines, data is the fuel. How can we use data to make better, higher quality decisions faster, and at scale?
Manchester: Asim Hussain
What is the carbon score of your software?
Software is a significant emitter of carbon emissions, and it’s growing fast. However, a piece of software is not a physical thing. So measuring software for carbon emissions in the same way you measure physical things often fails. We need to completely re-think how we measure emissions for software from the ground up. This talk will discuss a methodology for carbon scoring your application called the Software Carbon Intensity (SCI) Specification. The SCI is an effort by the Green Software Foundation to reach a cross-industry consensus on carbon scoring for software, for those in the data center space think of it as “PUE for Software”. We’ll also discuss the three types of actions you can take to improve your score.
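The SCI score described above combines operational and embodied emissions per functional unit. A minimal sketch of that calculation, assuming illustrative input values (the function name and example numbers are mine, not from the specification text above):

```python
def sci_score(energy_kwh, carbon_intensity_g_per_kwh, embodied_g, functional_units):
    """Software Carbon Intensity: SCI = ((E * I) + M) per R.

    E: energy consumed by the software (kWh)
    I: location-based marginal carbon intensity (gCO2eq/kWh)
    M: embodied emissions of the hardware, amortized to this workload (gCO2eq)
    R: the functional unit, e.g. number of API calls served
    """
    operational = energy_kwh * carbon_intensity_g_per_kwh
    return (operational + embodied_g) / functional_units

# Hypothetical workload: 1.2 kWh at 400 gCO2eq/kWh, 50 g embodied, 10,000 API calls
print(sci_score(1.2, 400, 50, 10_000))  # 0.053 gCO2eq per API call
```

Because the score is a rate rather than a total, the three kinds of improvement actions map directly onto the formula: use less energy (E), use cleaner energy (I) or use less hardware (M).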
Madrid: Paulo Caroli
Don't waste your time, your effort, your career. A journey from the vision to successful teams and products
“If you don't know where you're going, any road will take you there,” wrote Lewis Carroll in Alice in Wonderland. You don't want to work like Alice in Wonderland. You want to direct your time, your efforts and your career to achieve great success in your business! Let me share with you how to start with the vision, then move through OKRs, Scrum and the Minimum Viable Product to build products and solutions that your customers love.
How Zalando improved offer creation throughput by x100 using Akka Sharded Cluster
Rohit Sharma and Abdelrahman Barakat
Zalando is a leading online platform for fashion and lifestyle, operating in 23 countries, serving over 5,000 brands and growing continuously. This growth hit our systems first in 2018 during our biggest sales event, CyberWeek, when a system in the core platform became a bottleneck in processing Price, Stock and Product detail events, hampering the customer experience with delayed discounts on the shop. This talk is a story about replacing a low-throughput system with a high-throughput, low-latency system. Utilizing Akka Cluster Sharding, our team increased the system throughput by 20 times, from ~200 MiB/s to more than 4 GiB/s with 100,000 events per second, supporting Zalando in achieving its CyberWeek targets.
The four key metrics: Unleashed
Listed on the Thoughtworks Technology Radar under “adopt” since 2019, the four key metrics are transforming the face of software development. The idea behind them is really simple — optimize for lead time, deployment frequency, change failure rate and median time to restore — but getting them up and running, and then embedding them within your team and across your organization isn't so simple. Why? To begin with you'll have to answer questions like "when does lead time start and stop?", "which deployments do we count and which do we ignore? (Do infra builds count? What happens if you’re not in production?)" and "what makes a change failure and how do we tell when service is restored?". Then you have to gather all these stats automatically. I’ve rolled out these metrics for multiple companies. I’ll share my lessons learned while solving these problems and also tackle the far more important topic: how you can use all this data to make your teams awesome!
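The four metrics named above can all be derived from one record per deployment. A minimal sketch, assuming a hypothetical deployment log where each entry carries a commit time, a deploy time, a failure flag and a restore time (the data and field layout are mine for illustration):

```python
from datetime import datetime
from statistics import median

# Hypothetical log: (commit_time, deploy_time, failed, restored_time)
deploys = [
    (datetime(2022, 5, 2, 9),  datetime(2022, 5, 2, 15), False, None),
    (datetime(2022, 5, 3, 10), datetime(2022, 5, 4, 10), True,
     datetime(2022, 5, 4, 12)),
    (datetime(2022, 5, 5, 8),  datetime(2022, 5, 5, 20), False, None),
    (datetime(2022, 5, 6, 9),  datetime(2022, 5, 6, 11), False, None),
]

# Lead time: commit to running in production (median over all deployments)
lead_time = median(d - c for c, d, _, _ in deploys)

# Deployment frequency: deployments per calendar day over the observed window
days = (deploys[-1][1].date() - deploys[0][1].date()).days + 1
deploy_frequency = len(deploys) / days

# Change failure rate: share of deployments that caused a failure
failures = [(d, r) for _, d, failed, r in deploys if failed]
change_failure_rate = len(failures) / len(deploys)

# Time to restore: deploy of the bad change until service was restored
time_to_restore = median(r - d for d, r in failures)

print(lead_time, deploy_frequency, change_failure_rate, time_to_restore)
```

The hard part, as the abstract notes, is not the arithmetic but deciding what counts as a deployment, a failure and a restore before any of these numbers mean anything.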
Zero trust CI/CD pipeline
When it comes to enterprise security, zero trust should be the only acceptable solution. As part of this talk, I will describe the WHAT, WHY and HOW of zero trust. WHAT is zero trust? WHY is it essential for every enterprise? HOW can it be achieved in an automated CI/CD pipeline? The example pipeline will be a GitHub pipeline that deploys infrastructure to GCP using Infrastructure as Code (IaC) — Terraform. Security is no longer a choice. It's an essential part of any implementation. This talk will help developers write security-first CI/CD pipelines — an addition to their skillset. It will also encourage architects to keep security at the forefront when making architectural decisions. And last but not least, security practitioners will have interesting takeaways to bring back to their organizations to start thinking about zero trust for future implementations.
CaD (Code as Data): How data insights on legacy codebases can fill the knowledge gap in complex modernization projects
In complex legacy modernization projects, rebuilding company-wide knowledge about and around business processes is one of the most challenging tasks. Engaging business stakeholders and capturing their actual needs is paramount, but not always enough to surface all the underlying complex business logic and, most of all, to assess the impact of changes.
Explainable Artificial Intelligence and Continuous Delivery: Towards the intelligent organization
Machine Learning has great potential for improving services, products and processes paving the way towards AI-driven intelligent organizations. But the lack of explainability associated with some popular Machine Learning techniques — such as the Deep Learning black-box models — is a serious barrier to the evolution of this AI-driven approach and a major drawback for a more responsible and ethical AI. Explainable Artificial Intelligence (xAI) can be the key to solving these problems and, therefore, improving business adoption, regulatory compliance and patterns of human-machine collaboration. In this talk, we will review the latest achievements in Explainable Artificial Intelligence and explore how to integrate xAI techniques and Domain-Driven Design in a new approach to Continuous Delivery for Machine Learning (CD4ML).
CEOs don’t care about your infra project: Why they should and how to make them
Infrastructure, networking or “backend” teams have been around for a long time in most organizations. Despite the shift to platform or SRE teams with a bunch of promises (“speeding up delivery” or “more stability”), there is often an unending operational load that means they can’t see those promises through to outcomes. Then management asks the question: why should I invest more in this team? The CTO or head of engineering fights the platform team’s corner, but the focus typically ends up somewhere else. Over the past two years, I’ve worked with a bunch of infrastructure teams who were all trying to get more investment and interest in what they were doing. I made a lot of mistakes (some of which I’ll share!), but the big success was changing the perception of what those teams were doing. I’ll share how we made the highly repetitive toil work of those teams visible and shifted the narrative on what value they could really be adding to the whole business, using the very simple-sounding techniques of value-based goals and thin slicing (spoiler: it wasn’t very simple!)
A peek into observability from a tester's lens
The word “observability” is thrown around quite a bit these days. What does it mean? Is it just another new term for monitoring? In the current era, organizations are building applications with more complex architectures — blockchain, distributed systems and microservices. The job of maintaining these systems and ensuring they are working as expected has become a challenging task. Gone are the days when testers could rely on the UI to validate an application. Now it is all about what happens under the hood. I worked on a distributed system where no one had any idea of what was going on and why there were production issues. We had some monitoring and logging in place, but we had no clue where, how and what to look out for whenever there was a problem. Join this session, where I discuss my journey with observability. I will share how I discovered various insights about my system using this approach, how I learned the technique and how I implemented it within my engineering team.
When the data levee breaks: Managing cognitive load in data-intensive projects
Melania Sánchez Blanco and Arne Lapõnin
Modern data-driven organizations have many data pipelines carrying information that enables people to make critical business decisions. Development teams often struggle to balance keeping existing pipelines operational with fulfilling new product requirements. One of the key challenges is maintaining knowledge about the transformations, input data sources and output data sources. Apart from technological tools, such as data catalogs and data lineage solutions, there are some battle-worn techniques, such as tables and visual diagrams, that aid communication within a team. This talk will cover our experiences in trying to systematize the knowledge needed in a business-critical data engineering project, and the techniques and tools we have used to reduce the cognitive load of the team.
Modernization: Taming the legacy and keeping the new house clean
Modernizing a legacy estate is daunting and taking the first step is often the hardest: where do you start and how can you stay on track? Too often we see modernization leading to similar architectures being built, using newer technology but without delivering the intended benefits. To escape the gravitational pull of legacy, we’ve learned that it’s important to have discipline and that taking an opposite approach can help to deliver value early and incrementally. In this talk, I will take you on the journey of one of our clients, where we sliced off a user journey from a large legacy estate, and share how we built the foundation for modern architecture. I will share six guiding principles: three for the approach and three for the architecture, which can also help you to be set up for success.
Building in accessibility for broader value to all
Accessibility is not only the right thing to do to serve the whole population (who will all, at some point in their lives, be disabled) — there is a huge opportunity for businesses everywhere. I will make the case for pushing accessibility “left” in the product development/planning/strategy cycle — making sure that it isn’t a ‘nice to have’, an ‘add on’ or an ‘optional benefit’, but integral to design. This approach creates space for innovation and opens up products to the whole audience. It also reduces risk, time and cost.
Evolutionary Design Systems
In recent years, Design Systems have become increasingly important to manage design at scale and build cohesive UIs efficiently. In reality, however, many companies still struggle to establish Design Systems which get adopted across their product teams. As a result, designers and engineers end up reinventing the wheel over and over again — while customer experience and time to market take a hit. Why is that the case? Can it be fixed? And if so, how?
Some useful bits: A few solved development problems via Crypto
In this talk, we'll review some interesting problems that come up often in data engineering and software work and are "technically solved" by cryptographic protocols and methods. For example, how do I find out which users I have in common with another company without sending over their email addresses? Or how do I calculate a result across three devices that should not see each other's data? And how can I protect users on my platform when they are messaging one another? We'll learn just enough about the underlying protocols to figure out when we can use them and how to leverage them to do our work for us.
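The "result across three devices" example can be illustrated with additive secret sharing, one of the simplest multi-party computation building blocks (this sketch is my illustration of the general idea, not necessarily the protocol the talk covers):

```python
import random

P = 2**61 - 1  # a large prime modulus for the share arithmetic

def share(value, n=3):
    """Split `value` into n additive shares mod P.

    Any n-1 shares look uniformly random, so no subset short of all n
    reveals anything about the original value.
    """
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each device splits its private input and sends one share to each peer.
inputs = [42, 17, 99]                 # private values held by devices A, B, C
all_shares = [share(v) for v in inputs]

# Device i only ever sees column i: one share from each peer, never raw inputs.
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# Publishing the partial sums reveals only the total, not the inputs.
total = sum(partial_sums) % P
print(total)  # 158
```

Each device learns the sum (158) without any device ever seeing another's raw value; real deployments add authenticated channels and handle malicious parties, which this sketch does not.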