This article presents a hypothesis on what the (not too distant) world of AI-assisted software development will look like. In a line, it reads something like this: the concepts governing software creation will stay the same, but the pipeline is going to look incredibly different. At almost every stage, AI will assist humans and make the process more efficient, effective and enjoyable.
Our hypothesis is supported by predictions that the AI industry's revenue will reach $1.2 trillion by the end of this year, up 70% from a year ago. Further, AI-derived business value is expected to reach $3.9 trillion by 2022. We have also factored in observations of three main themes over the last decade: compute power, data and sophisticated developer tools.
More compute power: Easy access to elastic compute power and public clouds has empowered developers, enterprises and tool creators to quickly run heavier analysis workloads through parallelization. According to IDC, cloud-based infrastructure spending will reach 60% of all IT infrastructure spending by 2020.
More data: Improved processing power will see digital leaders investing in better collection and utilization of data. 90% of the world's data was created last year, but utilization stands at just 1%; it's slated to grow to 3% or 4% by 2020.
The software creation process consists of three phases, which can be further split into nine different task categories. Interestingly, some of these categories have seen far more investment in AI-powered tooling than others. In the course of this article, let's discuss some instances where AI will assist technologists in software development by taking over data analysis and prediction. Such an evolution will give technologists more time to focus on the judgment- and creativity-related tasks that machines can't take on.
There is an increasing presence of what we call Intelligent Development Tools. We believe this is driven by the three themes above, along with the growing clout of developers, which has led dozens of startups to offer developer-focused services such as automated refactoring, testing, and code generation. The evolution of these tools can be grouped into three levels of sophistication.
The levels of sophistication
The first level focuses on automating manual tasks to increase the reliability and efficiency of software creation. For example, test automation reduces cycle time by parallelizing runs, which shortens feedback loops, while deployment automation improves reliability through repeatable scripts. At this level, however, it is still humans who analyze and act on the feedback.
The next level of sophistication covers tools that permit machines to make decisions based on fixed rules. Auto-scaling infrastructure is a good example: machines determine the compute power required to service the load an application is handling, while humans configure the bounds and steps within which that compute power can scale.
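The shape of such a rule-based decision can be sketched in a few lines. Everything below (the thresholds, step size and bounds) is hypothetical configuration, not any particular cloud provider's API: humans fix the rules, and the machine merely applies them to the observed load.

```python
def decide_instance_count(current, cpu_utilization,
                          min_instances=2, max_instances=20, step=2,
                          scale_up_at=0.75, scale_down_at=0.30):
    """Rule-based auto-scaling: the machine decides, but only within
    bounds and steps that a human configured up front."""
    if cpu_utilization > scale_up_at:
        return min(current + step, max_instances)
    if cpu_utilization < scale_down_at:
        return max(current - step, min_instances)
    return current

# Under heavy load, scale up by one step; at the ceiling, hold steady.
print(decide_instance_count(4, 0.90))   # 6
print(decide_instance_count(20, 0.90))  # 20
```

Note that the system can never do anything a human didn't anticipate: every threshold and bound is fixed in advance, which is precisely what separates this level from the self-learning one.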
The final level of sophistication will enable machines to evolve without human intervention: analyzing data, learning from it, and mutating or augmenting their own rules so that they can make increasingly complex decisions. We want to share a few ideas of how AI can augment the software development cycle.
The software development cycle
One of the most common approaches to building AI use cases is the neural network: a computer system loosely modeled on the human brain and nervous system. The popular approach involves training a single model whose intermediate processing happens across multiple neural-net layers, producing output directly from the input data. This approach is successful and provides very good results when large samples of labeled data are available. The challenge is that the internal learning process is not clearly explainable, which can make the model difficult to troubleshoot for accuracy.
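To make the "input straight to output" idea concrete, here is a minimal sketch of end-to-end learning: a tiny two-layer network trained on the XOR problem with plain NumPy. The dataset, layer sizes and learning rate are all toy choices for illustration; the point is that the hidden representations are learned rather than designed, which is exactly why they are hard to inspect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: XOR, a classic problem no single layer can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer. Its intermediate features emerge from training,
# not from human design -- the source of the explainability challenge.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    hidden = sigmoid(X @ W1 + b1)
    return hidden, sigmoid(hidden @ W2 + b2)

initial_loss = float(((forward()[1] - y) ** 2).mean())

learning_rate = 1.0
for _ in range(2000):
    hidden, out = forward()
    # Backpropagate the squared-error gradient through both layers.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0)

final_loss = float(((forward()[1] - y) ** 2).mean())
print(initial_loss, "->", final_loss)  # loss drops as the net learns
```

Nothing in the code says *how* XOR should be decomposed; the weights simply drift toward whatever internal representation reduces the error, given enough labeled examples.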
Ideation augmented: Take the example of an e-commerce website. Today, people analyze data to find where users drop off during the ordering funnel and come up with ideas to improve conversion. In the future, machines could blend usage analytics with performance data to determine whether slow transactions are causing the drop-offs. These machines could also identify the faulty code that, when fixed, would improve performance.
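A first step in that direction is purely statistical: correlate each funnel step's latency with its drop-off rate and flag outliers. The funnel data below is invented for illustration, but the shape of the analysis is what such a tool would automate before digging into the code.

```python
# Hypothetical funnel metrics: (step name, p95 latency in ms, drop-off rate).
funnel = [
    ("browse",   320, 0.10),
    ("cart",     450, 0.12),
    ("address",  380, 0.11),
    ("payment", 2900, 0.46),  # slow step with unusually high abandonment
    ("confirm",  310, 0.08),
]

def pearson(xs, ys):
    """Plain Pearson correlation, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

latencies = [lat for _, lat, _ in funnel]
dropoffs = [rate for _, _, rate in funnel]
r = pearson(latencies, dropoffs)

# Flag steps whose latency is far above the funnel median as suspects.
median = sorted(latencies)[len(latencies) // 2]
suspects = [name for name, lat, _ in funnel if lat > 3 * median]
print(round(r, 2), suspects)
```

A strong correlation plus a flagged step gives a human a ranked hypothesis ("payment is slow, and that's where we lose people") instead of a raw analytics dump.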
Testing augmented: Writing tests for legacy systems, even well-documented ones, is very hard. Automated test-creation tools that use AI to map out an application's functionality from usage and code analytics let teams quickly build a safety net around such legacy systems, allowing technologists to make changes without breaking existing functionality.
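One way such a tool could bootstrap a safety net is characterization testing: recorded production inputs are replayed, and whatever the legacy system does today becomes the expected output. The `legacy_discount` function and the recorded inputs below are invented for illustration.

```python
# A legacy function we dare not change without a safety net.
def legacy_discount(order_total, is_member):
    if is_member and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

# Inputs harvested from usage analytics (hypothetical log extract).
recorded_inputs = [(250.0, True), (250.0, False), (80.0, True), (101.0, True)]

# Characterization: capture the system's current behavior as the "golden"
# answers -- correct or not, this is the behavior users depend on.
golden = [legacy_discount(*args) for args in recorded_inputs]

def check_against_golden(fn):
    """Replay the recorded inputs against a (possibly refactored) fn."""
    return all(fn(*args) == expected
               for args, expected in zip(recorded_inputs, golden))

print(check_against_golden(legacy_discount))  # True: safety net in place
```

The AI's contribution would be upstream of this sketch: choosing *which* inputs to record so that the replayed set actually covers the application's real functionality.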
Maintenance augmented: A large part of maintenance-related cost today is spent on managing redundant features. Identifying these redundancies is a complex, error-prone process because people have to correlate data across multiple sources. AI tools that connect and cross-reference data across sources could automatically flag unessential features and their associated code.
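The cross-referencing itself is mechanical once the sources are joined on a common feature key. The three data sources and their numbers below are hypothetical; the rule (barely used, no support demand, yet still carrying code) is one plausible redundancy signal among many.

```python
# Hypothetical signals from three sources, keyed by feature name.
click_events = {"wishlist": 12000, "gift_wrap": 3, "compare": 9500}
support_tickets = {"wishlist": 40, "gift_wrap": 0, "compare": 12}
code_references = {"wishlist": 18, "gift_wrap": 27, "compare": 22}

def redundant_features(clicks, tickets, refs, min_clicks=50):
    """Flag features that are barely used, generate no support demand,
    and yet still carry code references (i.e. ongoing maintenance cost)."""
    return sorted(
        feature for feature in clicks
        if clicks[feature] < min_clicks
        and tickets.get(feature, 0) == 0
        and refs.get(feature, 0) > 0
    )

print(redundant_features(click_events, support_tickets, code_references))
```

A human still makes the retirement call; the tool's value is in doing the tedious joining and flagging that today makes the process error-prone.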
Given how dynamic the software development world is, here's our recommendation for how to prepare and focus your efforts -
Recognize and leverage elastic infrastructure, which provides the ability to add and remove resources 'on the go' to handle variations in load
Equip your teams to strategically collect and process data, an invaluable asset whose volume will only increase given the prevalence of emerging tech like voice, gesture, etc.
Include a stream within your investment strategy that grows AI-assisted software creation: both rule-based intelligent tools and self-learning tools.