Advances in computing power, the volume of data available, and the speed and falling cost of data analysis have propelled artificial intelligence (AI) to the forefront of the next paradigm shift in technology. Alison Porter, Richard Clode and Graeme Clark, portfolio managers in Henderson's Global Technology Team, discuss this investment theme, its strong potential and where they believe the best opportunities lie.
The Henderson Global Technology Team's long-standing investment thesis is that technology will continue to be disruptive and take market share in the global economy. We have seen this play out through the advent of personal computing, the internet age, smartphones and more recently the move to cloud computing. The team believes artificial intelligence is the next major transformative force in the evolution of technology.
What is artificial intelligence?
AI can be loosely defined as the ability of machines to 'think'. This ranges from basic pattern recognition that mimics human responses, through machine learning that uses human reasoning as a guide, to more independent thinking. The latter is enabled by deep learning and neural networks1, which allow machines to write their own code and teach themselves with less human intervention.
We are now at a stage where AI is being more widely applied in search algorithms, language translation, robotics, autonomous driving, as well as voice and image recognition. AI is not just about robots defeating humans in high-profile games like 'Jeopardy!' and 'Go'; it is about how computing and data are being utilised and opened up - democratised - so that algorithms can be created, generating efficiency gains and breakthroughs in fields as wide-ranging as healthcare, banking, agriculture, retail and transportation.
Current drivers of AI
The opportunities and challenges of AI have long been debated in popular fiction, from Isaac Asimov's novels (1940s) through seminal Hollywood films such as Blade Runner (1982) and The Terminator (1984). While the technology has been around for a long time, we have only recently seen an inflection point in both AI capability and investment. It took more than 60 years for the television to be adopted by 80% of households; the internet took just over ten years. The pace of technology adoption is accelerating, and for AI in particular we believe this is happening due to a confluence of two main factors.
Chart 1: cost of data analysis is falling sharply
Note: FLOPS (floating-point operations per second) is a measure of compute performance speed; 1 GFLOPS equals 10^9 FLOPS.
Firstly, there is a vast amount of data available, created by the internet age and pervasive smartphones. Secondly, the cost of compute continues to collapse, enabling data to be analysed faster and more cheaply than ever before (chart 1). IBM estimates that 90% of the world's data was created in the last two years, while a gigaflop of compute (a measure of computer performance, eg by an algorithm or computer hardware) now costs only 8 cents, compared to $1.4 trillion back in 1961.
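The scale of that decline can be made concrete with simple arithmetic. A minimal sketch, using only the two dollar figures quoted above:

```python
# Rough arithmetic on the fall in compute cost, using the figures cited above.
cost_per_gflops_1961 = 1.4e12   # dollars per gigaflop in 1961, as quoted
cost_per_gflops_now = 0.08      # dollars per gigaflop today, as quoted

reduction_factor = cost_per_gflops_1961 / cost_per_gflops_now
print(f"Cost per gigaflop has fallen by a factor of {reduction_factor:.2e}")
# a reduction of roughly 1.75e13, ie more than ten-trillion-fold
```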
Strong potential for AI
We are still in the early stages of the evolution of AI technology. The pace of adoption is being accelerated by Moore's Law2 and by the application of cloud infrastructure - the remote delivery of IT, ie buying computing and storage from specialised service providers over the internet, and also a key theme within the Global Technology Strategy. While we have seen an inflection point in deep learning, AI use cases3 today remain niche and limited to specific, narrow problem solving.
Chart 2: brainpower is getting cheaper
However, the continuation of the exponential Moore's Law curve and the development of cloud infrastructure technology are driving costs down: while $1,000 bought the brainpower of a mouse in 2016, by 2023 the same $1,000 will be able to purchase the brainpower of a human, and by 2045 that of the entire human race (chart 2). Combined with ever more available data to learn from, this creates strong potential for the future capabilities of AI.
Investment opportunities in AI
Chart 3: investment areas in AI
Like mobile phones in the early part of the millennium, AI today is a nascent market, but the infrastructure and facilitation of AI are evolving rapidly. We see opportunities in a variety of areas, such as analysis software, cheap and fast compute power, data generation and automation tools, as well as a variety of new applications.
While there has been a lot of focus on robotics (which can be thought of as an automation tool), we view this as only one application of AI. Customer service is being transformed by chatbots, voice recognition and predictive software, while transportation is being rethought through ride sharing and autonomous driving, which uses deep learning to teach vehicles how to drive instead of relying on manually-coded algorithms.
The requirement for more data to be stored and analysed at faster speeds is creating many opportunities for AI application in software for companies such as Adobe, ServiceNow and Salesforce.com, as well as datacentre-focused semiconductor companies like Broadcom, Cavium and Xilinx. We view Alphabet (Google) as a key AI-enabling platform, as the company is already integrating AI technology into all of its services.
The potential for AI to disrupt a broad range of industries makes for a truly compelling investment opportunity. We believe this will continue to create opportunities for the Global Technology Strategy to invest in the longer-term beneficiaries of the AI trend, and that there are multiple areas of investment potential. Over time, as the development and usage of AI broaden, we expect the investable universe to expand further as current venture capital investments become the IPOs (initial public offerings) of the future.
1Deep learning: involves feeding a computer system a lot of data, which it can use to make decisions about other data. The data is fed through neural networks: logical constructions that ask a series of binary true/false questions of, or extract a numerical value from, all the data passing through them, and classify it according to the answers received.
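The footnote's description of a neural-network unit - extracting a numerical value from its inputs and answering a binary true/false question - can be sketched minimally in code. This is a toy illustration only; the weights and inputs are invented for the example and bear no relation to any real system:

```python
# Minimal sketch of a single neural-network unit ("neuron"):
# it extracts a numerical value from its inputs (a weighted sum plus bias),
# then answers a binary true/false question by applying a threshold.
def neuron(inputs, weights, bias, threshold=0.0):
    score = sum(x * w for x, w in zip(inputs, weights)) + bias
    return score > threshold  # classify according to the answer

# Hypothetical example: two input features with hand-picked weights.
# Weighted sum = 1.0*0.6 + 0.5*(-0.2) - 0.3 = 0.2, which exceeds 0.
print(neuron([1.0, 0.5], weights=[0.6, -0.2], bias=-0.3))  # True
```

In deep learning, many such units are stacked in layers and the weights are learned from data rather than set by hand.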
2Moore's Law: coined in 1965 by Intel co-founder Gordon E. Moore, it is the observation that the number of transistors that can fit onto a chip (aka integrated circuit) roughly doubles every two years.
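Doubling every two years is a compound-growth rule: after t years the transistor count scales by roughly 2^(t/2). A minimal sketch of that arithmetic (the starting count of one billion transistors is an invented illustration):

```python
def moores_law(start_transistors, years):
    """Project transistor count assuming a doubling every two years."""
    return start_transistors * 2 ** (years / 2)

# Hypothetical illustration: a chip with 1 billion transistors would,
# on this curve, hold about 32 billion a decade later (2**5 = 32).
print(moores_law(1e9, 10))  # 3.2e10
```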
3Use case: a software and system engineering term describing how users use a system to accomplish a particular goal.
References made to individual securities are for illustrative purposes only and should not constitute or form part of any offer or solicitation to issue, sell, subscribe or purchase the security.