The future of data science – what’s on the horizon?

In the second instalment of his two-part blog series, Finn Wheatley, Director of Data Science, discusses what the future holds for data science.

It has, I believe, been clear for the past five or ten years that research advances in machine learning techniques (i.e. algorithmics) have been slowing down, something now acknowledged by some of the leading practitioners in the field.[1] Breakthroughs are still made, of course, but they are incremental, not transformative. The advances in data science over the past decade have largely been achieved by applying research breakthroughs made some two decades earlier. OpenAI's GPT-3, for example, has recently produced stunning results in natural language interpretation and generation, constructing an English language model with around 175 billion parameters. It has also been described, perhaps unkindly, as a very sophisticated auto-complete.[2] It fails to show the same remarkable results in areas demanding automated reasoning, which illustrates the limits of the sub-symbolic approach. The same has been noted in other areas that rely on deep learning, such as autonomous vehicles: the models constructed by deep learning are usually relatively fragile and do not generalise well.[3] The deep learning approach is also cost-prohibitive, sometimes even for the largest organisations.[4] Perhaps a truly transformative breakthrough that dramatically reduces compute costs, in an area such as quantum computing, could change this, but that must remain speculative for now.

The views of Stephen Hawking and Elon Musk notwithstanding, we can be reasonably confident that we are not going to see Skynet, C-3PO, HAL 9000, Cortana, or any other Artificial General Intelligence in the next few decades. However, this presents a new problem: what are we going to do without them, and where will the next breakthrough in data science come from?

AI + CI = AGI?

For a classically trained computer scientist, the obsession with machine learning, let alone artificial intelligence, has always seemed rather strange, almost off-balance: a fixation on one important part of a machine while ignoring the greater whole it sits within, a bit like trying to design an aeroplane with only wings but no engines or fuselage.

In the beginning (i.e. pre-2000), artificial intelligence meant something else: symbolic intelligence, or 'intelligent systems', driven by logical relations rather than data. What we now call artificial intelligence or machine learning was then called computational intelligence, or sub-symbolic intelligence, meaning it was driven by data, with no overarching logical model of how the world worked.

In the early 1980s, a form of (symbolic) AI called expert systems briefly rose to prominence. These seem fairly unimpressive today, consisting essentially of highly developed flow charts encoded in software to capture so-called expert knowledge. Yet in an early demonstration of their commercial value, Digital Equipment Corporation estimated it saved $40m in six years using just one expert system, called R1. The failure of these systems to live up to the hype surrounding them led to the 'AI winter' that set in around 1987, and the subsequent explosive advances in (machine-learning-driven, sub-symbolic) CI, now rechristened 'AI', over the 1990s meant symbolic AI systems never regained favour (the existence of the 'Symbolic Systems' major at Stanford is one of very few hold-overs).[5]
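
To make this concrete, the sketch below shows a toy forward-chaining rule engine of the kind these systems were built on. It is an illustrative Python sketch only; the facts and rules are invented and bear no resemblance to R1's actual knowledge base, which contained thousands of far richer configuration rules.

```python
# A toy forward-chaining rule engine in the spirit of 1980s expert systems.
# The facts and rules below are invented purely for illustration.

facts = {"order: vax-11/780", "memory: 2mb"}

# Each rule pairs a set of required facts with a fact to assert when they all hold.
rules = [
    ({"order: vax-11/780"}, "needs: unibus adapter"),
    ({"order: vax-11/780", "memory: 2mb"}, "needs: memory controller"),
    ({"needs: unibus adapter"}, "add to bill: unibus adapter"),
]

# Forward chaining: keep firing rules until no new facts are produced.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```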

However, these systems had one immensely powerful tool, albeit in prototype form: they could hold abstract representations of objects, combine them into structures such as taxonomies, and attach notions of truth and falsehood to statements about them. This allowed the evaluation of logical syllogisms, and thus the creation of new knowledge. This symbolic 'true AI' had a power that ML-driven systems did not: it could generalise very easily, because it was programmed to isolate exactly what mattered, infer from it, and throw away extraneous information.
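
A minimal sketch of that kind of inference is shown below: given a handful of hand-written 'is-a' relations, the program derives statements that were never stated directly, simply by applying the transitivity of the syllogism. The taxonomy here is invented for illustration.

```python
# Symbolic inference over a tiny taxonomy: from "all A are B" and "all B are C",
# conclude "all A are C". The relations below are invented for illustration.

is_a = {
    ("dog", "mammal"),
    ("mammal", "animal"),
    ("sparrow", "bird"),
    ("bird", "animal"),
}

def transitive_closure(facts):
    """Derive every is-a relation implied by transitivity."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

closure = transitive_closure(is_a)
print(("dog", "animal") in closure)      # True: new knowledge, never stated directly
print(("sparrow", "mammal") in closure)  # False: no rule supports it
```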

Looking forward – the next stage in data science and AI

I believe the next frontier we must conquer in the algorithmics of data science and artificial intelligence is to develop mechanisms to meld these two immensely powerful forms of understanding – (symbolic) logic and reason on one side and (sub-symbolic) data and experience on the other.

There are of course other areas, such as Bayesian networks, evolutionary computation, decision systems and cybernetics, where further integration with artificial intelligence will likely be a precondition for developing 'strong' or general AI. The central point is that a machine learning algorithm that can form generic abstract concepts from data, test them for truth against new input data, and apply them elsewhere would easily be the most transformative step data science could take in the coming decades. It could reduce the amount of data needed to 'learn' by multiple orders of magnitude and massively improve the potential to generalise data science models into new domains. Since the need for large amounts of high-quality data is one of the main drawbacks of the deep learning approach, this could be a major step toward Artificial General Intelligence.
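
As a very small-scale analogy (not the research frontier described above, and certainly not AGI), the sketch below uses scikit-learn to fit a shallow decision tree from data and then reads the learned structure back out as explicit if-then rules, illustrating in miniature what it means to move from data-driven learning to symbolic, human-inspectable knowledge.

```python
# A miniature illustration of moving between the sub-symbolic and symbolic modes:
# learn a model from examples, then extract it as explicit if-then rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# The fitted tree was learned from data, but it can be rendered as symbolic
# rules that a person, or another program, could inspect and reuse elsewhere.
print(export_text(tree, feature_names=list(iris.feature_names)))
```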

For those who think this all sounds a little too science fiction, it is important to point out that some of the world's best researchers are currently working on different strands of this problem. The 'godfather of deep learning', Geoffrey Hinton, is building what he calls 'capsule networks', designed to identify the different parts of an object in an image and the relationships between them; in other words, layering abstract knowledge into machine vision.[6] François Chollet, the inventor of the Keras library and an AI scientist at Google, has developed the Abstraction and Reasoning Corpus (ARC), a dataset of problems (similar to the Bongard problems found in IQ tests) designed to assess a machine's ability to reason on an even footing with a human, without requiring either extensive prior knowledge about the world (which humans possess but a computer lacks) or the numerous training examples deep neural networks (DNNs) typically require.[7] Very similar work is being done to develop AIs that can reason abstractly to solve CAPTCHAs.[8]
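
To give a feel for what such a benchmark asks of a machine, the toy task below mimics the shape of an ARC problem: a few input/output grid pairs from which a solver must infer the underlying transformation and apply it to an unseen input. The task and the candidate rule are invented here and are far simpler than real ARC problems.

```python
# An invented, much-simplified task in the spirit of ARC: infer the
# transformation from a few examples (here, mirroring each row), then
# verify it against a held-out test pair.
toy_task = {
    "train": [
        {"input": [[1, 0, 0]], "output": [[0, 0, 1]]},
        {"input": [[2, 3, 0]], "output": [[0, 3, 2]]},
    ],
    "test": [
        {"input": [[0, 4, 5]], "output": [[5, 4, 0]]},
    ],
}

def candidate_rule(grid):
    """A hypothesised transformation: reverse each row of the grid."""
    return [list(reversed(row)) for row in grid]

# Check the hypothesis against the training pairs, then the held-out test pair.
assert all(candidate_rule(p["input"]) == p["output"] for p in toy_task["train"])
assert all(candidate_rule(p["input"]) == p["output"] for p in toy_task["test"])
print("toy task solved")
```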

All of this suggests that the combination of abstract reasoning and observational data has the potential to provide the next transformative leap in our ability to solve more complex problems, build more sophisticated analytics, create a more intelligent world, and perhaps even a more human computer. Driven by companies at the very forefront of technology development, a new era of exciting developments in artificial intelligence may be just over the horizon.

At Whitehat Analytics, we provide large enterprises with the knowledge to use their data effectively and to support their transition to a data-driven future. Our whitepaper outlines five key steps to becoming a data-driven organisation; click here to download it.


About the author

Finn Wheatley, Director of Data Science

Finn has over a decade of experience working in lead data science and quantitative roles in both the public and private sectors. Following his undergraduate degree from King’s College London, Finn worked for several years in the hedge fund industry in risk management and portfolio management roles. Subsequent to an MSc in Computer Science from University College London, he joined the civil service and helped to establish the data science team at the Department for Work and Pensions (DWP), delivering innovative analytical projects for senior departmental leaders. Since joining Whitehat Analytics, he has been involved in establishing the data science team at EDF Energy.

References

[1] https://www.wired.com/story/prepare-artificial-intelligence-produce-less-wizardry/ and https://www.technologyreview.com/2017/11/30/147363/progress-in-ai-isnt-as-impressive-as-you-might-think/

[2] https://insidebigdata.com/2020/12/08/have-a-goal-in-mind-gpt-3-pegasus-and-new-frameworks-for-text-summarization-in-healthcare-and-bfsi/

[3] https://www.nature.com/articles/d41586-019-03013-5, https://www.economist.com/technology-quarterly/2020/06/11/driverless-cars-show-the-limits-of-todays-ai and https://www.wired.com/story/sobering-message-future-ai-party/

[4] https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-model/ and https://www.mic.com/p/artificial-intelligence-development-is-starting-to-slow-down-facebook-head-of-ai-says-19424331

[5] https://www.technologyreview.com/2019/01/25/1436/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/

[6] https://www.utoronto.ca/news/how-u-t-s-godfather-deep-learning-reimagining-ai and https://bdtechtalks.com/2020/03/02/geoffrey-hinton-convnets-cnn-limits/

[7] https://bdtechtalks.com/2019/12/03/francois-chollet-arc-ai-measurement/

[8] https://bdtechtalks.com/2020/11/16/captcha/