Artificial intelligence: more important than fire?

Photo by Axel Buhrmann

By Axel Bührmann

While artificial intelligence (AI) has its naysayers as well as its evangelists, there’s no stopping the avalanche we’re going to see in the coming years, both on the work front and in our personal lives.

More theory than fact for some 50 years, through highs and lows, the AI genie has really come out of the bottle over the past few years, and the field is enjoying a renaissance brought about by the emergence of increasingly ubiquitous technologies: cloud computing, affordable wireless communications, the omnipresent smartphone and the Internet of Things (IoT).

AI’s recent evolution has been so dramatic that pundits say the field is experiencing something akin to the Cambrian explosion, the evolutionary event in which most major animal groups suddenly appeared in the fossil record. (Also see “Worldwide Spending on Cognitive and Artificial Intelligence Systems to Reach $57.6 Billion in 2021”)

It’s clear that AI will have massive social consequences, transforming modern life by reshaping everything from transportation and health to science and finance. Researchers at Oxford University and Yale University predict AI will outperform humans in a growing list of activities: translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049) and working as a surgeon (by 2053). The researchers believe there is a 50% chance of AI outperforming humans in all tasks within 45 years, and of it automating all human jobs within 120 years.

Indeed, Google CEO Sundar Pichai says AI is one of the most important things humanity is working on.

“It is more important than electricity or fire,” he adds. “It is fair to be worried about it. I wouldn’t say we’re just being optimistic about it, but we want to be thoughtful about it. AI holds some of the biggest advances we’re going to see.”

He has also described how AI is at the centre of Google’s strategy by saying that a key driver has been its long-term investment in machine learning and AI.

“Looking to the future,” Pichai stated, “we will move from mobile-first to an AI-first world.”

(Also see: “Why is AI important?”)

What is AI?

While science fiction often depicts AI as robots with human-like attributes, AI can incorporate anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Cisco defines AI as involving the development of computer systems able to perform activities that normally require human intelligence. “Such tasks include visual perception, speech recognition and decision-making. AI systems share common attributes: the ability to ingest data; the ability to adapt and react to data in their environment; and the ability to project multiple steps into the future. Machine learning, a particular application of AI, uses algorithms to give computers access to data so they can teach themselves.”


Artificial intelligence as it currently stands is known as narrow AI, or weak AI, because it performs narrow tasks such as facial recognition, internet search or driving a car. The long-term goal, however, is to create general AI, also known as AGI or strong AI. While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI promises to outperform humans at nearly every cognitive task. But that’s a long-term view, and one many industry insiders believe may only come to pass at the turn of the century.

AI is not a new concept. Although humans have been pondering it for centuries (think Prometheus and Frankenstein’s monster), AI as a field was born in 1956, when a think tank comprising a small group of mathematicians and scientists proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.

Now, more than six decades later, almost all the hallmarks of today’s technological moment (garrulous digital personal assistants like Siri and Alexa, genomic-research breakthroughs, instantaneous language translation, self-driving cars) have one major, if broad, thing in common at their base: AI.

And Microsoft optimistically points to a future where AI will help us do more with one of our most precious commodities: time. The company foresees that within 20 years, personal digital assistants will be trained to anticipate our needs, help manage our schedule, prepare us for meetings, assist as we plan our social lives, reply to and route communications, and drive cars.

Alongside Microsoft, the West’s largest tech firms, including Alphabet (Google’s parent), Amazon, Apple, Facebook, and IBM are investing huge sums to develop their AI capabilities, as are their counterparts in China.

IBM is betting $1 billion on its cognitive computing platform, Watson. Toyota is spending $1 billion on a lab to study autonomy. OpenAI, an initiative to build safe and general-purpose artificial intelligence, has been backed with another $1 billion. The Saudi-backed SoftBank Vision Fund, announced in October 2016, has around $100 billion to invest in technology companies, with a focus on AI and the Internet of Things.

But it’s not all plain sailing.

It has long been argued that AI will soon take jobs from humans, with technology figureheads such as Tesla chief Elon Musk and physicist Stephen Hawking sounding warning bells about the dangers that the development of AI could bring.

However, anyone with an interest in this emerging technology – and it is still very much emerging – seems to fall strictly into one of two categories: wildly pessimistic (“the robots will kill us all!”) or wildly optimistic (“we’ll live in a post-capitalist, robot-fuelled utopia!”).

Most controversies surrounding strong artificial intelligence — that can match humans on any cognitive task — centre around two questions: When (if ever) will it happen, and will it be a good thing for humanity? Techno-sceptics and digital utopians agree that we shouldn’t worry, but for very different reasons: the former are convinced that human-level artificial general intelligence won’t happen in the foreseeable future, while the latter think it will happen but is assuredly a good thing. The beneficial-AI movement feels that concern is warranted and useful, because AI-safety research and discussion now increases the chances of a good outcome. Luddites are convinced of a bad outcome and oppose AI.

Elon Musk: “With artificial intelligence, we are summoning the demon”

Stephen Hawking has stated the development of full artificial intelligence could spell the end of the human race, while Elon Musk says the technology is “potentially more dangerous than nukes.” He does maintain some investment in the field, just to keep an eye on its advancements.

“With artificial intelligence, we are summoning the demon,” Musk has said. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like—yeah, he’s sure he can control the demon. Doesn’t work out.”

No matter the recent advances, the purported singularity—when machines will be able to reason in ways surpassing the human mind—is still nowhere in sight. Even those most wary of AI’s advancement and future admit this in conversation sooner or later.

Where are we today?

At its simplest level, the blend of today’s massive computational power and the enormous storehouse of data that is the Internet has been the catalyst that moved the AI revolution from theory to world-changing practice. Many of AI’s technologies have come about thanks to the timely maturing of such enabling technologies. The first wave came with the birth and explosive growth of the Internet and social media, which saw staggering uptake over the last decade and made data, in the form of images and text, more abundant than once thought possible.

This suddenly accessible, easily downloadable wealth of data was a boon for AI, because it solved the long-standing problem of procuring enough test and training data to feed the algorithms on which AI depends.

Hard on the heels of that life-changing first wave came a second wave of new technologies: widespread, cost-effective wireless communications, the ubiquitous smartphone and the highly complementary Internet of Things (IoT), all of which enabled products, services and devices that generated even more potentially useful data.

Let’s not forget the emergence of cheap storage in the latter half of the last decade, which also led to the notion of Big Data, cloud storage repositories, or data lakes.

It’s in the cloud


Experts point out that arguably the most important precursor technology that has advanced AI development over the last decade and a half has been the emergence of cloud computing.

Cloud services emerged in the early to mid-2000s, with the birth of
• Infrastructure as a Service (IaaS),
• Software as a Service (SaaS), and
• Platform as a Service (PaaS).

These transformed how companies of every size, from the smallest to the biggest, manage their data centres, host their applications and integrate their IT departments into the business.

Cloud computing has played a massive role in the resurgence of AI over the last decade, because AI depends at its core on ready access to vast quantities of data, the lack of which had previously been a major restriction on research.

Hard work ahead for enterprises

The staggering pace of innovation over the past few years has predominantly come from small vendors, states Gartner, pointing at deep learning, natural-language processing (NLP) and computer vision as the leading areas of rapid technology advancement. This is where companies need to build knowledge, expertise and skills.

In addition, reveals Gartner, recent breakthroughs in machine learning, big data, computer vision and speech recognition have increased the commercial potential of AI.

However, be forewarned: Forrester Research states that the honeymoon for enterprises naively celebrating the cure-all promises of AI technologies is over. Forrester predicts that 2018 will be the year when a majority of enterprises start dealing with the hard facts: AI and all other new technologies like big data and cloud computing still require hard work.

Gartner agrees, and points out that AI requires new skills and a new way of thinking about problems. IT must own the strategy and governance of AI solutions. Although pilot AI experiments can start with a small investment, for full production rollout, the biggest area of investment is building and retaining the necessary talent.


These skills include technical knowledge in specific AI technologies, data science, maintaining quality data, problem domain expertise, and skills to monitor, maintain and govern the environment.

Market conditions for commercial success with AI technology are well-aligned, making AI safe enough for CIOs to investigate, experiment with and strategise about potential application use cases.

The research company adds that capabilities like voice recognition, NLP and image processing benefit from advances in big data processing and advanced analytical methods such as machine learning and deep learning.

Leading-edge AI technologies will play an increasingly important role in the top three business objectives often cited by CEOs — greater customer intimacy, increasing competitive advantage and improving efficiency.

Gartner advises CIOs to look for cloud SaaS applications that apply AI to these areas. Greater experience with AI solutions will help CIOs to build business cases and identify the limitations in current-generation technologies to understand skills needed to fill talent gaps.

And in your personal space?

By 2022, Gartner paints a scenario where personal devices will know more about an individual’s emotional state than his or her own family. AI is generating multiple disruptive forces that are reshaping the way we interact with personal technologies. (See “Artificial Intelligence Is a Game Changer for Personal Devices”.)

The current wave of emotion AI systems is being driven by the proliferation of virtual personal assistants (VPAs) and other AI-based technology for conversational systems. As a second wave emerges, AI technology will add value to more and more customer experience scenarios, including educational software, video games, diagnostic software, athletic and health performance, and the autonomous car.

Within two short years, Gartner predicts, 60 percent of personal technology device vendors will use third-party AI cloud services to enhance their functionality and services.

As mentioned above, cloud-based AI technologies are driving compelling user experiences on a variety of connected devices. Gartner says cloud offerings from the big tech players, such as Google, Microsoft, Amazon, Tencent, Baidu and IBM, are starting to proliferate due to their attractive cost model, easy-to-use integration and potential to create complex services.

A major catalyst for device vendors to use cloud AI services is the increased usage of VPAs and natural-language technologies, while the adoption of VPA-based, screenless devices such as Amazon Echo and Google Home is also on the rise, further increasing usage of cloud AI services.

 

Sources

  • When Will AI Exceed Human Performance? Future of Humanity Institute, Oxford University, and the Department of Political Science, Yale University
  • TIME: Artificial Intelligence: The Future of Humankind
  • House of Bots: Artificial Intelligence: Another Bubble or a Game Changer?
  • Futurism: Separating Science Fact From Science Hype: How Far Off Is the Singularity?
  • Future of Life Institute: Benefits and Risks of Artificial Intelligence
  • SAS: Artificial Intelligence History
  • Microsoft: The Future Computed: Artificial Intelligence and Its Role in Society
  • Gartner: 2018 Will Mark the Beginning of AI Democratization
  • Gartner: The CIO’s Guide to Artificial Intelligence
  • Gartner: Artificial Intelligence Is a Game Changer for Personal Devices
  • Alasdair Gilchrist: Thinking Machines (AI: The Path Toward Logical and Rational Agents)
  • Toby Walsh: It’s Alive!

Glossary of A.I. Terms

Artificial intelligence
A.I. is the broadest term, applying to any technique that enables computers to mimic human intelligence, using logic, if-then rules, decision trees and machine learning.

“High-level machine intelligence” (HLMI)
Achieved when unaided machines can accomplish every task better and more cheaply than human workers.

Machine Learning
The subset of A.I. that includes statistical techniques that enable machines to improve at tasks with experience.
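To make “improving with experience” concrete, here is a minimal, illustrative sketch (all data invented for the example): a one-nearest-neighbour classifier whose only “knowledge” is the labelled examples it has stored, so its answers near the decision boundary improve as it sees more of them.

```python
# Hypothetical toy task: label numbers as "small" or "large".
# A 1-nearest-neighbour classifier predicts the label of the
# closest example it has seen so far.

def predict(examples, x):
    """Return the label of the stored example nearest to x."""
    value, label = min(examples, key=lambda pair: abs(pair[0] - x))
    return label

labelled = [(1, "small"), (2, "small"), (25, "large"),
            (30, "large"), (9, "small"), (12, "large")]

few = labelled[:3]   # early experience: only far-apart examples
many = labelled      # later experience: includes borderline cases

print(predict(few, 11))   # "small" -- nearest stored example is 2
print(predict(many, 11))  # "large" -- nearest stored example is now 12
```

With only three examples the classifier misjudges the borderline value 11; once the examples near the boundary (9 and 12) are added, the same algorithm gets it right, with no change to the code, only to its experience.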
Deep Learning
The subset of machine learning composed of algorithms that permit software to train itself to perform tasks, like speech and image recognition, by exposing multi-layered neural networks to vast amounts of data.
Neural Networks or Neural Nets
Software constructions modelled after the way adaptable networks of neurons in the brain are understood to work, rather than through rigid instructions predetermined by humans.
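As an illustrative sketch (not any production framework), the simplest such construction is a single artificial “neuron”: a weighted sum of its inputs squashed through a sigmoid, with the weights adjusted by gradient descent rather than set by hand. Here it learns the logical AND function:

```python
import math

def sigmoid(z):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# One neuron: output = sigmoid(w1*x1 + w2*x2 + b).
w1 = w2 = b = 0.0

# Training data for logical AND: output 1 only for input (1, 1).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

lr = 1.0  # learning rate
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of squared error through the sigmoid.
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

for (x1, x2), target in data:
    out = sigmoid(w1 * x1 + w2 * x2 + b)
    print((x1, x2), round(out))  # rounded outputs match AND
```

No human wrote a rule saying “output 1 when both inputs are 1”; the behaviour emerges from repeatedly nudging the weights to reduce error. Deep learning stacks many layers of such units and trains them the same way, at vastly larger scale.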
Big Data
Extremely large data sets used for computational analysis, often by neural nets, to reveal patterns or trends.
Singularity
The hypothesized time/state at which superintelligent machines begin improving themselves without human involvement.
Natural-Language Processing
The computer processing behind speech-recognition technology, in which software recognises spoken sentences and converts spoken language into text.
Quantum Computing
A form of computing that combines digital computing with quantum physics. Quantum computers abide by principles such as superposition and use qubits, or quantum bits, which allow them to perform vast numbers of calculations simultaneously. For certain classes of problem, this promises speed-ups far beyond what classical computers can achieve.

RECOMMENDED REFERENCES

Organisations

  • Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.
  • Centre for the Study of Existential Risk (CSER): A multidisciplinary research centre dedicated to the study and mitigation of risks that could lead to human extinction.
  • Future of Humanity Institute: A multidisciplinary research institute bringing the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.
  • Partnership on AI: Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
  • Global Catastrophic Risk Institute: A think tank leading research, education, and professional networking on global catastrophic risk.
  • Organizations Focusing on Existential Risks: A brief introduction to some of the organizations working on existential risks.
  • 80,000 Hours: A career guide for AI safety researchers.

 

  • Source: Future of Life Institute
