
Artificial Intelligence: Silicon Valley tech giants continue their race for AI advantage

Published on 11th Nov 2016

Artificial Intelligence (AI), an idea first conceived in the 1950s and a reliable sci-fi staple in the backlots of Hollywood studios ever since, is now one of the hottest topics in Silicon Valley. Whilst the concept is notoriously nebulous and difficult to define, AI is fast finding its way into our social networks, homes and business tools, so it is safe to say that AI, in some form or other, has very much arrived on the scene.

Many start-ups are investing heavily in its development. Our office neighbour, for example, has developed an AI bot for his business, Codify Academy – a relatively simple time-saving tool. But it isn’t just the disruptors pouring their sweat, tears, ones and zeros into the area: the leviathans of the technology world (both hardware and software manufacturers) have also been investing heavily in AI for a number of years.

Although AI is “still rather nascent” even for some major players in the tech industry (to quote Diane Bryant, executive vice president and general manager of Intel’s Data Center Group), there is no doubt that those players see it as a potentially huge growth area where they want to be first at the watering hole. Companies such as Facebook, Microsoft and Google are making major forays into AI, and with news like this coming out every day it can be difficult to keep track.

So, in an attempt to wrap our collective heads around it all, we have tried to provide a (relatively) succinct bird’s eye view of what Silicon Valley is up to in the AI world. In an industry which moves faster than Usain Bolt, we take a snapshot of what the technology cognoscenti are doing when it comes to AI, first considering the brainpower the hardware players are developing, and then turning to the software which is sparking that brainpower into life.

What have the hardware manufacturers been up to?

Intel

Starting in the world of hardware manufacturers, just recently (August 9) Intel, the world’s largest chip manufacturer, acquired Nervana Systems, a deep learning AI start-up founded in 2014. The deal value was estimated at over $400 million and sent ripples throughout Silicon Valley. Nervana has developed an optimized software and hardware stack for deep learning – an AI process which involves training artificial neural networks and brain-inspired algorithms to analyze and gain insight from a wide variety of data.
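
For readers who want a feel for what “training an artificial neural network” actually involves, the toy sketch below (in Python, using only the numpy library) shows the core loop: a tiny network makes predictions, measures its error, and repeatedly nudges its internal weights to reduce that error. It is purely illustrative – the data, network size and learning rate are invented for the example and bear no relation to Nervana’s or Intel’s actual technology.

```python
# A toy neural network learning the XOR function by gradient descent.
# Purely illustrative: real deep learning systems train far larger
# networks on far larger datasets, usually on specialised hardware.
import numpy as np

np.random.seed(0)

# Training data: inputs and the XOR outputs we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units, with randomly initialised weights.
W1 = np.random.randn(2, 4)
W2 = np.random.randn(4, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: work out how each weight contributed to the error.
    grad_output = (output - y) * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden

    # Update the weights a little in the direction that reduces the error.
    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1

# After training, the predictions should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))
```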

Intel’s purchase of Nervana will advance Intel’s AI portfolio and enhance the deep learning performance of its latest generation of high-performance computer processors. To some, this represents a move by the microprocessor manufacturer to challenge companies such as Nvidia, a leader in graphics processing units (GPUs) for servers, AI and deep learning, as well as bolstering Intel’s credentials in the high-performance computing market. According to Bryant, Nervana’s “IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in the field of AI.” Intel clearly sees the integration of machine learning technologies into its data center management as key to ensuring future performance and processing capability.

Intel has made a number of other acquisitions recently. In September it announced that it is to acquire computer vision firm Movidius to boost its RealSense technology and also help its development of deep learning solutions.

To companies like Intel, “AI is embedded in many aspects of our life. AI is all around us, enabling speech recognition, image recognition, fraud detection, and self-driving cars. Encompassing compute methods like advanced data analytics, computer vision, natural language processing and machine learning, artificial intelligence is transforming the way businesses operate and how people engage with the world,” said Bryant.

IBM

We couldn’t write a piece on AI without mentioning what IBM has been up to with Watson – IBM’s cognitive AI supercomputer. Watson itself, a computer system which can answer questions given to it in natural human language, was first dreamed up by IBM researchers over a decade ago and then developed as part of its DeepQA project. According to its developers, its cognitive computing capabilities allow it to interpret and contextualise relatively unstructured data. Learning over time, Watson’s recommendations apparently improve with age (although you won’t see us buying its cookbook just yet).

Watson has come on in leaps and bounds since winning the TV show Jeopardy!, and now has a huge number of commercial applications, perhaps most notably a clinical decision support system for use by the medical profession. Watson takes symptomatic information from clinicians and combines it with mined patient data to find fact patterns relevant to a particular patient.
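
By way of a deliberately naive illustration of the kind of reasoning such a system performs – and emphatically not IBM’s actual method – the short Python sketch below scores a handful of made-up conditions against reported symptoms and a patient’s history. All data, names and weightings are invented.

```python
# A toy "clinical decision support" scorer: rank possible conditions by
# how well they match the reported symptoms and the patient's record.
# Entirely hypothetical - this is not how IBM Watson works, just an
# illustration of combining clinician input with mined patient data.

CONDITION_SYMPTOMS = {
    "influenza": {"fever", "cough", "fatigue"},
    "migraine": {"headache", "nausea", "light sensitivity"},
    "anaemia": {"fatigue", "dizziness", "pale skin"},
}

def rank_conditions(reported_symptoms, patient_history):
    """Score each condition by symptom overlap, boosted by prior history."""
    scores = {}
    for condition, symptoms in CONDITION_SYMPTOMS.items():
        overlap = len(symptoms & reported_symptoms) / len(symptoms)
        history_boost = 0.2 if condition in patient_history else 0.0
        scores[condition] = round(overlap + history_boost, 2)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # Example: a patient reporting fatigue, fever and cough, with a
    # (fictional) history of anaemia.
    print(rank_conditions({"fatigue", "fever", "cough"}, {"anaemia"}))
```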

In August 2016, IBM announced that it will be using Watson for weather forecasting, analysing data from over 200,000 weather sources under a project named Deep Thunder (perhaps in homage to Deep Blue, the IBM chess computer which beat Garry Kasparov). Back in the world of healthcare, IBM Watson Health has just launched a new company, Pensiamo (in partnership with the University of Pittsburgh Medical Center, UPMC), which will aim to help healthcare operators streamline and manage their supply chains and services. IBM Watson Health, launched in 2015, has entered into many partnerships with major hospitals and pharmaceutical companies and made acquisitions valued at over $4bn.

IBM doesn’t just have its AI fingers in healthcare. Its security division announced this year that it is collaborating with a number of universities to tackle cybercrime, using Watson Cyber Security, a new cloud-based version of the company’s cognitive technology. Watson Cyber Security will learn by analyzing and processing huge amounts of security reports and unstructured data fed to it by students.

IBM Watson itself has made AI-based acquisitions as well, purchasing Cognea, an AI-based virtual assistant (think Apple’s Siri). IBM appears to be strategically shifting from computer hardware to services in an attempt to improve its revenue streams – bringing us on to our next group of tech giants…

What about the software guys?

Google

Google and Alphabet have been extremely acquisitive in recent years, although they have stayed steadfastly silent about exactly what they are up to. To some, Google is currently the dominant power when it comes to AI.

Perhaps its most famous acquisition was DeepMind, a UK AI start-up which Google splashed out a cool $500 million for in 2014. A prerequisite for DeepMind to complete the deal was that Google set up an AI ethics board, which remains one of its most closely guarded secrets (Google refuses to divulge the board’s membership). DeepMind has developed a neural network that learns how to play computer games in a way similar to that of the human brain, and shot to fame recently when its AlphaGo program beat one of the world’s best human players at the game of Go (a huge step on from Deep Blue’s victory over Garry Kasparov at chess in 1997).

Google has also announced that it has developed a new kind of chip, the “tensor processing unit” (TPU), which is tailor-made for machine learning. According to Google, TPUs offer “advanced acceleration capabilities” for workloads like Google’s TensorFlow, its open-source machine intelligence and deep learning framework, which is used by a large number of Google products.
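
For a flavour of what TensorFlow code actually looks like, here is a minimal, purely illustrative example in Python. The library describes computations over “tensors” (multi-dimensional arrays) and leaves it to the runtime to execute them on whatever hardware is available – CPU, GPU or, inside Google, TPU. The numbers below are invented for the example.

```python
# A minimal TensorFlow example: a single linear layer applied to a
# tiny batch of data. Illustrative only - the values are made up.
import tensorflow as tf

inputs = tf.constant([[1.0, 2.0],
                      [3.0, 4.0]])        # 2 examples, 2 features each
weights = tf.constant([[0.5],
                       [-0.5]])           # 2 features -> 1 output

# Matrix multiplication is the workhorse operation that chips such as
# TPUs are designed to accelerate.
outputs = tf.matmul(inputs, weights)

print(outputs.numpy())                    # [[-0.5], [-0.5]]
```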

Google has just recently announced Pixel, its long-rumoured (and much awaited) foray into total mobile hardware and software integration. Google Assistant (which replaces Google Now) is its artificially intelligent digital helper/concierge, or as Google says ‘your own personal Google’. Google Assistant interprets conversations and taps into Google’s own huge collection of data as well as user-generated data, creating a personalized digital ‘PA’ that makes use of Google’s entire ecosystem.

The voice assistant is baked into Google Pixel and Google Home (Google’s Alexa equivalent), creating hardware portals to its AI based platform. While the Pixel launch event seemingly focused on Google’s move into hardware, many felt that the real focus was on the AI based assistant that sits behind it.

Microsoft

Microsoft has Cortana, an AI-powered information navigation and personal organisation tool. Like Google Now, Amazon’s Alexa and Apple’s Siri, Cortana is a digital personal assistant – a technology which all of these companies appear to regard as the most important development they are working on at the moment.

When recently announcing its new “conversation as a platform” strategy at a developer conference, Microsoft’s CEO Satya Nadella stated that he believes that “human language is the new user interface layer” which will be “applied to all computer interfaces”, perhaps signalling a new era of internet accessibility and a move away from text-based internet access via Google’s web browser.

In August 2016, Microsoft announced that it has agreed to buy Genee, an AI personal assistant app built on natural language processing, which will be used to enhance Cortana and its Office 365 capabilities. In February, Microsoft also snapped up an AI keyboard app called SwiftKey (reportedly for up to $250m in cash).

Facebook

Facebook is a perfect example of a company which is now successfully expanding its AI capabilities. “Facebook is in a unique position that we can build an AI component and put this in the loop of human interaction and our system is able to learn from humans,” said Joy Zhang, a software engineer at Facebook Applied Machine Learning. Earlier this year Facebook released its machine learning platform, FBLearner Flow, which allows developers without specialised AI expertise to build products that use machine learning.

In April, at the F8 developer conference, Facebook announced that it would open its Messenger Platform, enabling businesses and developers to build bots for its Messenger app. For those unfamiliar with the concept, chatbots are computer programs powered by AI which can respond to human requests in an intelligent manner. Just like a Google web search, a request to a chatbot can be about anything – from booking your hotel, to speaking with your bank, to confirming the order of your new robot vacuum cleaner. Facebook has released these tools (including certain Facebook APIs and its natural language bot engine), which are now readily available and enable businesses to access Facebook’s ecosystem and connect with their customers and clients more easily. Forbes recently reported that more than 11,000 chatbots have already been built and over 25,000 developers have signed up to the platform’s bot engine. As we mention above, our next-door neighbours have designed a chatbot (which they refer to as a conversational AI…with a personality) for their Codify Academy.
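
To make the chatbot idea a little more concrete, the sketch below (in Python) shows the basic pattern in heavily simplified form: an incoming message is mapped to an intent, and the bot sends back a canned reply. The keyword rules, intents and function names are invented purely for illustration – real bot platforms, Facebook’s included, rely on trained natural language models rather than keyword matching.

```python
# A toy chatbot "brain": map a user's message to an intent, then reply.
# Hypothetical and heavily simplified - not Facebook's actual API.

INTENT_KEYWORDS = {
    "book_hotel": ["hotel", "room", "stay"],
    "check_balance": ["balance", "account", "bank"],
    "order_status": ["order", "delivery", "vacuum"],
}

REPLIES = {
    "book_hotel": "Sure - which city and which dates?",
    "check_balance": "I can help with that. Please confirm your identity first.",
    "order_status": "Your robot vacuum cleaner is on its way!",
    None: "Sorry, I didn't catch that. Could you rephrase?",
}

def detect_intent(message: str):
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return None

def handle_message(message: str) -> str:
    """What a platform webhook would call for each incoming user message."""
    return REPLIES[detect_intent(message)]

if __name__ == "__main__":
    print(handle_message("I'd like to book a hotel in San Francisco"))
    print(handle_message("Where is my vacuum order?"))
```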

Facebook has also decided to open-source a set of computer vision software tools that can identify both the variety and the shape of objects within photos. The tools, developed by the Facebook AI Research (FAIR) team, are called DeepMask, SharpMask, and MultiPathNet, and all three work in concert to help break down and contextualize the contents of images.

In a recent interview with TechCrunch, Facebook’s Director of Engineering for Applied Machine Learning Joaquin Candela said that “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.” According to Candela, 25 percent of Facebook engineers have used its AI platform to build features and carry out their responsibilities. This AI helps rank news stories and automatically write closed captions for video advertisements (increasing their average viewing time by 12 percent).

Facebook recently gave academic institutions 22 servers specifically tailored for AI research, the designs for which were released on an open-source basis at the end of 2015. David Marcus, Facebook’s head of Messenger, said bots could be a “new paradigm”, creating new companies in the same way apps have generated an entire on-demand economy, with companies such as Uber and Just Eat.

Amazon

Last, but by no means least, Amazon is no longer just the game-changing e-commerce platform that emerged a number of years ago, nor is it just a massive media and content curation platform or a cloud computing company.

Meet Amazon’s Echo, a standalone voice-enabled wireless speaker system powered by its Alexa Voice Service (named after the library of Alexandria), an AI-powered personal assistant with great potential in the smart-home world (something which, by the way, Mark Zuckerberg has made his personal goal to crack this year). Amazon is perhaps the first company to place its “AI” into a freestanding product, one which already integrates with a range of in-home technology, including Sonos and Google’s Nest (not to mention ordering your Uber for you).

Amazon, like Facebook, has also unveiled – and given away – DSSTNE (not the catchiest of acronyms, although it is pronounced “destiny”), an open-source AI framework built for deep learning. DSSTNE was released on GitHub in May this year, and its ability to train models across multiple graphics processing units (GPUs) at once has already turned some heads when compared with Google’s TensorFlow.

Interestingly, digital assistants seem to be making a (courageous) move away from the previous dominance of mobile phones, paving the way for multi-portal systems which allow users to interact both on the move and in their homes. “Five years ago, if we were talking about this, there was the belief that the phone would be the interface to everything,” says Alan Black, a computer scientist at Carnegie Mellon University’s Language Technologies Institute. It seems that this is no longer the case, as a number of tech companies are making a play to put interfaces everywhere you are – whether that is in your kitchen, living room, car or back pocket.
As AI products become less abstract and have increasing day-to-day applications, these devices may become mere conduits delivering a seamless AI experience to consumers.

Afterthoughts

In case you haven’t spotted the pattern above, it appears that the major global tech companies intend to dominate this market, and are acquiring AI businesses that could otherwise pose a disruptive threat in the future. The thought behind this strategy may be “if you can’t beat them, join them…or just acquire them”.

The sums being invested in this technology and the increasing resources allocated to AI are testament to its wide range of capabilities and its potential to have a huge impact on the technology sector.

It isn’t just the tech giants trying to get a slice of this blackbird pie. Etsy, the peer-to-peer e-commerce website focused on handmade or vintage items and supplies, recently acquired Blackbird Technologies to bring AI to its search engine.

We will be keeping a close eye on all developments AI-related while considering the legal and regulatory implications associated with it. For now though, it appears that Steven Spielberg and James Cameron were right – AI and robots are coming, although thankfully not, for the moment at least, in Arnold Schwarzenegger form.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
