For many, the term “artificial intelligence” still conjures images of the robot armies seen in movies such as “I, Robot,” or of Ava, the robot depicted in “Ex Machina,” capable of human-level consciousness and social awareness.
That kind of “consciousness” is, in fact, one of the biggest challenges scientists working on AI face. We’re not there yet, but the applications of AI go far beyond an interactive, humanoid robot.
They include current and developing mobile app technology. One of the keys to developing that consciousness lies in pattern recognition: what does that facial expression or that tone of voice indicate? Apps use pattern recognition too, just not at the level of sophistication that human-like consciousness requires.
So, how is AI being used by mobile apps? Let’s take a look:
Where Is AI Going for Software?
At a basic level, artificial intelligence is about pattern recognition, for example determining that a particular pattern of lines is indicative of a picture of a dog, whereas another looks like a cat. AI can recognize patterns in data, words, phrases, and images. It can even pick out habits of the human user from the data it collects.
If your smartphone has ever popped up when you start your car and connect to Bluetooth with a random message about how long it will take you to get to a place you go often, and what the traffic conditions are like, this is pattern recognition in action. It has collected data to suggest that you are probably going to that particular place at this time (between work and home is a common one).
AI develops its pattern recognition skills through machine learning, that is, repetition over time. The suggestion that it will take you 15 minutes to reach a certain address at 8 am comes from the system noticing that, typically, that’s where you are going at 8 am.
Being able to formulate a hypothesis is one of the major developing skills for AI. The car example is relatively basic because it doesn’t need much data for analysis; however, more data-hungry applications are now feasible because processing power such as GPUs has become much cheaper and more accessible to regular people.
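A minimal sketch of this kind of habit-based prediction. This is a toy frequency count over invented trip data, not how any real assistant is implemented:

```python
from collections import Counter

# Hypothetical log of past trips as (hour_of_day, destination) pairs.
trip_log = [
    (8, "office"), (8, "office"), (8, "gym"),
    (8, "office"), (18, "home"), (18, "home"),
]

def predict_destination(hour, log):
    """Return the most frequent destination seen at this hour, or None."""
    matches = [dest for h, dest in log if h == hour]
    if not matches:
        return None
    return Counter(matches).most_common(1)[0][0]

print(predict_destination(8, trip_log))  # most common 8 am destination
```

The more trips the log accumulates, the more confident the prediction becomes; that accumulation over time is the "repetition" at the heart of machine learning.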
Deloitte proposes that AI is poised to be employed by enterprises in a big way:
“Deloitte Global predicts that by end-2016 more than 80 of the world’s 100 largest enterprise software companies by revenues will have integrated cognitive technologies into their products, a 25 percent increase on the prior year. By 2020, we expect the number will rise to about 95 of the top 100.”
Any field that is rich in data is ripe for an AI application, and that is where we are seeing ongoing development. Here are a few examples we’ve found:
#1. Journalism
This could be bad news for anyone who’d prefer the critical eye of a human writer, but journalism has begun to use AI for the sorts of stories that can be generated from data.
In 2016, the Associated Press announced it would use automated writing to cover the minor leagues, though it had actually been using the technology since 2014, when it began generating automated earnings-report stories.
The key is that with natural language generation technology from companies such as Automated Insights, publishers can automatically tap into vast reservoirs of data, producing informational pieces that aren’t seen to require a human writer (although they still employ automation editors).
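The core idea can be sketched as template-based natural language generation: a structured data record is turned into a readable sentence. Real systems such as Automated Insights are far more sophisticated; the field names and figures below are invented for illustration:

```python
# Toy data-to-text generation: one earnings record becomes one sentence.
def earnings_story(company, quarter, revenue_m, change_pct):
    direction = "rose" if change_pct >= 0 else "fell"
    return (f"{company} reported revenue of ${revenue_m} million for {quarter}, "
            f"{direction} {abs(change_pct)} percent from a year earlier.")

print(earnings_story("Acme Corp", "Q2", 120, 7.5))
```

Because the input is structured data, the same template can churn out thousands of accurate stories, which is exactly why earnings reports and minor-league box scores were early targets.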
#2. Productivity Apps
Google’s “G Suite” and Microsoft’s Office 365 are good examples of productivity apps that employ AI to streamline work and create efficiencies.
For example, users can access auto-generated suggestions for emails that only require a short reply.
“A year ago, Smart Reply launched, offering auto-generated replies for emails that only need a quick response. Now, more than 10 percent of all replies on mobile are sent using Smart Reply. The reception has been so strong that we’re continuing to apply machine intelligence across our suite to solve customer problems.” (Prabhakar Raghavan, VP Google Cloud).
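To make the idea concrete, here is a toy illustration of suggesting canned short replies by keyword matching. Gmail’s actual Smart Reply uses neural networks trained on real email, so treat every rule and phrase below as an invented stand-in:

```python
# Hypothetical rules: if any keyword appears, offer these short replies.
REPLY_RULES = [
    ({"meeting", "schedule", "tomorrow"}, ["Sounds good.", "Can we push it back?"]),
    ({"thanks", "thank"}, ["You're welcome!", "Happy to help."]),
]

def suggest_replies(email_text):
    """Return a list of suggested short replies, or [] if nothing matches."""
    words = set(email_text.lower().split())
    for keywords, replies in REPLY_RULES:
        if words & keywords:
            return replies
    return []
```

The hard part in production is not the lookup but learning which replies people actually send, which is where the machine intelligence Raghavan describes comes in.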
Microsoft has been adding AI technology such as Office Graph and Delve. Office Graph is the underlying system which gathers data about key interactions between users and “objects” (such as documents or other content). Delve helps users to cut through the clutter of information and see the things most important and relevant to them first.
#3. Chatbots
Chatbots have been on the rise in mobile technology, with wide adoption and a wave of startup apps within the last year. The popularity of messaging apps is fueling that growth, but we’re also seeing chatbots in areas such as customer support.
Chatbots succeed best in environments where their application can be constrained. Why? Because they rely on machine learning and natural language processing (NLP, the field that lets computers process human language). Currently, if you ask something outside the realm of a bot’s training, it is probably programmed either to refer you to a human operator or to reply, “I’m sorry, I didn’t understand that.”
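A minimal sketch of that constrained-domain pattern: match the user’s message against known intents, otherwise fall back gracefully. Production bots use trained NLP models rather than substring matching, and the intents below are invented:

```python
# Hypothetical intents for a travel bot, mapped to canned responses.
INTENTS = {
    "book a flight": "Sure, where would you like to fly?",
    "check my booking": "Please give me your confirmation number.",
}

def respond(user_message):
    """Reply to a known intent, or fall back (or hand off to a human)."""
    text = user_message.lower()
    for phrase, reply in INTENTS.items():
        if phrase in text:
            return reply
    return "I'm sorry, I didn't understand that."
```

Constraining the domain is what makes the fallback acceptable: within a narrow task like travel booking, most messages hit a known intent.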
Concierge apps such as Mezi for travel are a good example. This app uses machine learning and NLP to figure out the preferences of users and offer recommendations for travel, fashion or gift ideas they may like.
Salesforce’s “Einstein” is a great example of enterprise technology:
“Powered by advanced machine learning, deep learning, predictive analytics, natural language processing, and smart data discovery, Einstein’s models will be automatically customized for every single customer, and it will learn, self-tune, and get smarter with every interaction and the additional piece of data. Most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.”
#4. Speech Recognition
Speech recognition is another important facet of AI: simply, the ability to automatically and accurately recognize human speech. Conversational User Interfaces (CUIs), powered by speech recognition, are rapidly shaping up to be valid substitutes for the Graphical User Interface (GUI).
Of course, speech recognition isn’t new; it’s just making some solid improvements. If you’ve ever tried yelling your way through a dictation app of the past, you’ll appreciate what this means!
These improvements are helped by companies such as Google opening up access to their speech recognition APIs, so developers are free to use them as the basis for their own apps.
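On the developer side, speech APIs typically return several candidate transcripts with confidence scores, and the app decides what to do with them. The response shape below is illustrative only, not the exact format of any vendor’s API:

```python
# Pick the highest-confidence transcript, or None if nothing is reliable.
def best_transcript(response, threshold=0.8):
    alternatives = response.get("alternatives", [])
    if not alternatives:
        return None
    best = max(alternatives, key=lambda a: a["confidence"])
    return best["transcript"] if best["confidence"] >= threshold else None

reply = best_transcript({
    "alternatives": [
        {"transcript": "call mom", "confidence": 0.93},
        {"transcript": "call tom", "confidence": 0.61},
    ]
})  # → "call mom"
```

Handling the low-confidence case well (asking the user to repeat, rather than guessing) is a big part of what separates today’s usable dictation apps from the ones you had to yell at.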
A current enterprise example is Sensory Inc.’s inner-core partnership, which provides banks, financial institutions, and other enterprises with enhanced data security. Sensory offers face and voice biometric recognition, essential for apps with sensitive security requirements.
What about your app?
Do you see some examples here of AI technology which would be useful for your own app? AI has come a long way and is a key development area for future technology.
Apps can use AI based on machine learning and exposure to data, speech recognition, and natural language processing (such as what you’ll find in a chat app). The practical applications are already wide-ranging and developing rapidly.
Will it be an “Ex Machina” robot next? Who knows…
Koombea builds apps which can provide you with AI technology. Talk to us today about how we can help you.