
The Long Road to Artificial Intelligence

By Roger Hockenberry

As one of the hottest topics in the market today, Artificial Intelligence (AI) has become almost unavoidable. No matter where you look, articles, journals, companies, and people are all touting that the delivery and realization of an AI-enabled world is but a moment away. Yet if we step back from the hype, it becomes clear that this data-driven, intelligence-mimicking nirvana is still some distance away.

The Market Today

In looking at the market today, we can see that we’re barely turning the corner from the ingest and ordering of large data. Machine learning and complex event processing are only being applied to solve basic problems using structured and unstructured data. While a step in the right direction, we’re still barely skimming the surface of our ability to handle large data sets, and AI needs these underlying capabilities to realize its potential.

In order to understand why the use of AI is limited, we must first understand how AI uses data and analytics to operate. Typical analytical companies today fall into two basic categories: Data Aggregation Companies and Data Synthesis Companies. Data Aggregation Companies are very good at developing ways to count non-obvious objects (trains, boats, cars, etc.), identify people, and use low-resolution data or imagery to extrapolate movement or rudimentary patterns of life. Data Synthesis Companies excel at tasks requiring complex event processing or synthesis of data: they take in and enrich sets of known inputs, analyze them and draw conclusions, then provide a known set of outputs.
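
To make the distinction concrete, here is a loose, hypothetical sketch in Python: the aggregate function stands in for the counting work of a Data Aggregation Company, while synthesize enriches those counts with context and maps them to a known output, as a Data Synthesis Company would. All function names, fields, and thresholds are invented for illustration.

    def aggregate(observations):
        """Data aggregation: count objects of interest by type."""
        counts = {}
        for obs in observations:
            counts[obs["object_type"]] = counts.get(obs["object_type"], 0) + 1
        return counts

    def synthesize(counts, enrichment):
        """Data synthesis: enrich the aggregate and map it to a known output."""
        boats = counts.get("boat", 0)
        if boats > enrichment.get("typical_boat_count", 1) * 2:
            return "unusual_activity"
        return "normal_activity"

    observations = [{"object_type": "boat"}] * 3 + [{"object_type": "car"}]
    print(synthesize(aggregate(observations), {"typical_boat_count": 1}))
    # prints "unusual_activity": 3 boats against a typical count of 1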

While both of these types of analytical companies provide novel solutions, create a unique value proposition, and require a great deal of creativity, neither meets the criteria of a full AI.

Advice on Starting with AI

When a company wishes to start incorporating elements of AI into its business, the key is to develop use cases that contain large data elements (volume, variety, velocity, veracity). These data elements must be coupled with decision points and allowable actions. Even though the processing of events between decision points and allowable actions can become complex, this ‘action’ is essential in allowing the full expression of AI; simply defining inputs and outputs is not enough. It’s important to remember that AI can only occur when actions, learning, and the assessment of results happen within a trusted, autonomous process and the AI is allowed to act without human intervention.
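
As a rough illustration of those ideas, the hypothetical Python sketch below models a use case as a stream of events carrying the data elements above, a decision function acting as the decision point, and a closed set of allowable actions the system may take on its own. Every name, field, and threshold here is invented; it is a sketch of the pattern, not a prescription.

    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class Event:
        source: str        # variety: where the data came from
        payload: dict      # volume: the raw content
        confidence: float  # veracity: how much we trust it

    ALLOWABLE_ACTIONS = {"flag_for_review", "reroute_shipment", "do_nothing"}

    def decide(event: Event) -> str:
        """A decision point: map an enriched event to one allowable action."""
        if event.confidence < 0.5:
            return "flag_for_review"
        if event.payload.get("delay_hours", 0) > 24:
            return "reroute_shipment"
        return "do_nothing"

    def run(events: Iterable[Event], act: Callable[[str, Event], None]) -> None:
        """The autonomous loop: decide and act without waiting on a human."""
        for event in events:
            action = decide(event)
            assert action in ALLOWABLE_ACTIONS
            act(action, event)

    run([Event("port_sensor", {"delay_hours": 36}, 0.9)],
        lambda action, event: print(action, event.source))
    # prints "reroute_shipment port_sensor"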

The Missing Components of Real AI

Action in AI

For anyone considering adopting AI, the key isn’t necessarily the development of ‘intelligence’, but rather how far we’re willing to let AI take real action and make decisions. The goal of ‘intelligence’ isn’t just to illustrate, discover, and report obvious and non-obvious relationships, automate complex tasks, and simplify workflows. Real ‘intelligence’ requires making a decision and taking an action; that decision is then followed by a series of additional actions toward an indeterminate conclusion, often a looping function that recurs until a desired output is achieved.
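
A toy Python sketch of that decide-act-assess loop might look like the following, where the system keeps choosing an action, applying it, and assessing the result until the desired output is reached. The state, target, and step size are arbitrary stand-ins invented for the example.

    def decide(state: float, target: float) -> str:
        """Decision: pick one of the allowable actions from the current state."""
        return "increase" if state < target else "decrease"

    def act(state: float, action: str) -> float:
        """Action: apply the chosen action and return the new state."""
        return state + 0.5 if action == "increase" else state - 0.5

    def run_until_done(state: float, target: float, max_steps: int = 100) -> float:
        """Loop: decide, act, assess, and recur until the output is acceptable."""
        for _ in range(max_steps):
            if abs(state - target) < 0.25:   # assessment: desired output reached
                break
            state = act(state, decide(state, target))
        return state

    print(run_until_done(state=0.0, target=3.0))
    # prints 3.0 after six decide-act-assess iterations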

Trust in AI

It’s easy not to trust AI. Humans tend to be skeptical of allowing machines to make decisions, while happily allowing basic compute to control house and car functions, detect fraud in banking, or help find search results. No matter how accurate and precise the input data may be, society’s level of trust drops quickly relative to perceived errors in the output. It’s clear that, as humans, we have collectively drawn a line around what we consider actionable intelligence. A cognitive dissonance seemingly remains: we are unwilling to allow AI to make the same mistakes we make on a regular basis.

The Path to Fault Tolerance

For AI to be successful, we will collectively have to accept that any compute we as humans build is fallible. At the beginning of the computing era, the industry as a whole decided that a strategy of COOP/DR (continuity of operations and disaster recovery), redundancy, and resiliency was best practice for businesses as they moved from paper-driven infrastructure to compute platforms.

AI will never be fully successful if we apply a test of infallibility to its intelligence. We must allow AI to take action, even if the action is incorrect and leads to a fault. AI must also be allowed and encouraged to discover and correct that fault. Trust in AI will be established not by the veracity of its output, but by how quickly it can recover from faulty data inputs and contradictory information, just as humans do.
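
One minimal, hypothetical illustration of that recovery behavior in Python: rather than demanding infallible inputs, the routine below detects a reading that contradicts the consensus, discards it, and re-estimates from what remains. The data and tolerance are invented for the example.

    from statistics import median

    def reconcile(readings, tolerance=3.0):
        """Drop readings that contradict the consensus, then re-estimate."""
        consensus = median(readings)
        trusted = [r for r in readings if abs(r - consensus) <= tolerance]
        # Recovery: if everything was discarded, fall back to the raw consensus
        # rather than failing outright.
        return median(trusted) if trusted else consensus

    # One contradictory reading (250.0) is detected and recovered from.
    print(reconcile([20.1, 19.8, 250.0, 20.3]))
    # prints 20.1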