Despite the controversy it sparks, artificial intelligence is no longer an exotic rarity that only thrives in research labs and ivory towers; instead, AI applications are becoming mainstream in the public and private sectors.
Over the last year, several developments on the artificial intelligence (AI) front have occurred that reflect our wildest fantasies and worst fears for this technology. Here are a few examples:
A battle continues to rage between MIT linguist Noam Chomsky and Google Director of Research Peter Norvig over the increased use of statistics and probability in AI. Chomsky argues that the “new AI” merely mimics behavior instead of unraveling the rules and processes of cognition. Norvig, on the other hand, takes a more practical, probabilistic approach, believing in AI’s suitability for tasks such as natural language processing.
Last month, CNBC reported that inventor Elon Musk and physicist Stephen Hawking expressed concerns about the future of AI, suggesting that there are dangers in the fledgling AI market. They made it easy to surmise they fear a Robopocalypse caused by AI run amok!
Recently, a Russian chatbot passed the “Turing test,” an intelligence test devised by the British mathematician Alan Turing, in which a machine is deemed “intelligent” when it is indistinguishable from a human in a natural-language conversation with a human judge. The chatbot convinced 33 percent of the judges that it was a 13-year-old boy from Ukraine.
Of course, these events portend neither apocalypse nor nirvana. Instead, they are a solid demonstration of the continued evolution of AI.
This week, a newscientist.com article described the success of “niche AI” applications like the one that schedules the massive Hong Kong subway system and another that sorts passport applications. I believe modern approaches to AI will resemble these niche apps. In other words, we will use probability to match a formal model to the problem at hand so we can execute rules or other cognitive processes.
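To make that idea concrete, here is a minimal sketch of probabilistic model matching: score a few candidate formal models against observed features and then execute the rules of whichever model fits best. The document-sorting scenario, the feature names, and all the probabilities below are illustrative assumptions, not taken from any of the systems the article mentions.

```python
# Hypothetical sketch: pick the formal model that best fits the
# observed features (by a naive likelihood product), then execute
# the winning model's rule. All names and numbers are illustrative.

OBSERVED = {"has_photo", "has_signature"}

# Each candidate "formal model" lists P(feature | model) and the
# rule to execute if that model is the best match.
MODELS = {
    "passport_application": {
        "likelihoods": {"has_photo": 0.9, "has_signature": 0.8},
        "rule": lambda: "route to passport queue",
    },
    "visa_application": {
        "likelihoods": {"has_photo": 0.6, "has_signature": 0.3},
        "rule": lambda: "route to visa queue",
    },
}

def score(model):
    """Naive likelihood of the observed features under this model."""
    p = 1.0
    for feature, likelihood in model["likelihoods"].items():
        # Use the likelihood if the feature was observed,
        # otherwise its complement.
        p *= likelihood if feature in OBSERVED else 1.0 - likelihood
    return p

best = max(MODELS.values(), key=score)
print(best["rule"]())  # the matched model's rule fires
```

Real systems would use far richer models (Bayesian networks, classifiers trained on labeled data), but the shape is the same: probability selects the model, and the model's rules do the work.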
Applying AI techniques to enterprise applications is also becoming more commonplace, with business rule management via tools like JBoss BRMS and ontology development via tools like Protégé and other semantic Web platforms.
If you are interested in experimenting with these concepts on a small scale, I recently released a free knowledge base editor called EZKB (for Easy Knowledge Base) that demonstrates AI techniques in a layered, building-block approach. Each tab in the application represents a level in the semantic stack: from facts to entities, to relationships, to rules, to triggers, to output actions and views. The software is merely a side hobby of mine and not a polished product, but it will give you a good basis for understanding these techniques. Check the YouTube videos for more explanation and use cases.
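The layered stack described above can be sketched in a few lines: facts at the bottom, a rule that derives new facts from them, and a trigger that fires an output action when a derived fact appears. This is not EZKB's actual data model; the fact triples, the rule, and the action below are illustrative assumptions.

```python
# Hypothetical sketch of a layered knowledge-base stack:
# facts -> rules (derive new facts) -> triggers (fire actions).
# All names and data are illustrative, not taken from EZKB.

# Facts as (subject, predicate, object) triples.
facts = {("Alice", "role", "manager"), ("Alice", "requests", "access")}

def manager_access_rule(fact_set):
    """Rule: if a manager requests something, derive a 'granted' fact."""
    derived = set()
    for subj, pred, obj in fact_set:
        if pred == "requests" and (subj, "role", "manager") in fact_set:
            derived.add((subj, "granted", obj))
    return derived

def on_granted(fact):
    """Trigger: run an output action when a 'granted' fact appears."""
    subj, _, obj = fact
    print(f"ACTION: grant {obj} to {subj}")

new_facts = manager_access_rule(facts)
for fact in new_facts - facts:
    if fact[1] == "granted":
        on_granted(fact)
```

Each layer can be swapped out independently, which is exactly what makes the building-block approach useful: richer entity models, more rules, or different output actions slot in without disturbing the layers below.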
So, what does the evolution of AI mean for government IT managers? It means that basic AI techniques are within the reach of every application developer and can be deployed to improve almost every new IT system under development. These technologies are no longer “exotic rarities” that only exist in research labs and ivory towers; instead, they are now moving to mainstream applications that can scan data, identify patterns, execute rules and take automated or semi-automated action.
Michael C. Daconta (email@example.com or @mdaconta) is the Vice President of Advanced Technology at InCadence Strategic Solutions and the former Metadata Program Manager for the Homeland Security Department. His new book is entitled, The Great Cloud Migration: Your Roadmap to Cloud Computing, Big Data and Linked Data.