Neural networks and the broader category of cognitive computing have certainly blossomed in the past couple of years. After more than three decades of academic investment, neural networks are an overnight success. I think three forces have triggered this explosion of new technology (and hype).
First, the Internet has aggregated previously unimaginable reservoirs of raw data, capturing a vivid, comprehensive, but incoherent picture of the real world and human activity. This becomes the foundation from which we can train models of reality, unprejudiced by oversimplified synopses.
Second, progress in computing and storage has made it practicable to implement large-scale model training processes, and to deploy useful inference-based applications using those trained models. Amid hand-wringing over the so-called “death of Moore’s Law,” we find that a combination of increasingly efficient engines and massively parallel training and inference installations is giving us sustained scaling of compute capability for neural networks. Today, GPUs and FPGAs are the leading hardware platforms for training and deployment, but we can safely bet that new platform architectures, built from direct experience with neural network algorithms, are just around the corner.
Third, we have seen rapid expansion of understanding of the essential mechanisms and applications of neural networks for cognition. Universities, technology companies and end-users have quickly developed enthusiasm for the proposed benefits, even if the depth of knowledge is weak. This excitement translates into funding, exploratory developments and pioneering product developments.
These three triggers – massive data availability, massively parallel computing hardware, and wide enthusiasm – set the scene for the real work of bringing neural networks into the mainstream. Already we see a range of practical deployments, in voice processing, automated translation, facial recognition and automated driving, but the real acceleration is still ahead of us. We are likely to see truly smart deployments in finance, energy, retail, health care, transportation, public safety and agriculture in the next five years.
The rise of cognitive computing will not be smooth. It is perfectly safe to predict two types of hurdles. On one hand, the technology will sometimes fail to deliver on promises, and some once-standard techniques will be discredited and abandoned in favor of new network structures, training methods, deployment platforms and application frameworks. We may even think sometimes that the cognitive computing revolution has failed. On the other hand, there will be days when the technology appears so powerful as to be a threat to our established patterns of work and life. It will sometimes appear to achieve a level of intelligence, independence and mastery that frightens people. We will ask, sometimes justifiably, if we want to put decision making on key issues of morality, liberty, privacy and empathy into the hands of artificial intelligences.
Nevertheless, I remain an optimist, on the speed of progress and depth of impact, as well as on our ability and willingness to shape this technology to fully serve human ends.
One thought on “Cognitive Computing: Why Now and What Next?”
Great writeup Chris!
There will be much shorter and more rapid (exciting but sometimes scary) cycles of birth-death-rebirth in the ways and methods we fundamentally use to ‘interpret’ the world around us through computers. It may be faster than what we have been comfortable with until now.
It appears that any company which deals with complex and unstructured data MUST have an AI/NN component in its thinking – it doesn’t matter whether the company belongs to the computer field or not, e.g. VLSI EDA tools, PLM, and even diagnosis/early warning based on the large volume of medical test data we generate over time for every individual.