Network speech recognition will be the next big wave…
Get used to the idea of talking to your consumer electronics (CE) devices as a primary means of interaction; this is the direction of human interface design, particularly for mobile communication devices. As mobile speech recognition technologies continue to improve, the vendors of speech technology platforms are making concerted efforts to bring speech recognition capabilities to the long tail of mobile application developers.

According to the latest market study by ABI Research, companies such as Nuance, AT&T, and iSpeech stand out for exposing their APIs and developer programs as the foremost strategy for reaching the long tail of mobile device applications.

"Reaching a varied group of developers working on different operating system (OS) and hardware platforms makes cloud-based solutions the optimum approach to enabling the masses," says Michael Morgan, senior analyst at ABI Research. It is this network-based approach that will drive the rapid increase in cloud-based service revenues.
Historically, mobile speech recognition was delivered to consumers through relationships between device OEMs and platform vendors. The other route to the consumer came through virtual assistant applications that were often developed by the platform vendors.
Smaller application development efforts lacked the resources and expertise to bring the benefit of speech recognition to their products. This dynamic has kept speech recognition trapped in functionally specific applications.
Leveraging the cloud as a delivery mechanism, platform vendors can enable nearly any application developer who wishes to make their application's user interface more efficient.
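To make the model concrete, here is a minimal sketch of how a mobile application might hand recognition off to a cloud service: the app captures audio on the device, posts it to the vendor's recognition endpoint, and receives a transcript it can act on. The endpoint URL, header names, and API key below are hypothetical placeholders, not any specific vendor's actual API.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;

/**
 * Minimal sketch of network (cloud-based) speech recognition:
 * capture audio on the device, upload it to a recognition service,
 * and read back the transcript. The endpoint and response format
 * are hypothetical, not a real vendor API.
 */
public class CloudSpeechClient {

    private static final String RECOGNIZE_URL = "https://speech.example.com/v1/recognize"; // hypothetical endpoint
    private static final String API_KEY = "YOUR_API_KEY"; // issued by the vendor's developer program

    public static String recognize(byte[] audio) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(RECOGNIZE_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "audio/wav");          // raw recorded audio
        conn.setRequestProperty("Authorization", "Bearer " + API_KEY); // developer credentials

        // Upload the audio; the heavy lifting (acoustic and language
        // modelling) happens on the vendor's servers, not on the device.
        try (OutputStream out = conn.getOutputStream()) {
            out.write(audio);
        }

        // Read the transcript returned by the service.
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            return in.hasNext() ? in.next() : "";
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] audio = Files.readAllBytes(Paths.get("command.wav")); // audio captured by the app
        System.out.println("Transcript: " + recognize(audio));
    }
}
```

In this model the developer's only integration burden is an HTTP round trip; the recognition models stay on the vendor's infrastructure, which is why a single cloud service can serve developers across operating systems and hardware platforms.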
ABI Research expects that consumers will first see the benefits of these efforts in mobile banking and retail applications. The firm's new "Speech Recognition in Mobile Devices" report provides further details on this market and discusses its leading players.