Google Home tops smart speaker IQ test, but rivals are gaining

Back in July, Loup Ventures published the results of an “annual digital assistant IQ test” pitting Google’s Assistant against Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana, ranking those four AI systems in descending order of performance when responding to 800 real-world questions. Now the firm is back with an “annual smart speaker IQ test” focused on how the same assistants perform through speakers such as Google Home, Apple’s HomePod, and Amazon’s Echo, and the results are somewhat interesting.

As was previously the case, Google Assistant on Home Mini led the group, once again with a 100 percent rate of understanding queries, and this time an 87.9 percent correct answer rate — up from 85.5 percent in the July smartphone test and 81 percent in Loup’s last smart speaker test back in February. Assistant’s responses beat all competitors in four of five categories, namely local, commerce, navigation, and information requests, coming in second place only in “command” requests, with 73 percent accuracy.

Apple’s Siri on HomePod posted strong gains over its middling February performance, but fell short of its July results when tested on an iPhone. Through the speaker, Siri answered 74.6 percent of queries correctly, up from 52.3 percent in February, while understanding queries 99.6 percent of the time. That’s somewhat below the 78.5 percent correct answer rate Siri achieved on an iPhone in July. Interestingly, Loup notes that Siri passes requests for “basically anything other than music” back to an iOS device for processing. Even so, HomePod led all rivals in the “command” category, with an 85 percent correct response rate.

Once again, Amazon’s Alexa and Microsoft’s Cortana came in third and fourth place, but both showed improvement in correctly answering questions through smart speakers. Alexa via Echo had a 72.5 percent correct response rate, compared with 64 percent via speakers in February and 61.4 percent with phones in July, a gain that placed it nearly neck and neck with the far more expensive HomePod. Cortana through a Harman Kardon Invoke speaker got 63.4 percent of responses correct, up from 57 percent in the February speaker test and 52.4 percent in the July smartphone test.

According to Loup, proper nouns remain the largest pain point for the speakers, which otherwise comprehend virtually everything a user says to them. Beyond that, however, all of the assistants are making “meaningful” improvements, with Siri gaining new areas of expertise and Alexa likely benefiting from crowdsourced knowledge such as Alexa Answers. Several of the AIs have also added hooks to enable greater interactions with third-party partners.

While Loup expects to update its smartphone digital assistant findings again in July, it doesn’t expect the AIs to evolve to the point of correctly answering everything they’re asked — instead, they’ll simply be able to do more, such as controlling a wider array of devices or offering superior functionality within existing capabilities like email, calendars, and messaging. At this stage, each improvement adds real value for users, so it will be exciting to see whether the predictions prove correct next year.
