Using machine learning, Apple engineers have made Siri's synthesized voice sound more natural. Siri now knows what intonation and emphasis to use when speaking to the user.
Ahead of the launch of iOS 11, Apple engineers published several articles about how they improved the speech of the company's voice assistant. Beyond recording voice samples that can be stitched together into answers, the developers had to tackle prosody — the patterns of stress and intonation in spoken language. The problem is compounded by the fact that combining sound units into speech is computationally heavy for a smartphone. This is where machine learning comes to the rescue.
Given enough data, machine learning can convert plain text into natural-sounding speech. The system itself determines the intonation and emphasis with which a given phrase should be spoken.
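The core idea behind this kind of synthesis — picking recorded sound units that match the prosody a model predicts, while keeping the joins between units smooth — can be sketched as a small dynamic-programming search. This is a toy illustration, not Apple's implementation: the data structures, the pitch-only costs, and the function name are all assumptions for the sake of the example.

```python
# Toy unit-selection sketch (illustrative only, not Apple's actual code).
# For each phoneme we pick one recorded audio unit so that the sum of the
# "target cost" (how far the unit's pitch is from the pitch a prosody model
# predicted) and the "join cost" (the pitch jump between adjacent units)
# is minimal over the whole phrase.

def select_units(candidates, target_pitch, join_weight=1.0):
    """candidates[i] is a list of (unit_id, pitch) options for phoneme i;
    target_pitch[i] is the pitch predicted for phoneme i.
    Returns the cheapest sequence of unit ids (Viterbi-style DP)."""
    n = len(candidates)
    # cost[j] = best total cost of a unit sequence ending in candidates[i][j]
    cost = [abs(p - target_pitch[0]) for _, p in candidates[0]]
    back = [[None] * len(candidates[0])]
    for i in range(1, n):
        new_cost, new_back = [], []
        for _, p in candidates[i]:
            target = abs(p - target_pitch[i])
            # cheapest predecessor, accounting for the join cost
            best_j = min(
                range(len(candidates[i - 1])),
                key=lambda j: cost[j] + join_weight * abs(candidates[i - 1][j][1] - p),
            )
            join = join_weight * abs(candidates[i - 1][best_j][1] - p)
            new_cost.append(cost[best_j] + join + target)
            new_back.append(best_j)
        cost, back = new_cost, back + [new_back]
    # trace the cheapest path backwards
    j = min(range(len(cost)), key=cost.__getitem__)
    path = []
    for i in range(n - 1, -1, -1):
        path.append(candidates[i][j][0])
        j = back[i][j]
    return path[::-1]
```

The computational weight the article mentions comes from searching a large unit database with richer costs than this toy pitch distance; the dynamic-programming structure, however, is the standard way such a search is kept tractable.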
For iOS 11, Apple engineers worked with a new female voice. They recorded over 20 hours of speech and generated nearly two million audio segments, which were used to train the machine-learning model. Users who took part in testing noted a significant improvement in the assistant's voice in iOS 11 compared with iOS 9. At the end of the article there are examples of Siri's answers on iOS 9, iOS 10, and iOS 11, so you can judge the engineers' work for yourself.