New patent shows that Apple has developed sound wave sensors to verify the user’s voice

Two newly published patent applications show that Apple is exploring different ways for its devices to detect and interact with people, using vibration alone to recognize specific sounds and determine where they come from. The most significant would let Siri identify individual users and their spoken commands without the device needing a conventional microphone.

Apple’s filing, titled “Self-Mixing Interferometric Sensors Used to Perceive the Vibration of the External Surface Structure or Housing Parts of Devices”, involves the use of self-mixing interferometry (SMI), in which the device emits light and detects the signal produced when that light is reflected or backscattered into the emitter. Apple notes in the patent that, as voice recognition improves and becomes more widely used, the microphone is increasingly important as an input device for interacting with a device. In a traditional microphone, sound waves are converted into vibrations of the microphone membrane, which requires a port that lets air pass in and out of the device beneath the microphone. That port can leave the device vulnerable to water damage, blockage, and moisture, and can detract from its appearance.
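The patent does not publish any signal-processing code, but the basic idea can be sketched: instead of reading air-pressure changes from a membrane, the device reads intensity fluctuations in the self-mixed light. The Swift sketch below, with a hypothetical SMIVibrationEstimator type and made-up sample values, only illustrates stripping the steady component from such an intensity stream to leave a vibration-like waveform.

```swift
import Foundation

// Illustrative sketch only: the patent does not describe its signal processing.
// Assumes a stream of photodiode intensity samples from a hypothetical SMI sensor;
// the goal is to show recovering a vibration waveform from the self-mixing signal
// rather than from air pressure on a membrane.
struct SMIVibrationEstimator {
    /// Removes the steady (DC) component of the self-mixing intensity signal,
    /// leaving the fluctuation caused by surface vibration.
    func vibrationWaveform(from intensitySamples: [Double]) -> [Double] {
        guard !intensitySamples.isEmpty else { return [] }
        let mean = intensitySamples.reduce(0, +) / Double(intensitySamples.count)
        return intensitySamples.map { $0 - mean }
    }
}

// Hypothetical usage: in a real device the samples would come from the sensor's ADC.
let estimator = SMIVibrationEstimator()
let waveform = estimator.vibrationWaveform(from: [1.02, 0.98, 1.05, 0.95, 1.01])
print(waveform)
```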

Because of their higher sensitivity, Apple instead proposes arrays of SMI sensors, which can sense vibrations caused by sound and/or by taps on a surface. Unlike traditional diaphragm-based microphones, SMI sensors can operate in a closed (or sealed) enclosure. The patent also details how an SMI sensor could be used on the back of an Apple Watch. The device can be configured to sense one or more types of parameters, such as, but not limited to, vibration; light; touch; force; heat; motion; relative motion; the user’s biometric data (such as biometric parameters); air quality; proximity; location; connectivity; and so on.
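As a rough illustration of that configuration idea, the following Swift sketch mirrors the patent’s own list of parameter types; the SensedParameter enum and SensorConfiguration struct are hypothetical names, not anything from Apple’s filing or APIs.

```swift
// Minimal sketch: the cases mirror the parameter list quoted from the patent,
// while the type names and configuration shape are invented for illustration.
enum SensedParameter {
    case vibration, light, touch, force, heat, motion, relativeMotion
    case biometricData, airQuality, proximity, location, connectivity
}

// A device could be configured to sense one or more of these parameters at once.
struct SensorConfiguration {
    var enabledParameters: Set<SensedParameter>
}

// Hypothetical configuration for a sensor on the back of a watch.
let watchBackSensor = SensorConfiguration(
    enabledParameters: [.vibration, .biometricData, .proximity])
```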

Apple describes how devices such as the Apple Watch could use this technique to determine their location and what is nearby. For example, after a specific human voice is identified in the vibration waveform, an electronic display could be transitioned from a low-power or powered-off state to a working power state. You could walk into the living room and ask your watch to turn on the TV: even without a traditional microphone, it would recognize the spoken command, identify you specifically, confirm that you are authorized to use the TV, know which TV is nearby, and turn that TV on.
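A hedged Swift sketch of that flow might look like the following; DisplayPowerState, VoiceTrigger, and identifySpeaker are invented placeholders rather than Apple APIs, and the speaker identification is a stand-in for whatever recognition the device would actually run on the vibration waveform.

```swift
// Sketch of the described behavior: recognize a known voice in the vibration
// waveform, then bring the display out of a low-power state and act on the command.
enum DisplayPowerState { case off, lowPower, working }

struct VoiceTrigger {
    var authorizedSpeakers: Set<String>

    // Placeholder for the device's actual speaker identification.
    func identifySpeaker(in waveform: [Double]) -> String? {
        return waveform.isEmpty ? nil : "known-user"   // stand-in result
    }

    func handle(waveform: [Double], displayState: inout DisplayPowerState) {
        guard let speaker = identifySpeaker(in: waveform),
              authorizedSpeakers.contains(speaker) else { return }
        // Transition from a low-power or no-power state to a working state, as the
        // patent describes; the command ("turn on the TV") would then be dispatched.
        displayState = .working
    }
}

// Hypothetical usage.
var state: DisplayPowerState = .lowPower
VoiceTrigger(authorizedSpeakers: ["known-user"])
    .handle(waveform: [0.1, -0.2], displayState: &state)
```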

Apple also suggests combining different methods of detecting user requests, even calculating the probability that a vibration comes from a person. Such a device, whether a wearable or a stationary one like an Apple TV, would determine that the source of a vibration waveform is likely to be a person based on the information the waveform contains, including an estimated source direction or distance. That information would also cover changes in position, such as footsteps indicating that a person is moving toward a predetermined viewing or listening position.
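The filing does not give a formula, but the kind of scoring it describes could be sketched as below; the feature struct, function name, and weights are assumptions made up for illustration, not values from the patent.

```swift
// Illustrative only: estimate the probability that a vibration source is a person
// from cues the patent mentions (direction, distance, changes in position such as
// footsteps). The weights here are invented for the sketch.
struct VibrationFeatures {
    var estimatedDistanceMeters: Double
    var directionKnown: Bool
    var positionChanged: Bool   // e.g. footsteps moving toward a viewing position
}

func probabilitySourceIsPerson(_ f: VibrationFeatures) -> Double {
    var score = 0.3                                    // assumed baseline prior
    if f.directionKnown { score += 0.2 }
    if f.positionChanged { score += 0.3 }              // movement strongly suggests a person
    if f.estimatedDistanceMeters < 5 { score += 0.2 }  // close-range sources weighted up
    return min(score, 1.0)
}

let p = probabilitySourceIsPerson(
    VibrationFeatures(estimatedDistanceMeters: 3, directionKnown: true, positionChanged: true))
print(p)  // 1.0 for this example
```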
