Your Keyboard Can Compromise Your Data Privacy and Security

data privacy and security, ASCAs, acoustic side-channel attacks

Researchers have developed a powerful deep-learning algorithm capable of compromising your data privacy and security by listening through a microphone as keyboard keys are pressed.

  • Acoustic side-channel attacks (ASCAs) pose a severe threat to data security, as an attacker using the algorithm can steal passwords and other sensitive information with ease.
  • Traditional mitigations like silent keyboards or membrane-based keyboards proved ineffective against this attack.

Researchers from UK universities have developed a deep-learning algorithm capable of compromising your data privacy and security by analyzing the sound recorded via a microphone when keyboard keys are pressed.

The algorithm achieved an impressive 95% accuracy when processing smartphone recordings and a still-dangerous 93% accuracy when processing data recorded via Zoom.

The potential for acoustic side-channel attacks (ASCAs) using this algorithm poses a serious threat to data privacy and security, as passwords and other confidential information can be stolen with ease. Telegraphs and carrier pigeons are starting to look pretty good right about now.

The rise of machine learning technologies and the widespread availability of high-quality microphones have made sound-based attacks more viable than other methods. ASCAs are becoming increasingly dangerous, given their potential to expose sensitive information without requiring any visual access to the keyboard.

To carry out the attack, a malicious actor needs to record the sound of the victim’s keystrokes. And considering everything you own probably has a built-in microphone, that won’t be too hard. Alternatively, an attacker can record keystrokes during a Zoom call simply by joining as a participant.

The researchers collected training data by pressing 36 keys on a MacBook Pro 25 times each and recording the sound produced by each keystroke. Waveforms and spectrograms were created from the recordings to visualize the discernible differences between keys. These spectrograms were then used to train a deep-learning model called CoAtNet, which combines convolution and self-attention.
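To make that pipeline concrete, here is a minimal sketch, not the researchers’ actual code, of how individual keystroke clips could be turned into spectrograms and fed to a classifier. It assumes PyTorch and torchaudio, one short WAV file per keystroke, and uses a small stand-in CNN rather than CoAtNet; the file layout and function names are hypothetical.

```python
# Minimal sketch: keystroke audio clips -> mel spectrograms -> key classifier.
# Assumes one short WAV clip per keystroke, e.g. data/<key>/<clip>.wav.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 44_100
NUM_KEYS = 36  # the 36 keys pressed 25 times each in the study

# Transforms that turn a raw waveform into a log-mel spectrogram "image".
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=64
)
to_db = torchaudio.transforms.AmplitudeToDB()

def keystroke_to_spectrogram(wav_path: str) -> torch.Tensor:
    """Load one keystroke clip and return a (1, n_mels, time) spectrogram."""
    waveform, sr = torchaudio.load(wav_path)
    if sr != SAMPLE_RATE:
        waveform = torchaudio.functional.resample(waveform, sr, SAMPLE_RATE)
    mono = waveform.mean(dim=0, keepdim=True)  # collapse to one channel
    return to_db(to_mel(mono))

# Stand-in classifier: a small CNN used here purely for illustration,
# not the CoAtNet architecture from the paper.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, NUM_KEYS),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(spectrograms: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of (N, 1, n_mels, time) spectrograms."""
    optimizer.zero_grad()
    loss = loss_fn(model(spectrograms), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the clips in a batch would be padded or cropped to a common length before being stacked; the point of the sketch is simply that each key’s sound becomes a small image that a standard image classifier can learn to label.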

According to the paper, the experiments showed that the CoAtNet classifier achieved an “accuracy of 95% on phone-recorded laptop keystrokes, representing improved results for classifiers not utilizing language models and the second-best accuracy seen across all surveyed literature. When implemented on the Zoom-recorded data, the method resulted in 93% accuracy, an improved result for classifiers using such applications as attack vectors.”

As a result, the researchers recommend that users alter their typing style, use complex randomized passwords, and employ software tools that add white noise or mimic keystroke sounds to safeguard their data privacy and security. However, they caution that even the quiet keyboards used in Apple laptops are susceptible to this attack, which makes traditional mitigations such as adding sound dampeners or switching to membrane-based keyboards ineffective. Where there is a will, there is a way, I guess.
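As one illustration of the white-noise suggestion, the following is a minimal sketch, assuming the numpy and sounddevice packages are available, that plays low-level white noise from the machine’s speakers to help mask keystroke sounds; the amplitude and duration values are arbitrary.

```python
# Minimal sketch of the "add white noise" mitigation: play quiet masking
# noise while typing. Values are illustrative, not tuned recommendations.
import numpy as np
import sounddevice as sd  # assumed available; any audio-output library would do

SAMPLE_RATE = 44_100   # samples per second
DURATION_S = 10        # seconds of masking noise to play
NOISE_LEVEL = 0.05     # low amplitude so the noise stays unobtrusive

# Gaussian white noise scaled to a low level, as a float32 mono signal.
noise = (NOISE_LEVEL * np.random.randn(DURATION_S * SAMPLE_RATE)).astype(np.float32)

# Play the noise and block until it finishes.
sd.play(noise, samplerate=SAMPLE_RATE, blocking=True)
```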

Employing biometric authentication methods, such as fingerprint or face recognition, is another important defense against ASCAs. But how long before those are no longer safe either?
