Instagram’s newest anti-harassment feature explained
In the wake of recent racial attacks on social media platforms, Instagram is rolling out additional features to curb the spread of offensive and unwanted messages and comments.
With the latest feature, called “Limits,” the Facebook-owned platform will filter out abusive direct message requests and warn users who try to post potentially offensive comments.
When an account receives a sudden spike in attention, Instagram will limit the number of comments and direct message requests from people seeking to target the user.
In announcing the new feature, the social network pointed to a recent example of racial abuse on the platform: the aftermath of England’s defeat in the Euro 2020 men’s football final. That incident, in which furious fans directed racist slurs and emojis at England players, showed how ill-equipped Instagram was to fend off such attacks and cyberbullying.
“We developed this feature because we heard that creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know. In many cases this is an outpouring of support, like if they go viral after winning an Olympic medal. But sometimes it can also mean an influx of unwanted comments or messages,” Instagram’s head Adam Mosseri said.
“Our research shows that negativity towards public figures comes from people who don’t actually follow them, or who have only recently followed them, and who simply pile on in the moment,” Mosseri added.
The feature will sit in the privacy settings and can be switched on any time a user feels targeted by hateful content. Once activated, it automatically hides comments and direct message requests from accounts that do not follow the user or that only recently started following them.
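Instagram has not published how Limits works under the hood, but the rule Mosseri describes can be sketched roughly as follows. This is an illustrative Python sketch only, not Instagram’s code; the function name, parameters, and the seven-day “recent follower” window are assumptions made for the example.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical cutoff: how new a follow must be to count as "recent".
RECENT_FOLLOW_WINDOW = timedelta(days=7)

def should_hide(sender_follows_user: bool,
                followed_at: Optional[datetime],
                limits_enabled: bool,
                now: Optional[datetime] = None) -> bool:
    """Return True if a comment or DM request should be hidden under Limits."""
    if not limits_enabled:
        return False
    if not sender_follows_user:
        # Non-followers are filtered while Limits is switched on.
        return True
    now = now or datetime.utcnow()
    # Accounts that only recently started following are filtered too.
    return followed_at is not None and (now - followed_at) < RECENT_FOLLOW_WINDOW
```

The sketch captures only the two-part rule in the announcement, hiding content from non-followers and recent followers; the actual window length and any additional signals Instagram uses are not public.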
Instagram’s Limits feature has been in testing since July and becomes widely available today.