Instagram is to warn users when their captions on a photo or video could be considered offensive.
The Facebook-owned company says it has trained an AI system to detect offensive captions. The idea is to give users "a chance to pause and reconsider their words."
Instagram announced the feature in a blog on Monday, saying it would be rolled out immediately to some countries.
The tool is designed to help combat online bullying, which has become a major problem for platforms such as Instagram, YouTube, and Facebook.
Instagram was ranked as the worst online platform in a cyber-bullying study in July 2017.
If a user with access to the tool types an offensive caption on Instagram, they will receive a prompt informing them it is similar to others reported for bullying.
Users will then be given the option to edit their caption before it is published. "In addition to limiting the reach of bullying, this warning helps educate people on what we don't allow on Instagram and when an account may be at risk of breaking our rules," Instagram wrote in the post.
Earlier this year, Instagram launched a similar feature that notified people when their comments on other people's Instagram posts could be considered offensive.
"Results have been promising and we've found that these types of nudges can encourage people to reconsider their words when given a chance," Instagram wrote.
Chris Stokel-Walker, internet culture writer and author of the book YouTubers, told BBC News the feature was part of a broader move by Instagram to be more aware of the wellbeing of its users.
"From cracking down on promoting images of self-harm, to hiding 'likes' so people outwardly are less likely to equate their self-worth with how many people press 'like' on their photos, the app has been making moves to try and roll back some of the more damaging changes it's had on society," he said.