Social media giant Twitter (TWTR) is expanding its “Safety Mode” feature, which lets users temporarily block accounts that send abusive or offensive tweets.
Going forward, the system will flag accounts that make hateful remarks or bombard people with uninvited comments, and block them for up to seven days.
Twitter users in the United Kingdom, U.S., Canada, Australia, New Zealand and Ireland will have access to the expanded safety features, the company said.
Additionally, users can now turn to a companion feature called “Proactive Safety Mode,” which identifies potentially harmful replies and prompts people to consider enabling Safety Mode.
The Safety Mode feature can be turned on in settings, and the system will assess both a tweet’s content and the relationship between the tweet’s author and the replier.
Twitter said it will collect more insights on how the feature is working and may make additional changes in the coming months.
Twitter is facing growing scrutiny from regulators. In January, a French court ruled that Twitter must show exactly how it combats online attacks, while the United Kingdom is preparing legislation that would force all social media sites to act swiftly against hate speech or face financial penalties.