MANILA – Inauthentic accounts, spam, and malicious automation disrupt everyone’s experience on Twitter. To show its commitment to creating a healthier platform for everyone, Twitter introduced new measures to fight abuse and trolls, new policies on hateful conduct and violent extremism, and is bringing in new technology and staff to fight spam and abuse.
In May 2018, our systems identified and challenged more than 9.9 million potentially spammy or automated accounts per week. That’s up from 6.4 million in December 2017, and 3.2 million in September. https://t.co/eYPeSqsN89
— Twitter Philippines (@TwitterPH) June 27, 2018
To date, the company has made significant investments in this space, with positive results:
● In May 2018, Twitter’s systems identified and challenged more than 9.9 million potentially spammy or automated accounts per week. That’s up from 6.4 million in December 2017, and 3.2 million in September.
● Twitter is removing 214% more accounts year-on-year for violating its spam policies.
● At the same time, the average number of spam reports received continued to drop, from approximately 25,000 per day in March to approximately 17,000 per day in May, and recent changes produced a 10% drop in spam reports from search. This means people are encountering less spam in their timeline, in search, and across the Twitter product.
● In Q1 2018, Twitter suspended more than 142,000 applications for violating its rules, collectively responsible for more than 130 million low-quality, spammy tweets. Twitter has maintained this pace of proactive action, removing an average of more than 49,000 malicious applications per month in April and May. The company is increasingly using automated and proactive detection methods to find misuse of its platform before it affects anyone’s experience. More than half of the applications suspended in Q1 were suspended within one week of registration, many within hours. This shows Twitter is working to catch and prevent these activities before anyone sees them.
The company can now tackle attempts to manipulate conversations at scale, across languages and time zones, without relying on reactive reports. In addition to developing machine learning tools, Twitter has introduced four new processes to fight spam and malicious automation.
1. Reducing the visibility of suspicious accounts in Tweet and account metrics
A common form of spammy and automated behavior is following accounts in coordinated, bulk ways. Accounts engaged in these activities are often caught by automated detection tools (and removed from the platform’s active user metrics) shortly after the behavior begins. Twitter has started updating account metrics in near real time: for example, the number of followers an account has, or the number of likes or Retweets a Tweet receives, will be correctly updated when Twitter takes action on accounts.
When an account behaves suspiciously, it is put into a read-only state where it can’t Tweet or engage with others. Its follower figures and engagement counts are removed until it passes a challenge, such as confirming a phone number. Read-only accounts display a warning, and new accounts are prevented from following them, to avoid inadvertent exposure to potentially malicious content. Once the challenge is passed, the account is restored within hours. This makes these protections more transparent to anyone who may try to interact with an account in a read-only state. People may also notice regular improvements to the accuracy of Tweet and account metrics, which ensures that malicious actors can’t permanently boost an account’s credibility by artificially inflating metrics like follower counts.
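The read-only flow described above can be pictured as a simple state machine. The sketch below is purely illustrative; the class and state names are assumptions for explanation, not Twitter’s internal implementation.

```python
# Illustrative sketch of the read-only challenge flow: suspicious behavior
# hides metrics and blocks engagement until a challenge is passed.
# All names here are hypothetical, for illustration only.
class Account:
    def __init__(self):
        self.state = "active"
        self.metrics_visible = True

    def flag_suspicious(self):
        # Suspicious behavior puts the account in a read-only state:
        # it cannot Tweet or engage, and its counts are hidden.
        self.state = "read_only"
        self.metrics_visible = False

    def pass_challenge(self):
        # Passing a challenge (e.g., confirming a phone number)
        # restores the account and its metrics.
        if self.state == "read_only":
            self.state = "active"
            self.metrics_visible = True
```

The key design point is that metrics are hidden as soon as the account is flagged, so inflated follower counts never reach other users while the challenge is pending.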
2. Improving the signup process
To make it harder to register spam accounts, Twitter will require new accounts to confirm either an email address or phone number when they sign up, defending against people who try to take advantage of Twitter’s openness. Twitter is working closely with its Trust & Safety Council and other expert NGOs to ensure this change does not hurt people in high-risk environments where anonymity is important. The change may roll out later this year.
3. Auditing existing accounts for signs of automated signup
Twitter is conducting an audit to secure a number of legacy systems used to create accounts, ensuring that every account created on Twitter has passed the simple, automatic security checks designed to prevent automated signups. The new protections resulting from this audit have already helped prevent more than 50,000 spammy signups per day.
Now, Twitter is taking action to challenge a large number of suspected spam accounts caught as part of an investigation into misuse of an old part of the signup flow. These accounts are primarily follow spammers who automatically or in bulk followed verified or other high-profile accounts suggested to new users during signup. As a result, some people may see their follower counts drop. This does not mean the accounts losing followers did anything wrong; they were the targets of spam that Twitter is cleaning up. Twitter is taking further steps to clean up spam and automated activity and to close the loopholes that were exploited.
4. Expansion of Twitter’s malicious behavior detection systems
Twitter is now automating some processes for detecting suspicious account activity, such as exceptionally high-volume tweeting with the same hashtag, or mentioning the same @handle without a reply from the account being mentioned. These tests vary in intensity: at a simple level, the account owner may be asked to complete a reCAPTCHA or a password reset request, while more complex cases are automatically passed to Twitter’s team for review.
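One of the signals above, exceptionally high-volume tweeting with the same hashtag, can be sketched as a simple counting heuristic. The function name and threshold below are illustrative assumptions, not Twitter’s actual detection logic, which the company has not published.

```python
from collections import Counter

# Hypothetical sketch of one signal the article describes: flagging an
# account that uses the same hashtag at unusually high volume within an
# observation window. The threshold is an arbitrary illustrative value.
def flag_hashtag_spam(tweets, threshold=10):
    """Return the set of hashtags used more than `threshold` times.

    `tweets` is a list of tweet texts from a single account collected
    within some window (e.g., one hour).
    """
    counts = Counter()
    for text in tweets:
        for word in text.split():
            if word.startswith("#"):
                counts[word.lower()] += 1
    return {tag for tag, n in counts.items() if n > threshold}
```

In a real system such a rule would be one weak signal among many, feeding a machine learning model rather than triggering enforcement on its own.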
What Users Can Do
There are important steps users can take to protect their security on Twitter:
● Enable two-factor authentication. Instead of only entering a password to log in, you also enter a code sent to your mobile phone. This verification helps make sure that only you can access your account.
● Don’t re-use passwords across multiple platforms or websites. Have a unique password for each account.
● Use a FIDO Universal 2nd Factor (U2F) security key for login verification when signing into Twitter.
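To make the two-factor step concrete: authenticator apps that generate login codes typically implement the TOTP scheme standardized in RFC 6238. The sketch below shows how such a six-digit, time-based code is derived; the secret in the usage example is the RFC’s published test value, and nothing here is specific to Twitter’s own implementation.

```python
import base64
import hmac
import struct
import time

# Minimal sketch of RFC 6238 TOTP, the scheme behind authenticator-app
# login codes. Real secrets are provisioned by the service during 2FA
# setup; this function only illustrates the derivation.
def totp(secret_b32, timestamp=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of `step`-second intervals elapsed.
    counter = int((timestamp if timestamp is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on the current time window as well as the shared secret, a stolen password alone is not enough to log in, which is the point of the second factor.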
Additionally, if you believe you may have been incorrectly actioned by one of Twitter’s automated spam detection systems, you can use the appeals process to request review of your case.
Twitter is continuing to invest across the board in its approach to these issues, including machine learning technology and partnerships with third parties. The company is looking forward to soon announcing the results of its Request for Proposals for public health metrics research. These issues are felt around the world, from elections to emergency events and high-profile public conversations. As Twitter noted in its recent announcements, the health of the public conversation on Twitter is a critical metric by which it will measure its success in these areas.