Behaviour Ratings for Twitter Users?

Twitter Marketing Spam Account

I touched on this over at Planet Botch in my New Profile Page assessment, but I thought it was worth exploring the concept in more depth here. Essentially, I concluded in the previous article that Twitter has been failing to grow adequately not because the interface needed improvement, but because spamming and site abuse are out of control, and there’s a real problem with visibility for new users. Newcomers to Twitter are literally being obliterated from view by old lags who use automation to hammer the site with low- or zero-value messages.

No one has to follow these spammers, obviously, but since they can so easily dominate the chronologically based search and hashtag timelines, they’re drowning out those who post discerningly and who actually are worth following. That in turn leads to high-value users facing increasing levels of obscurity and, in a nutshell, losing interest because they can’t get their voices heard without resorting to spam tactics themselves.

Twitter has rules against using automation, but it doesn’t take them in any way seriously, and countless accounts which are clearly bots posting exactly the same single message every few minutes, ad infinitum, are allowed to persist even after being reported. The image above shows an excruciating spam account, which has auto-posted a staggering 276,000 Tweets at an average of about 300 per day. You’d think that alone would trigger some sort of flag within Twitter, but it doesn’t, and neither does blocking and reporting the account, or the fact that the Tweets are stolen and re-posted unattributed.

So if Twitter won’t act of its own accord to stamp out the spamming that’s killing measured use and leading the site ever closer to meltdown, could it at least provide the tools to better and more easily assess spam accounts, and some filters which would enable conscientious users to bypass or eliminate spammers from view?


The provision of behaviour ratings, and a filter system which allows users to essentially block accounts with undesirable ratings BEFORE they even become visible, would be an incredibly effective way to improve user behaviour on Twitter. Almost all of the bots and automated apps currently in circulation would be rendered virtually useless overnight. Here’s what a user rating readout might look like…

Twitter User Ratings

  • Interactivity: 63%
  • Total Tweets per day: 2
  • @replies per day: 0.5
  • Unfollow rate: 12%
  • Repetitiveness: 0%
  • DM propensity: 0%
  • Account quality: 70%

And here’s a breakdown of what the ratings mean…

Interactivity

The number of straight @messages the user replies to, as a percentage. If the user receives 10 @messages from other users, and replies to 5 of them, he or she will have an Interactivity score of 50%. @mentions and DMs would not be counted. This would very quickly reveal entirely automated accounts, as they never reply, and would thus have an Interactivity score of 0%.
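As a minimal sketch of how this could be computed (the function name and the handling of users with no incoming @messages are my assumptions, not anything Twitter has specified):

```python
def interactivity_score(at_messages_received, replies_sent):
    """Percentage of straight @messages the user has replied to.

    @mentions and DMs are assumed to have been filtered out already;
    both arguments count qualifying @messages only.
    """
    if at_messages_received == 0:
        # No incoming @messages to reply to. Treating this as 0% is
        # an assumption; it also matches pure-broadcast bots.
        return 0.0
    return 100.0 * replies_sent / at_messages_received
```

So a user who replies to 5 of 10 @messages scores 50%, while a bot that never replies to anything scores 0%.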

Total Tweets Per Day

An average of the number of Tweets the user posts per day, including replies. In order to ensure that ‘spam and purge’ techniques don’t slip through the net, Tweets per day should be counted purely based on the number posted, and not affected by deletions. In other words, if I post 100 Tweets per day, and then delete 99 of them, my Tweets per day score will still read 100, and not 1. Low-value accounts will tend to have a high number.
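The ‘count posts, ignore deletions’ rule could be sketched with a simple counter like the one below (the class is purely illustrative, not any real Twitter structure):

```python
class TweetCounter:
    """Tallies Tweets posted; deletions never reduce the tally, so
    'spam and purge' accounts still show their true posting rate."""

    def __init__(self):
        self.posted = 0

    def on_post(self):
        self.posted += 1

    def on_delete(self):
        pass  # deliberately ignored: the rating reflects what was posted


counter = TweetCounter()
for _ in range(100):
    counter.on_post()
for _ in range(99):
    counter.on_delete()
# counter.posted still reads 100, not 1
```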

@Replies Per Day

The average number of @replies the user posts per day. This, in conjunction with the above score, would help illustrate how much of a user’s dialogue is talk and how much is ‘publishing’.

Unfollow Rate

The percentage of accounts the user follows, and then unfollows. Bots and low-value accounts will tend to have high percentages, but this would also be a great insight into personal, commercial and even political accounts that look like they have loads of fans, but have in reality built their followings by tricking people – following, waiting for a followback, and then strategically and surreptitiously unfollowing at a later stage. An account following just 50 people, but which has 10,000 followers, looks impressively popular. Until you discover that the account has an unfollow rate of 99.9%. Then it just looks like it’s run by a devious, scheming arsehole – which of course it is.
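As a sketch, with illustrative names (the counts would have to come from Twitter’s own follow history):

```python
def unfollow_rate(accounts_followed, accounts_unfollowed):
    """Percentage of accounts the user followed and later unfollowed."""
    if accounts_followed == 0:
        return 0.0
    return 100.0 * accounts_unfollowed / accounts_followed
```

The follow-churn account described above, which ends up following just 50 people after following 50,000 and quietly unfollowing 49,950 of them, scores 99.9%.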

Repetitiveness

The percentage of Tweets posted which are duplicates of the user’s previous Tweets. @usernames could be excluded from the matching process, so anyone repetitively sending exactly the same message to different account usernames (i.e. spamming) would still get a high percentage. This would be a really effective rating. Most good users would get a very low percentage indeed. Many bots and low-value users would be prone to very high percentages, but even a score of 20% would be cause for concern in my opinion.
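The @username-stripping idea can be sketched like this (the normalisation details, such as lower-casing and whitespace collapsing, are my own assumptions):

```python
import re


def normalise(tweet):
    """Strip @usernames and collapse whitespace, so the same spam
    message sent to different accounts still matches as a duplicate."""
    stripped = re.sub(r"@\w+", "", tweet)
    return " ".join(stripped.split()).lower()


def repetitiveness(tweets):
    """Percentage of Tweets that duplicate an earlier Tweet."""
    if not tweets:
        return 0.0
    seen, duplicates = set(), 0
    for tweet in tweets:
        key = normalise(tweet)
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)
    return 100.0 * duplicates / len(tweets)
```

With this approach, “@alice Buy cheap followers now!” and “@bob buy cheap followers now!” count as duplicates of each other, while a genuinely original Tweet does not.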

DM Propensity

The percentage of followers the user approaches with Direct Message contact. This would exclude replies to DMs, and only be based on proactive DM contact. This is the most controversial of the ratings because DMs are of course private. But realistically, if a user has 25,000 followers and has DM’d every single one of them, they’re obviously a spammer using automation, and potential followers have a right to be warned about that. None of the DM content would be divulged. Just the user’s propensity to use the function.

Account Quality

This would be an overall score, useful for those newer to Twitter, who don’t really want to learn about or spend time dissecting the other ratings. It would combine elements of the other ratings. How communicative the user is, whether they tend to follow out of real interest or just for their own gain, whether they’re likely to completely take over a timeline, how repetitive they are, and whether they’re prone to DM spamming. Twitter could very easily base this score on criteria which would rank bots and low-value accounts as low, and rank real, discerning users as high.
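One way such a composite could work is a weighted penalty model. To be clear, the weights and the tweets-per-day cap below are entirely my own invented assumptions, just to show the shape of the idea:

```python
def account_quality(interactivity, tweets_per_day, unfollow_rate,
                    repetitiveness, dm_propensity):
    """Combine the individual ratings into one 0-100 score.

    Weights and the 50-Tweets-per-day flooding cap are illustrative
    assumptions, not anything Twitter has specified.
    """
    # Penalise timeline flooding: treat anything over 50/day as maximal.
    flooding = min(tweets_per_day, 50) / 50 * 100
    penalties = (0.3 * (100 - interactivity) +  # uncommunicative accounts
                 0.2 * flooding +               # timeline takeover
                 0.2 * unfollow_rate +          # follow-churn trickery
                 0.2 * repetitiveness +         # duplicate spam
                 0.1 * dm_propensity)           # DM spamming
    return round(100 - penalties, 1)
```

Under these example weights, the well-behaved user from the readout above (63% interactivity, 2 Tweets per day, 12% unfollow rate, no repetition, no DM spam) lands in the mid-80s, while a 300-Tweets-per-day bot that never replies scores close to zero.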


An automatic block feature would shut out spammers before they even become visible. The idea would be to allow users, if they wish, to set an Account Quality threshold, and automatically block or ignore any account whose score falls below it. So if I set an Account Quality threshold of 10%, I won’t, in my keyword or hashtag searches, see any account with a lower score, and such accounts can’t follow me. A feature like this would change behaviour very quickly, because it would make accounts with very low scores completely invisible to discerning users. In order to compete for visibility and gain followers, spammers would have to behave better.
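The filtering itself is trivial once the scores exist; a sketch, with an illustrative account layout:

```python
def visible_accounts(accounts, threshold):
    """Filter search or hashtag results: accounts whose Account Quality
    score falls below the user's chosen threshold are never shown."""
    return [a for a in accounts if a["quality"] >= threshold]


timeline = [
    {"name": "bot4523", "quality": 3},
    {"name": "realperson", "quality": 70},
]
# With a threshold of 10%, the bot simply never appears in results.
```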


There would always be people finding new ways around a system like this, and a few unfortunate exceptions getting poor ratings when they’re not deliberately doing anything untoward. But the system would improve behaviour, and improve Twitter. And best of all, it would do so without Twitter having to constantly keep an eye out for spammers and suspend them. If spam accounts effectively had “SPAMMER!” branded across them through a series of behaviour scores, then users could do their own policing.

Let me block, by default, repetitive accounts who never reply but deluge the site with hundreds of promos per day and spam all their followers with auto DMs, and I’ll do it. So, I suspect, will most other Twitter users, and that hits spam somewhere Twitter can’t currently hit it. Suspend an account and it comes back ten minutes later with a new name and email address. But make it untenable for that account to operate, and you either improve its behaviour, or shut it down.

Author: Bob Leggitt