At this year’s DLD conference, Facebook’s and Google’s algorithms, which have been abused by politically biased spammers, were the subject of several debates. Facebook’s Elliot Schrage explained what the company plans to do about the problem, which the founder of the social network had acknowledged a few days earlier, while analyst Scott Galloway accused Facebook of simply pursuing maximum profit and asked politicians to break up the company.
These are some of the measures announced at the conference that could prevent misuse of Facebook’s and Google’s algorithms:
– Fact-checking tools that look at several signals to make sure a news story is real. Measuring whether users click a link and quickly return to the social network was admitted to be one of these signals; Google has already said it uses it.
– Removing stories that are clearly published on Facebook just to attract users who are later monetised through advertising. It seems Facebook plans to ban ads from unreliable sources that are clearly pursuing a profit.
– Reputation tools for ranking news and publishers with the help of the community. Facebook is afraid to do this by itself, as it could be accused of favoring its main advertisers or certain political ideas. Besides tools such as Alianzo Rank, which uses engagement data from social media, legal information from websites and real-world sources such as Wikipedia could be used. Which sources are trusted will be decided by the community, Facebook’s representative assured.
– Identifying real local, family and niche communities, in order to see what they are sharing and, especially, which user accounts do not belong to any of them, a clear sign that those accounts are fake. As is known, Twitter is banning these types of accounts and trusts more those users who do not follow too many others. In fact, Facebook has already admitted it is going to promote local news. A publisher will be considered local if its links are clicked on by readers in a tight geographic area.
– Educating. This is more of a societal task: governments and schools should understand that digital literacy is required for people to tell what is real from what is fake on social media. Social networks should also be required to provide this education and to explain how they decide what appears in people’s news feeds.
– Providing a better service for users who report posts and accounts on social media. I have reported fake news on Facebook many times and have never received an answer from the social network. It should work in a similar (and more transparent) way to Digg, where enough negative votes can bury a story.
– Giving more importance to content that people comment on. Until now, shares and likes were the main user interactions on both Twitter and Facebook. From now on, Facebook says it will penalize “passive content” that does not get real interactions from people (that is, comments).
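To make the bounce-back fact-checking signal mentioned above concrete, here is a minimal sketch. The threshold values, field names and functions are my own hypothetical choices for illustration, not Facebook’s or Google’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    user_id: str
    url: str
    dwell_seconds: float  # time spent on the article before returning to the feed

def bounce_rate(clicks, threshold_seconds=10.0):
    """Fraction of clicks where the reader came back almost immediately."""
    if not clicks:
        return 0.0
    bounces = sum(1 for c in clicks if c.dwell_seconds < threshold_seconds)
    return bounces / len(clicks)

def looks_suspicious(clicks, max_bounce_rate=0.8):
    """Flag a link when most readers bounce straight back to the social network."""
    return bounce_rate(clicks) > max_bounce_rate
```

The idea is simply that a headline that gets many clicks but almost no reading time is more likely to be clickbait or fabricated.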
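The local-publisher criterion (links clicked by readers in a tight geographic area) could be sketched like this. The radius, the 90% share and the centroid approach are assumptions of mine, not a disclosed Facebook rule:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def is_local_publisher(click_locations, radius_km=100.0, share=0.9):
    """True if at least `share` of clicks fall within radius_km of their centroid."""
    if not click_locations:
        return False
    lat_c = sum(p[0] for p in click_locations) / len(click_locations)
    lon_c = sum(p[1] for p in click_locations) / len(click_locations)
    near = sum(1 for p in click_locations
               if haversine_km(p, (lat_c, lon_c)) <= radius_km)
    return near / len(click_locations) >= share
```

A publisher whose readership is spread worldwide would fail this test and would not be boosted as local news.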
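Finally, the shift toward comments over likes and shares can be illustrated with a toy scoring function. The weights and the penalty factor are invented for the example; Facebook has not published its actual formula:

```python
def engagement_score(likes, shares, comments,
                     w_like=1.0, w_share=2.0, w_comment=5.0,
                     passive_penalty=0.5):
    """Weighted engagement score that penalizes content with no comments."""
    score = w_like * likes + w_share * shares + w_comment * comments
    if comments == 0:
        # "Passive content": likes and shares alone get reduced reach
        score *= passive_penalty
    return score
```

Under these assumed weights, a post with many likes but zero comments ranks below a post with the same likes and even a couple of comments, which matches the penalty described above.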