Fake news has been making headlines since the US presidential election, but it certainly did not start there. It was previously defined by Wikipedia as a type of “parody presented in a format typical of mainstream journalism, and called a satire because of its content”, but with social media sites like Facebook becoming the mouthpiece of anyone with a view, fake news has taken on a more sinister side.
Fake news items can appear as hoaxes, propaganda, or disinformation, spread to drive web traffic and amplified by social media. They are distinguished from news satire because they are designed to mislead, either to profit from readers believing the stories to be true or to influence unsuspecting readers.
The latter was claimed to have had a major influence on US voters, a claim inflamed by reports that the fake news was propagated by foreign powers (mainly Russia) to discredit Hillary Clinton in the run-up to the election.
Sites like Facebook defend themselves by claiming they are not responsible for public commentary on their platforms and should not be expected to censor inaccurate content, but CEO Mark Zuckerberg was forced to admit that “Facebook has been working on the issue of misinformation for a long time”, calling the problem complex and both technically and philosophically challenging.
The telecommunications industry has faced the same challenge for years, trying to determine who on its networks is attempting to defraud it and who is simply exhibiting unusual call patterns. Fraudsters have traditionally concentrated on high-value products like international calling and premium numbers, using numerous techniques, most of which can be recognised early by the latest fraud management systems, such as those WeDo Technologies provides.
Tomorrow’s fraudsters will not bother with the likes of international calls, which have been superseded by free calling apps like Messenger, WhatsApp and Skype. Premium number services have largely been replaced by apps and by access to websites via mobile web browsers. But even the best detection systems, no matter how sophisticated, work by constantly looking for abnormalities: most have been seen before and are in the system’s database, while others exhibit new patterns that make them stand out.
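The two detection modes described above can be illustrated with a minimal sketch. Everything here is invented for the example (the pattern names, the daily-call metric, the threshold); real fraud management systems operate on far richer data, but the principle is the same: match against known fraud signatures, and flag behaviour that stands out from an account's own baseline.

```python
from statistics import mean, stdev

# Hypothetical fraud signatures seen before and stored in the system's database.
KNOWN_PATTERNS = {
    ("premium", "burst"),        # many premium-rate calls in a short window
    ("international", "night"),  # international calls at unusual hours
}

def is_known_pattern(call_type: str, behaviour: str) -> bool:
    """Match an observation against previously seen fraud signatures."""
    return (call_type, behaviour) in KNOWN_PATTERNS

def is_anomalous(history: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag a daily call total that stands out from the account's baseline."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# Usage: an account that usually makes ~10 calls a day suddenly makes 80.
baseline = [9, 11, 10, 12, 8, 10, 11]
print(is_anomalous(baseline, 80))  # stands out from the baseline
print(is_anomalous(baseline, 12))  # within normal variation
```

The z-score test is the simplest possible "new pattern" detector; the point is only that both paths, lookup of known signatures and statistical deviation from normal behaviour, still need a human to judge the borderline cases, as the next paragraph argues.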
In many cases, just like the situation with fake news, human intervention and assessment are required to make a value judgement. And just like fake news, we will soon have systems that combine previous experience (machine learning) with cognitive and reasoning abilities (artificial intelligence).
Machines will eventually start thinking like fraudsters and maybe even pre-empt their activities by simulating the next fraud before it actually happens. We have teams that are already able to determine sentiment, what people are feeling and their moods, by analysing their social media activities. If this sentiment tracking is linked to the individual’s activities after the social media interaction, we should be able to build up a library of patterns that will help assess potential fraudsters before they even commit a fraud. Sound scary?
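The idea of linking sentiment to subsequent activity can be sketched as follows. This is a toy illustration, not the method described in the text: the word lexicon, the activity labels, and the bucketing are all invented, and real sentiment analysis would use trained models rather than word counting. The shape of the approach, however, is just scoring a post and pairing that score with what the user did afterwards.

```python
# Tiny invented lexicon for the sketch; real systems use trained models.
POSITIVE = {"great", "happy", "love", "good"}
NEGATIVE = {"angry", "hate", "cheated", "unfair"}

def sentiment_score(post: str) -> int:
    """Crude lexicon score: positive words add 1, negative words subtract 1."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Library of observed (sentiment bucket, subsequent activity) patterns.
pattern_library: dict[tuple[str, str], int] = {}

def record(post: str, activity: str) -> None:
    """Pair a post's sentiment with the activity observed after it."""
    bucket = "negative" if sentiment_score(post) < 0 else "non-negative"
    key = (bucket, activity)
    pattern_library[key] = pattern_library.get(key, 0) + 1

# Usage with hypothetical posts and activity labels.
record("I hate this operator they cheated me", "disputed_charges")
record("Great service very happy", "normal_usage")
print(pattern_library)
```

Over time, counts like these are what would let a system estimate how often a given mood precedes a given behaviour, which is exactly the kind of pre-emptive assessment, and the privacy unease, that the paragraph above raises.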
Yes, but this may be the only way to protect our businesses from attack in the future, because fraudsters won’t be thinking up ways to beat the system; their machines will be. It is highly likely that machine-instigated fraud, already prevalent in botnets and the hacking of personal details like credit card info, will take on a much higher profile and a broader target base.
Machines could be transferring funds to themselves while other machines police them. Getting scarier, isn’t it? Then again, if machines are smart enough to foresee frauds, they are probably smart enough to create them, too. And that is why there will probably have to be human intervention somewhere, just as in the challenge of tracking fake news.
Interesting times ahead.