Nearly half of all web traffic is from “bots.” That statistic alone should grab your attention, but the one that should worry you more is that “bad” bots are growing in number and reach. They’re also getting smarter.
Cutting right to the chase: the only way to contain bad bots is with good bots – or with other bad bots.
Good Bots, Bad Bots
So what are “bots”? A bot … “is an automated program that is programmed for certain actions and executes them either regularly or reactively. The bot does this without needing human activation. It analyzes the environment and ‘decides’ which actions to take depending on the situation.”
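The definition above – sense the environment, then decide and act – can be sketched in a few lines. This is a hypothetical, minimal rule-based bot; the environment keys and action names are illustrative assumptions, not any real bot’s API.

```python
def bot_step(environment):
    """One decision cycle of a minimal rule-based bot: sense the
    environment, then choose an action depending on the situation.
    The keys and actions here are hypothetical examples."""
    if environment.get("site_down"):
        return "alert_operator"
    if environment.get("new_content"):
        return "index_page"
    return "idle"

def run_bot(environments):
    """React to a stream of observed environment states -- no human
    activation required once the loop is running."""
    return [bot_step(env) for env in environments]
```

A real bot wraps a loop like this around timers or event streams, which is what lets it act “regularly or reactively” without human activation.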
Bots come in both “good” and “bad” varieties. Some good bots include (Luksza, 2018):
“Crawlers/Spiders (e.g. Googlebot, Yandex bot, Bingbot) – Used by search engines and online services to discover and index website content, making it easier for internet users to find it.
“Traders (Bitcoin trading bots) – Used by Ecommerce businesses to act like agents on behalf of humans, interacting with external systems to accomplish a specific transaction, moving data from one platform to another. Based on the given pricing criteria, they search for the best deals and then automatically buy or sell.
“Monitoring Bots (e.g. Pingdom, Keynote) – Monitor health system of the website, evaluate its accessibility, report on page load times & downtime duration, keeping it healthy and responsive.
“Feedfetcher/Informational Bots (e.g. Pinterest bot, Twitter bot) – Collect information from different websites to keep the users or subscribers up-to-date on the news, events or blog articles. They cover different forms of content fetching, from updating weather conditions to censoring language in comments and chat rooms.
“Chat Bots (e.g. Messenger, Slack, Xiaoice) – A service that enables interacting with a user via a chat interface regarding a number of things, ranging from functional to fun.”
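To make the first category concrete, here is a toy crawler in the spirit of Googlebot or Bingbot: it discovers pages breadth-first by following links and builds an index of what it finds. To keep the sketch self-contained and runnable, a dict stands in for the web; a real crawler would fetch URLs over HTTP and respect robots.txt.

```python
from collections import deque

def crawl(site, start, max_pages=100):
    """Toy breadth-first crawler. `site` stands in for the web:
    a dict mapping each URL to the list of URLs it links to."""
    seen, queue = {start}, deque([start])
    index = []
    while queue and len(index) < max_pages:
        url = queue.popleft()
        index.append(url)                # "index" the page content
        for link in site.get(url, []):   # follow its outlinks
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

# A tiny in-memory "website" to crawl:
site = {"/": ["/a", "/b"], "/a": ["/b", "/c"], "/b": []}
```

Search engines run this same discover-and-index loop at planetary scale; the difference between this sketch and a scraper (below) is not the mechanism but the intent.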
Some bad bots include (also from Luksza, 2018):
“Impersonators – Designed to mimic human behavior to bypass the security and by following offsite commands, steal or bring down the website. This category also includes propaganda bots, used by countries to manipulate public opinion.
“Scrapers – Scrape and steal original content and relevant information. Often repost it on other websites. Scrapers can reverse-engineer pricing, product catalogues and business models or steal customers lists and email addresses for spam purposes.
“Spammers – Post phishing links and low-quality promotional content to lure visitors away from the website and ultimately drive traffic to the spammer’s website. Often use malware or black hat SEO techniques that may lead to blacklisting the infected site. A specific type of spammer is auto-refresh bots, which generate fake traffic.
“Click/Download Bots – intentionally interact or click on PPC and performance-based ads. Associated costs of such ads increase based on exposure to an ad – meaning the more people are reached, the more expensive they are.”
Bots are everywhere, working 24/7 behind the scenes, though they’re not completely stealthy actors. In fact, they’re discoverable. At the same time, they’re getting smarter. The marriage between bots, artificial intelligence (AI) and machine learning (ML) is old news, and it’s yielding smart children of all shapes and sizes. The good news? Smart bots are mostly good. The bad news is that bad bots are getting really smart. Social bots, for example, have learned to lie with astonishing efficiency, influencing both the Brexit vote and the 2016 US presidential election:
“In June 2016, the majority of British citizens decided to leave the EU. Prior to this, there were heated discussions on social networks – and it was noted that many social bots were also involved. The Independent reported that social bots played an important strategic role, especially when it came to voting ‘leave.’
“In November 2016, Donald Trump was elected the 45th President of the United States. There was a lot of information on how much influence social bots had on his narrow election victory. According to Oxford University, automated pro-Trump bots overwhelmed pro-Clinton messages. Apparently every third pro-Trump tweet was from a bot. There was also a fake news report that the Pope had recommended Trump for election and this was shared almost a million times – including by social bots. But the use of pro-Clinton social bots was also registered.”
What happens when social bots begin to learn and adapt? When they understand every language? When they cannot be fooled? When they become emotionally intelligent? Or when bot development platforms enable the rapid development of bots that can understand, learn – and plot? (SAP already promises the development of intelligent bots in three minutes.) The real question is not whether technologies like semantic parsing, automated planning and natural language understanding/generation will make chatbots smarter or nastier – they will – but how these and other foundational technologies will enable the worst kind of bots – Freddy Kruegerbots – and what those bots will do.
Bad bots are multiplying. And they’re winning: the activity rate for bad bots is higher than for good ones. Politicians understand winning and losing well, but it’s not at all clear they understand the war they’re in. Newsrooms too. What about Facebook and Twitter? Do they know they’re in a war? Or are they waging it? Kalev Leetaru answers the question: “despite myriad programs and policies designed on paper to fight abuse, in reality the platforms have done very little to curb the spread of hate speech, harassment and violent threats.” On October 23, 2019, Mark Zuckerberg, the CEO of Facebook, while testifying to the US Congress, made it clear that “lies” would not be identified or refuted by Facebook, and that the platform would therefore not prevent politicians from lying on it.
Since it’s unlikely that the social media platforms will seek and destroy bad bots, it’s up to others to protect the social world. According to Pedro: “everything from political elections to debates on social issues has almost certainly been affected by countless AI-armed ‘bots.’ It’s AI used for ill, spreading a blanket of disinformation that simply by weight of numbers has an effect on public consciousness. But now, the power of AI, in general, can and should be harnessed by the legions of human fact-checkers that otherwise might feel they’re fighting an unwinnable battle.”
Victories and losses will be determined by the number and capabilities of bots designed to seek and destroy opponents – which, according to Shelly Palmer, is easier than it may sound: “today, using open source software and some inexpensive cloud services, you can create AI-troll/bot combinations and release armies of them at extremely low cost.”
In the good bot space, efficiency and competitive response are essential. In the bad bot space, efficiency and self-defense are essential. Bots need to know their competitors and adversaries. Continuous tracking and adaptation are necessary. AI and machine learning will enable both good and bad bots, but everyone must fight.
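Continuous tracking can start with something as simple as rate analysis. The sketch below is a toy heuristic, not a production defense: it flags any client that exceeds a request threshold inside a sliding time window – a first-pass signal that a “visitor” may be a bot. The log format and thresholds are assumptions for illustration.

```python
from collections import defaultdict

def flag_suspected_bots(request_log, window_seconds=10, max_requests=20):
    """Flag clients whose request rate looks automated.
    `request_log` is a list of (timestamp, client_id) tuples."""
    by_client = defaultdict(list)
    for ts, client in request_log:
        by_client[client].append(ts)
    flagged = set()
    for client, times in by_client.items():
        times.sort()
        left = 0
        for right, t in enumerate(times):
            # Shrink the window until it spans at most window_seconds.
            while t - times[left] > window_seconds:
                left += 1
            if right - left + 1 > max_requests:
                flagged.add(client)
                break
    return flagged
```

Sophisticated bad bots throttle themselves to evade exactly this kind of check, which is why real defenses layer in behavioral and ML-based signals – the continuous tracking and adaptation described above.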
The bot battlefield will play out competitively in crawlers, traders, monitors, chatters, scrapers, spammers and impersonators. Good versus evil? To some extent, yes, and the only ones that can win this war are the bots themselves. Alternatively, there are some who believe that regulatory reform is right around the corner, but based on recent Congressional testimony and ongoing legislative paralysis, it’s hard to see when any meaningful bot regulations will appear. But if there’s no legislative remedy, then what? It’s bots versus bots. May the best bots win.