Why messaging bots are a looming security threat

Bots are taking over our messaging apps. 

Last week, at its F8 developer conference, Facebook revealed the first wave of Bots for Messenger. These automated, interactive programs respond to natural language and allow users to shop, order food, read the news and get personalized weather forecasts — all without leaving the Messenger app.

Separately, messaging app Kik also revealed its bot store, while Slack and Telegram have been experimenting with bots for some time. Microsoft, meanwhile, made a big push for bots at its Build 2016 conference, introducing developer tools for creating bots for Skype and other Microsoft services.

While bots are undoubtedly a big deal in tech right now, one area we've heard little about is security. That's because how these platforms will tackle it is still largely up in the air, even though security experts say bots could present a unique threat compared with typical malware and other malicious software.

Right now, most bots have a pretty narrow focus: you can order food or catch up on headlines or shop for a new pair of shoes — tasks you're likely used to completing through websites or apps. But, unlike the web, which often provides at least a few signals that an interaction is secure (for instance, the lock icon in your browser, the security certificate, or even simply the URL), there's no obvious way to tell a good bot from a bad bot. 
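
To make the contrast concrete, here's a minimal sketch, in Python using only the standard library, of the kind of check the web makes possible: inspecting the TLS certificate a site presents before you hand over any data. The hostname is a placeholder for illustration; nothing comparable is exposed to someone chatting with a bot inside a messaging app.

```python
import socket
import ssl

# Placeholder host for illustration only.
hostname = "example.com"

# The default context verifies the certificate chain and the hostname --
# the same checks that sit behind the browser's lock icon.
context = ssl.create_default_context()
with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("issued to:", dict(field[0] for field in cert["subject"]))
        print("issued by:", dict(field[0] for field in cert["issuer"]))
        print("expires:", cert["notAfter"])
```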

What's more, bots haven't been around long enough for users to become savvy about which ones come from legitimate sources and which are potential bad actors. Think of email phishing scams: While it's not uncommon for a scammer to send an email purporting to be from, say, your financial institution, most email software has gotten pretty good at flagging these types of messages so they're accompanied by a warning or go straight to your junk folder.

But there's no analogous mechanism for bots. Hypothetically, you could begin interacting with, say, a shopping bot and have no idea that it's a fake meant to steal your credit card info or other personal information. 

What's at risk 

While other types of bots have been closely followed within the security industry for years, the consumer-facing conversational bots used in Facebook Messenger, Slack, Kik, Telegram and other social apps are still new enough that they haven't been extensively studied. Still, just as mobile apps could have hidden malware, bots could pose a significant threat to users.

Because bots are embedded within the messaging and social apps we're already using, they could be even better positioned to carry out certain exploits, like mining personal data or harvesting login credentials, says Rami Essaid, CEO of Distil Networks, a security company that specializes in bot attacks.

"What’s potentially dangerous is they [bots] are being built into the app. Without a ton of scrutiny you could potentially have those trojan horses just built into apps, they don’t have to work from the outside in — they’re already in."

Essaid notes that while it's too early for this to be an immediate threat — we're only just starting to see the first wave of conversational bots hit social apps — platforms will need to step in to identify and expunge malicious bots.

"Once you can execute code on any platform, the world’s your oyster in terms of what you can and can’t do, and it’s just going to come down to the scrutiny of the Facebooks and these messenger apps to review and keep all of their apps clean," Essaid says.

So what about the platforms?

Complicating all this is that, unlike the app stores, which are mostly kept in check by Apple and Google, it's up to each app to police the bots on its platform. And each app has different policies in place for how it deals with developers.

Some apps, like Telegram and Slack, are relatively open — just about any developer can cobble together a bot and make it available to other users. Facebook, on the other hand, is taking a more cautious approach. Messenger bots are still in beta, so each bot is currently reviewed individually before it's approved to roll out to all of Facebook's users.

In fact, Facebook says protecting users' security and privacy is one of the bigger factors in why it is taking a slower approach.

"We have a lot of [security] policies and that's actually one of the main reasons we're rolling out slowly right now," Facebook's director of product management for Messenger Peter Martinazzi told Mashable. "It started as a beta program because we want to make sure we have the best ways to enforce violations as they come up." 

He declined to discuss details around how individual developers are vetted but noted the company is actively enforcing various platform policies around security to protect users. The company will also be watching closely to see how users interact with bots, he said. 

"There's a lot of user signals, like whether someone is marking something as spam whether they're blocking the bot, and all of that will help factor in how we'll monitor when things are behaving well and people are having a good experience."

Though careful scrutiny is undoubtedly a good thing, this approach is far from foolproof. Look at Apple, which, despite carefully reviewing apps before they're allowed into the App Store, has still let dozens of malware-ridden apps slip through over the years. It seems almost inevitable that a shady bot could eventually get past even the strictest security policies once the messaging platforms begin to scale.

How to protect yourself

Unfortunately, there isn't a foolproof way of making sure the bots you're using in messaging apps are only doing what they say they are — at least, not yet. "Think back to how long it took us to get any kind of malware detection on mobile devices," Essaid says. "There’s going to be a big window of time before any kind of antivirus comes out for these platforms."

In the meantime, he recommends users follow the same practices they would when downloading mobile apps or other software. First, make sure the bots you're using come from a trusted source. Second, don't forget to uninstall or deactivate any bots you're no longer actively using to minimize potential risk.

Certainly, the surest way of protecting yourself would be to ignore bots altogether, but that would also mean cutting yourself off from the considerable convenience they promise. The jury is still out on whether the current crop of messaging bots actually lives up to the hype, but with huge investments from Microsoft, Facebook and (reportedly) Google, it seems likely they'll play an increasingly significant role in our digital lives.

Security has always been about finding a balance between convenience and protection. If bots truly do deliver on the former, does that inherently swing the pendulum away from the latter? Or is this simply the same security growing pains that every nascent platform encounters? For bots to truly take off, those questions might need answering sooner rather than later.