In the early morning of September 9, 2016, Bill Moore, CEO of the Austin-based walkie-talkie app company Zello, contacted the Middle East Media Research Institute. He was seeking a copy of a report MEMRI had recently published describing how ISIS members and supporters were using Zello, which allows people to send voice messages to each other in both private and public channels. Moore had learned about the findings through a Google Alert.
“Can you share a copy of the report explaining ISIS uses Zello? I’m the CEO of Zello,” the message read, according to emails reviewed by WIRED and confirmed by Moore and MEMRI.
Hours later, MEMRI deputy director Elliot Zweig sent him the report. While MEMRI hadn’t collected actual messages, its findings included screenshots of Zello users whose avatars featured photos of ISIS’s iconic black flags, and public channels with names like “Islamic State Channel.” One channel called simply “Jihad.” described itself this way: “For the Brothers who desire to be with Mujahideen & to talk about Jihad and Islam.” Some of the channels had been advertised by ISIS sympathizers on another encrypted app, Telegram. The list wasn’t comprehensive, and didn’t reference any specifically troubling conversations; it was just a snapshot of accounts and channels MEMRI found easily at the time.
In his response, Zweig offered to connect Moore with the executive director of MEMRI, Steven Stalinsky. When he heard nothing back, Zweig extended the offer again. This time, Moore replied: “Confirming receipt, thank you very much. No need from our side to discuss now.”
A few weeks later, Zweig tried one more time. “We note that the ISIS and other jihadi accounts mentioned in our report are still active on your service, and would like to again offer the opportunity for a briefing/conversation with our executive director,” he wrote.
Moore never got back in touch—that is, until just last week, when MEMRI published yet another report showing that nearly all of the channels it flagged in 2016 were still live.
‘We need to do a better job.’
Zello CEO Bill Moore
Over those 18 months, Zello’s audience has grown dramatically. The app, which launched in 2011 and now boasts 124 million users worldwide, briefly topped the US App Store last fall when hurricane victims in Texas and Florida used it to communicate with rescuers. In a single week last September, Zello accrued six million new users. The company has also scaled its paid, enterprise product, called ZelloWork, which is used by major hotel chains and retailers.
Unlike other chat apps, including WhatsApp and Telegram, Zello is almost entirely voice-driven. Users can join public channels about a given topic to hear what other members are saying, sort of like a two-way radio. Or they can create private, encrypted channels with individuals or groups. Zello doesn’t retain a record of any public voice messages. But those voice messages do get saved onto the phones of other users who have heard them.
Over the years, Zello has also found itself, like so many tech platforms, associated with terrorist activity. Last April, a man who killed five people and injured 14 others in Stockholm by driving a truck into a crowd reportedly used Zello to discuss his plot before, during, and after the attack. Meanwhile, Moore says the company has received subpoenas from law enforcement seeking to monitor terror suspects on Zello. And then there are the MEMRI reports.
Despite these signals, the Austin-based company has apparently taken a largely passive stance toward policing terrorist-affiliated public channels and accounts. Even as larger tech companies like Facebook, YouTube, and Twitter have faced years of congressional interrogation and public pressure over how they filter similar content, Zello has dedicated limited resources to this type of moderation. The company relies entirely on users to flag problems, an action that doesn’t always guarantee results.
That hardly makes it unique among encrypted messaging apps. The swarms of terrorist activity on Telegram are by now well documented. But MEMRI’s Stalinsky says Zello’s inaction even in the face of clearly presented evidence is cause for concern.
“If someone is posting an ISIS avatar or the account is called Islamic State, that should have been picked up right away,” Stalinsky says. “There’s no excuse.”
Moore says that among Zello’s 25 employees, only “a couple” of part-time staffers are responsible for moderating accounts and channels. Those employees shut down accounts that other users flag; no one at Zello speaks Arabic. Currently, the company filters words like “jihad” from appearing in the Trending Channels section, which surfaces the most popular channels. But anyone can search for terror-related terms and find dozens of public channels in the results.
“I agree it needs more focus. We need to do a better job,” Moore says of the strategy’s shortcomings.
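The gap between Trending and search is easy to see in miniature. The sketch below is a hypothetical illustration of a trending-only blocklist of the kind Moore describes, not Zello’s actual code: flagged terms are suppressed on the Trending surface, but ordinary search applies no filter at all, which is why blocked terms still turn up in search results.

```python
# Hypothetical sketch of a trending-only keyword filter -- not Zello's code.
# Channels whose names contain a blocked term are hidden from Trending,
# but the very same channels still appear in ordinary search results.

BLOCKED_TERMS = {"jihad", "islamic state"}  # illustrative blocklist


def contains_blocked_term(channel_name: str) -> bool:
    name = channel_name.lower()
    return any(term in name for term in BLOCKED_TERMS)


def trending(channels):
    """Return only the channels eligible for the Trending surface."""
    return [c for c in channels if not contains_blocked_term(c)]


def search(channels, query):
    """Plain substring search -- applies no blocklist at all."""
    q = query.lower()
    return [c for c in channels if q in c.lower()]


channels = ["Truckers USA", "Jihad.", "Islamic State Channel", "Storm Chasers"]
# Trending hides the flagged channels...
assert trending(channels) == ["Truckers USA", "Storm Chasers"]
# ...but a direct search still surfaces them.
assert search(channels, "jihad") == ["Jihad."]
```

Closing that gap would mean applying the same check in the search path, or acting on the flagged channels themselves rather than merely hiding them from one list.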
Moore says he didn’t seek MEMRI’s insight back in 2016 because he wasn’t familiar with the organization, and believed the report to be sensationalized. “We dismissed their initial report. That was a mistake,” he says, noting that the company has since asked for MEMRI’s assistance in finding problematic content in Arabic.
MEMRI was founded in 1998 and has researched cyber jihad for about a decade. It’s also not without its critics. The Council on American-Islamic Relations has accused the group of selective bias against Islam, and of mistranslating Arabic content to bolster its point of view. But WIRED independently searched Zello for accounts and channels with overt terrorist language and imagery, and found results similar to those MEMRI reported.
Moore says Zello did shut down two of the channels MEMRI flagged after that first report, including one called Vilayat Kavkaz—the name of a branch of ISIS—which had already been flagged 10 times. But other channels with nearly identical names and ISIS imagery have since popped up in their place. Zello has not reported these accounts and channels to law enforcement. Moore says the company does comply with law enforcement when subpoenaed, though, including requests to keep certain suspicious channels open for monitoring purposes. Moore could not comment on which law enforcement agencies made these requests, or which channels they are monitoring, though the Federal Bureau of Investigation typically handles domestic terrorism cases. The FBI declined WIRED’s request for comment.
But Moore acknowledges that not all of the terrorist-affiliated accounts and channels that remain on Zello remain there because of subpoenas. The channel “Jihad.” remained on the app until Thursday afternoon, though Moore says it had been reported to Zello by a user four years prior, and was included in MEMRI’s report. A simple search for “jihad” or “Islamic State” on Zello still yields dozens of channel results. Though Moore says many of those channels are “dead,” meaning no messages have been sent in them for months or years, Zello doesn’t retain records of the last time a channel was used.
“They should be shut down, and they weren’t, and that’s a problem,” Moore says.
Still, from Moore’s perspective, these dormant accounts are more of a public relations issue than a national security threat. They’re not being used regularly, he says, and channels don’t retain voice messages people have left in the past. That means new subscribers couldn’t listen back to a channel’s old content. “It’s sort of like if a tree falls in the forest and no one’s there to hear it,” Moore says.
Moore also takes issue with MEMRI having based its research on channel and account names rather than actual conversations taking place within those channels. “It’s as if you click a video link to find ‘video deleted,’ there is nothing there but the name,” Moore says. “Likely there are other channels that are active and include criminal activity without an obvious name or description. These may not be public channels, so like a cell phone call on Verizon, no one is listening unless law enforcement has discovered a source likely outside of Zello.”
And yet, Moore’s video analogy isn’t exactly apt. A deleted video exists in isolation. But Stalinsky argues that by allowing terror-related Zello channels with hundreds of subscribers to continue to exist, Zello leaves open a window for even dormant channels to become active again. Since MEMRI’s initial 2016 report, additional channels related to terrorism have popped up, too. “The ones we highlighted aren’t the only ones by any means,” Stalinsky says.
Moore acknowledges that filtering accounts with an ISIS flag avatar, for instance, or those that openly advertise jihad in the channel description, makes sense. “We should do that whether they’re used or not used, because it’s pretty bad optics,” he said. This week, following inquiries from WIRED, Zello banned eight of the channels MEMRI initially spotted.
For Mary McCord, a Georgetown Law professor and a former United States principal deputy assistant attorney general, Zello’s justification sounds eerily familiar. “We heard similar things from Twitter and other platforms back in 2014,” she says. At the time, these platforms also relied on users to flag problematic content. “We heard, ‘We don’t have our own staff of employees who are monitoring what’s being publicly posted on our platform to see if it violates terms of service,'” McCord says.
A dramatic shift has taken place among technology giants since then. According to one former counterterrorism official at the Department of Justice, who requested to speak anonymously, that’s largely because terrorist tactics have changed. Whereas Al Qaeda tended to leave behind a long trail of digital plotting, ISIS has typically delegated attacks to new recruits who become radicalized online. That forced both intelligence agencies and tech companies to change their approach. “The strategy became: This violates your terms of service. Get it down,” explains the official. “Make it as hard as you can for that young person to reach this group.”
Companies like Facebook, YouTube, and Twitter now stamp out terrorist content through a combination of keyword filtering, image recognition, and human moderation. But other, much smaller companies have also proactively built out large content moderation teams. Bumble, the dating app, recently announced it is deploying 5,000 moderators to remove photos of guns—among other things—from its platform.
‘If someone is posting an ISIS avatar or the account is called Islamic State, that should have been picked up right away.’
Steven Stalinsky, MEMRI
Tech giants have also formed a group called the Global Internet Forum to Counter Terrorism, through which they share strategies to prevent the spread of these messages. As part of the partnership, they’ve created a database of known terrorist content, which member companies can proactively ban on their platforms. This approach is far from perfect, but it’s a start. Part of the group’s professed goal is to work with smaller companies and hold workshops to share what they’ve learned.
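The shared-database idea can be illustrated with a minimal hash-matching sketch. This is hypothetical code, not the forum’s implementation: real systems rely on perceptual hashes (such as PhotoDNA) so that near-duplicates also match, whereas an exact cryptographic hash, used here for simplicity, only catches byte-identical copies.

```python
# Hypothetical sketch of matching uploads against a shared database of
# known terrorist content. Real deployments use perceptual hashing;
# SHA-256 is used here only to keep the illustration self-contained.
import hashlib

shared_hash_db = set()  # fingerprints contributed by member companies


def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


def register_known_content(content: bytes) -> None:
    """A member company contributes a known item's fingerprint."""
    shared_hash_db.add(fingerprint(content))


def should_block(upload: bytes) -> bool:
    """Another member checks a new upload against the shared database."""
    return fingerprint(upload) in shared_hash_db


register_known_content(b"known propaganda image bytes")
assert should_block(b"known propaganda image bytes") is True
assert should_block(b"unrelated vacation photo") is False
```

The design point is that companies share fingerprints rather than the content itself, so each platform can block known material without redistributing it.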
Moore says Zello is now looking into working with this group. The company is also preparing to roll out a new feature that would require users to provide a phone number in order to speak in a public channel. The goal is to dissuade bad actors by forcing them to provide personally identifying information.
Zello does face a higher hurdle than some other apps: because it’s a live, voice-based service, it can’t use automation to monitor what people say the way, say, Facebook does. Zello users also often flag content to retaliate against others on the platform rather than to report actual abuses of the system. That makes it difficult to decide just how seriously to take any given report. Moore also notes that the company has to prioritize among a wide variety of potentially illegal activities on its platform.
Meanwhile, Zello doesn’t monitor what goes on in encrypted, private channels. Even Stalinsky acknowledges it would be nearly impossible to keep track of all of those conversations in every language around the world.
But until now, Zello has shown little initiative in doing even the easy part. At a time when tech companies are continually being called upon to answer for the misdeeds of their users, whether they’re terrorists, white supremacists, or Russian trolls, McCord says there are few excuses for failing to act.
“When you decide this is the business you want to get into, and you provide a platform that you have knowledge is being misused by terrorists and would-be terrorists, then you have an obligation,” McCord says. “This whole ‘We’re small, we don’t have the bandwidth’ thing, you should have thought about that when you were setting up the company.”