All-Hazards Social Media Plan Should Address Fake Accounts
They continued popping up and we couldn’t stop them.
A few weeks ago, I participated in a unique emergency exercise sponsored by the MITRE Corporation. The premise was to test whether social media could directly affect an emergency response. After five days of the actual experiment (and countless hours of testing and meetings), we came away with many lessons. The excellent staff at MITRE will publish an official report in a few weeks, to which I’ll defer for general observations and results.
I’ve participated in my fair share of emergency exercises during the last nine years. Usually, there’s a good script that models what is likely to happen.
During this exercise/experiment, though, 200 university students served as the public, providing the real-life mojo and reactions to the mock incidents.
On Day 3, a boatload of whack-a-mole accounts started tweeting. These fake accounts looked and smelled like the real government and university entities in the exercise emergency operations center. They began a heavy assault of false information designed to confuse the students in the field.
For example, one account clearly tried to lure students back to a building that had been evacuated, perhaps into a dangerous situation that would inflict more harm if people returned. Another mole contradicted what the official university account was describing.
And yet another account began the day providing good information, but then turned “evil.”
In the EOC, we noticed this trend. My local perspective pushed me, along with the federal and university PIOs in the room, to firmly establish who was real (us!) and who was not. Frankly, we actively called out the liars. This was new territory for us, so at first we were skittish, but it was important to the safety of the students.
For a few hours, these tweets peppered students, using our organizations’ avatars and similar-sounding, official-looking usernames. As we continued our response, the students joined us and began to use hashtags, such as #falseinfo, to identify imposters.
As usually happens in social media, the self-correcting nature began to work (of course this was a controlled environment, so we could only go so far).
This fake account phenomenon isn’t fresh, of course. There’s the well-documented Shell Oil imposters using @ShellisPrepared, among other examples.
As digital PIOs in this evolving era of shared communications, how strongly have we designed plans to counter potential fake accounts, especially during emergencies? Yes, companies like Facebook and Twitter will eventually take down fake accounts if notified with justification, provided we have a plan in place to contact them. But we need to be ready for whack-a-moles that could appear to deliberately alter the course of an emergency response or tatter an organization’s brand. Rumors are one thing to watch for and respond to, but this was a different experience.
I’m not smart enough to know a lot about cybersecurity and the various threats that may be coming down the Intertubes, but this piece of the experiment sure seemed like a form of cybermeddling at the least.
The whack-a-moles continued popping up, and we eventually beat them down, but the experience compelled us to completely change our thinking and adapt, which is what digital PIOs must do. Did we do it perfectly? No, but it was an emergency experiment and we had some latitude. This exercise was a terrific reminder to develop something akin to an all-hazards social media plan, one that addresses all aspects of how established and emerging tools can be used by organizations and the public.