Digital Jaws: How AI-Generated Wildlife Content Is Reshaping Public Expectations and Creating Real Safety Risks for People and Facilities
Recently, a skier in northern China spotted a snow leopard near her resort. Snow leopards are among the most elusive animals on Earth, and fewer than 6,500 exist in the wild. They live in some of the harshest mountain terrain on the planet, and they avoid humans so consistently that a 2020 study of 261 herders who share habitat with these cats found that, while many had seen snow leopards and had lost livestock to them, not one reported an attack on a human. Biologists agree that there have been no verified cases of a snow leopard killing a person.
The skier got out of her vehicle when she saw the snow leopard. She spent roughly ten minutes approaching the animal, closing the distance to about three meters. She wanted a photograph. Instead, the snow leopard mauled her. A ski instructor who witnessed the attack drove the animal off with his poles. She survived, thanks in part to her helmet, but footage of the aftermath (the woman lying motionless in the snow, the leopard sitting nearby) circulated on social media within hours.
Here’s the question that matters for anyone who works with animals or cares about public safety: Where did this woman get the idea that approaching a wild predator for a selfie was reasonable? What shaped her expectations so thoroughly that she spent ten minutes walking toward an animal that every piece of scientific literature describes as dangerous and reclusive?
The answer is fifty years old, and it’s about to get much worse.
In 1975, a movie about a shark changed how an entire generation related to the ocean.
Jaws wasn’t a documentary. Steven Spielberg built a mechanical shark that barely functioned (the crew nicknamed it “Bruce”), and its constant breakdowns forced Spielberg to suggest the shark rather than show it. The result was cinema’s first summer blockbuster and, as marine researchers have documented in the five decades since, a fundamental shift in human behavior toward sharks.
Shark-fishing tournaments exploded in popularity after the film’s release. George Burgess, who spent decades directing a shark research program in Florida, described it as a “collective testosterone rush” of recreational hunters suddenly determined to land their own monster. Populations of large sharks along the eastern coast of North America dropped by more than half in the decades following the film. Spielberg himself told the BBC in 2022 that he regrets the decimation his movie helped cause. One fictional shark, shown for maybe four minutes of screen time, rewired public perception so thoroughly that we’re still measuring the damage half a century later.
The skier in China wasn’t thinking about Jaws. But she was operating on the same principle: her expectations of how an animal would behave had been shaped by media, not reality. Somewhere in the accumulated hours of content she’d consumed (videos of big cats acting docile, wildlife “encounters” that looked magical, animals appearing to cooperate with humans for the camera) she learned that this was possible. That a snow leopard might hold still for her photo, or even interact with her peacefully.
That expectation nearly killed her.
Now consider what happens when the content that shapes these expectations isn’t limited to Hollywood films and viral clips captured by luck. When anyone with a laptop can generate photorealistic videos of animals doing things they would never do. When the volume of that content increases every single day, and the quality improves every few months, and the algorithms that control what people see are optimized for emotional engagement rather than accuracy. That’s not a hypothetical future. That’s the world zoos and aquariums are operating in right now.
Chelsea James published a piece through Sedgwick County Zoo in late 2024 titled “The Reality of AI Animal Content.” When she discussed it on the Rossifari Zoo News podcast with Jon Rossi, the response from the zoo community was immediate. As one colleague put it: “This is what I’ve been trying to say.”
What Chelsea documented was straightforward: AI-generated animal videos are flooding social media, and they’re becoming impossible to distinguish from authentic footage. Bears bouncing on trampolines, apes caught on doorbell cameras, gorillas smashing through glass, and wild animals cuddling with children in ways that would never happen safely. Each video is racking up millions of views, shared by people who genuinely believe they’re celebrating wildlife.
The technology that makes this possible has become accessible to anyone with an internet connection. What took Hollywood studios and professional effects teams months or even years to create can now be generated in seconds. The quality improves constantly, and the volume increases every day.
For those of us who work in operational risk and security, this represents something more than a curiosity or a nuisance. It represents a category of threat that most facilities haven’t planned for, one that affects the safety of employees, animals, and visitors in ways that might not be obvious until you’re already in the middle of a crisis.
Before we talk about what to do, we need to understand why these videos work so well on human brains. This matters because it explains why smart, well-meaning people make dangerous decisions about animals and why telling them to “be careful” isn’t enough.
Visual processing happens fast. When you see an image, any image, your brain’s emotional centers respond before your analytical thinking has a chance to engage. This isn’t a flaw; it’s how humans evolved to survive. Our ancestors needed to react to threats instantly, not pause to evaluate whether the shadow in the grass might be a predator.
The problem is that we’re now applying that same neural architecture to content specifically engineered to trigger emotional responses. A video of a baby animal in apparent distress activates your empathy and your protective instincts before your prefrontal cortex can ask whether it’s real. By the time the skeptical part of your brain catches up, you’ve already liked, shared, and moved on. The algorithm noted your engagement and queued up more of the same.
AI-generated animal content exploits this gap with ruthless efficiency. It’s designed to feel magical, surprising, heartwarming, or terrifying, whatever maximizes engagement. The creators don’t need you to remember the video or learn anything from it. They need three seconds of emotional response, and they’ve got tools that can manufacture that response at an industrial scale.
This is, in the most literal sense, social engineering. We usually use that term for phishing emails and phone scams, but the mechanics are identical: manipulate how people perceive reality, exploit emotional responses, and change behavior. The only difference is that these videos aren’t targeting specific victims. They’re targeting everyone, all the time.
The skier in China didn’t approach that snow leopard because she was stupid. She approached it because years of accumulated media exposure had trained her emotional brain to expect a certain outcome, and her analytical brain never got the chance to override that expectation before she was already out of the car and walking toward a predator.
I should tell you why this particular issue matters to me personally.
Years ago, I worked as a research associate studying gorillas at a zoo. Bachelor group, four silverbacks living together, which is exactly as impressive as it sounds. My job was observation: sitting for hours with an oversized calculator-type device, documenting behavior, learning to read the subtle communications that most visitors never notice. The tension in a male’s shoulders, the way one positions himself relative to another. The complex negotiations that happen without violence because gorillas, despite what viral videos might suggest, are not aggressive animals. They’re intelligent, social creatures with clear ways of setting boundaries, if you know how to read them.
One afternoon, I was conducting observations from a mesh area within the enclosure, a secure space that connected to the keeper areas. The public could see me, and the gorillas could interact with me but not touch me. The troop happened to be sitting at the front of the glass that day, their backs to the crowd.
A group of visitors approached the glass. I didn’t pay much attention at first; I was there for the gorillas. But they wanted my attention, so they waved and jumped up and down. Then they banged on the glass.
I turned and mouthed: “Don’t bang on the glass.”
They ignored me. Instead, they made a camera motion with their hands, pointed at the gorillas, and twirled their fingers in the universal gesture for “turn around.”
They wanted me to make the gorillas face them so they could take a picture.
I shook my head “no.”
They made exaggerated sad faces, then frustrated ones. Then they flipped me off and walked away.
Their understanding of these animals was so distorted that they genuinely believed gorillas would perform on command, like the creatures they’d seen in movies and viral videos. They thought I was being difficult by refusing to make the animals cooperate.
Here’s the reality: those gorillas weighed between 350 and 600 pounds each, and they are wild animals despite living in a zoo. They didn’t care what I wanted them to do. I couldn’t have made them turn around any more than I could have made them tap dance.
That interaction happened before AI-generated videos existed. Those visitors arrived with distorted expectations because of decades of media: movies, commercials, viral clips that taught them animals are here to perform for humans.
Now imagine that same expectation-warping content being generated by anyone, shared everywhere, at a volume that dwarfs everything that came before. That’s what AI-generated animal videos represent: the industrialization of a problem that already existed. And the facilities that house real animals, along with everyone who encounters wildlife anywhere, are the ones who have to deal with the consequences.
Let’s talk about what can go wrong, because this is where operational risk planning has to start.
Scenario one: You’re already managing a real incident when a fake video surfaces.
An animal at your facility has escaped its enclosure. Your team is executing your emergency protocols. Staff are securing areas, communicating with first responders, and managing visitor safety. Then someone posts an AI-generated video claiming to show one of your animals attacking a child.
The video goes viral within minutes. Local news picks it up. Your phone lines are overwhelmed. Emergency dispatches are suddenly fielding calls about an incident that didn’t happen while your team is trying to manage the incident that did. Families are panicking, and social media is exploding. You’re now fighting on two fronts: one real, one manufactured, with the same limited resources.
This is not theoretical. We’ve seen information operations used to create confusion during real-world events in other sectors. Zoos and aquariums, with their large visual footprints and their complicated relationships with animal advocacy movements, are obvious targets for this kind of amplification.
Scenario two: A threat actor deliberately targets your facility.
Someone with an agenda (an extremist activist group, a disgruntled former employee, someone who simply wants to cause chaos) creates an AI-generated video showing animal abuse at your facility. The enclosures are recognizable, and the species are ones you house. The footage is entirely fabricated.
By the time your communications team drafts a response, the video has been shared tens of thousands of times and has over a million views. Local media is calling. Donors are alarmed. Regulators are asking questions. Your staff is demoralized, watching accusations they know are false spread faster than anyone can stop them. The reputational damage may linger for years, long after the video is debunked, if it ever is fully debunked. Some percentage of the public will always remember seeing “that video from the zoo” without remembering it was fake.
Scenario three: Visitors arrive with dangerous expectations.
This is the snow leopard problem, playing out at your facility every day in smaller ways. Visitors who have watched hundreds of AI-generated videos showing wild animals behaving like pets arrive expecting animals to perform, to cooperate, to pose. They reach over barriers, ignore warning signs, and get frustrated when the animals don’t act like the ones in the videos.
Your staff must constantly manage these interactions. Sometimes the consequences are minor: disappointment, complaints, negative reviews. Sometimes they’re serious: a visitor reaches into an enclosure, an animal reacts defensively, someone gets hurt.
The woman who approached that snow leopard spent ten minutes closing the distance, so she had time to reconsider. She didn’t, because everything in her media-trained brain was telling her this would be fine. Those same dynamics play out at zoological facilities every single day, and AI-generated content is accelerating them.
Scenario four: The slow erosion of trust.
This one is harder to see because it happens gradually. As more synthetic content circulates, people start questioning everything. Your authentic photos and videos are accused of being AI-generated. Your educational messaging gets dismissed. The connection between visitors and real animals, the biggest reason your facility exists, gets harder to build because nobody knows what to believe anymore.
Staff morale suffers when their genuine work is questioned. Conservation messaging falls flat when audiences assume everything might be fake. The trust that took decades to build erodes one viral video at a time.
So, what do you do about this?
Most zoos and aquariums don’t have dedicated IT departments, let alone cybersecurity teams. That’s fine. This isn’t a problem that requires technical expertise to address. It requires planning, communication, and the same operational discipline you already apply to other safety risks.
Update your crisis communication plan.
If your business continuity or disaster recovery planning doesn’t include synthetic media scenarios, it needs to. You need to think through, in advance, how you would respond if:
A fake video surfaces claiming to show an incident at your facility
A real incident occurs and is amplified or distorted by fabricated footage
Your facility has become the target of a coordinated disinformation campaign
Visitors or media ask whether your authentic content is real
Who verifies whether content is authentic? Who drafts public statements? Who talks to the media? Who monitors social channels? You don’t want to figure this out while a fake video is going viral.
Establish a verification protocol.
Before anyone on your team responds publicly to a video claiming to show your facility, you need a quick, reliable way to determine whether it’s real. This means:
Checking internal records: keeper logs, security footage, enclosure cameras
Comparing against your library of authentic facility images
Talking to staff who understand the behavior of the animal(s) in the video
Having someone on staff who knows what AI-generated content typically looks like
The goal is speed without sacrificing accuracy. A premature denial of something real is as damaging as a slow response to something fake.
Brief your entire staff.
Everyone who interacts with the public needs to understand this issue, not as a theoretical concern but as something that affects their daily work.
Guest services staff should know what to say when visitors reference viral videos: “I’ve seen some of those videos online; a lot of them aren’t real. Let me tell you what these animals are like...” Turn the moment into education rather than confrontation.
Keepers and educators should be equipped with talking points about specific viral content relevant to your collection. If there’s a fake video of a gorilla smashing glass circulating, your primate team should be ready to explain what gorilla behavior really looks like and why the video doesn’t match reality.
Security staff should understand how synthetic media could be used to create confusion during an actual incident, and how to escalate concerns through your verification protocol.
Bring in a wildlife photographer.
If you work with a professional photographer, on staff or hired regularly, have them train your public-facing teams on what authentic wildlife photography looks like. How is it composed? What details indicate real conditions versus generated content? What should people look for when evaluating images online? As a bonus, have them explain their editing process for wildlife photography.
Photographers who do this work professionally are watching their credibility erode as AI-generated content floods the market. Many would welcome the chance to help others learn to distinguish real images from fabricated ones. Your staff will be better equipped to answer visitor questions, and you’ll strengthen a relationship with a professional who understands your mission.
Make this part of your public education.
This doesn’t have to be a lecture. In fact, it shouldn’t be.
At family events, set up a “Real or AI?” game station. Show visitors a series of images or short clips and have them guess which are authentic. Give out small prizes. Make it interactive. Kids love this kind of challenge, and parents learn alongside them.
At evening events such as beer nights, wine tastings, and adult education programs, include a segment on AI media literacy. Five minutes on the subject, tied to whatever species you’re featuring, plants the seed without derailing the evening or making it the whole theme.
In your newsletters, include a recurring feature that addresses this directly. On social media, share content that shows what your animals are doing compared to what AI generates. Position your facility as a trusted source of accurate information.
The goal is to have conversations with your staff, your visitors, and your community. The more people understand that this content exists and how to spot it, the less power it has to shape dangerous expectations.
What about the average person scrolling through their feed?
If you’re reading this and you don’t work at a zoo or aquarium, you’re still part of the equation. Every time you engage with AI-generated animal content (even to criticize it), you’re telling the algorithm that this content generates attention. That means more of it gets created and promoted.
Here’s what you can do:
Pause before you share. That moment of “this is amazing, I have to share this” is exactly the reaction this content is designed to trigger. Take a breath. Ask yourself: Where did this come from? Is there a source credited? Does this match what I know about how animals behave?
Look for context. Real wildlife content from legitimate sources typically includes location information, facility or photographer credit, and context about what you’re seeing. Content that exists only to generate engagement usually lacks this, with only a caption designed to trigger an emotional response.
Check the account. Is this a zoo, aquarium, or conservation organization with a track record? Or is it a generic account that posts nothing but viral animal clips with no educational mission? The latter is a red flag.
Don’t engage with content you suspect is fake. Don’t share it, don’t comment on it, don’t even leave an angry reply explaining why it’s fake. All engagement tells the platform the content is working, which means it gets shown to more people. If you want to address it, do so on your own channels without linking to the original.
Report when appropriate. If content violates platform policies (and increasingly, unlabeled AI-generated content is a violation), report it. It’s a small action, but it matters at scale.
Spend time with real animals. This sounds simple, but it’s powerful. When you’ve watched gorillas, elephants, big cats, or marine life in person, you develop an intuition for how animals actually behave. Your brain builds a reference library of authentic behavior, which makes synthetic content easier to recognize.
None of this is going to make AI-generated animal content disappear. The technology already exists, it’s only getting better and easier to use, and the volume of synthetic content will increase, not decrease.
What we can control is how prepared we are to respond. We can develop plans, train staff, educate the public, and build institutional awareness to manage this risk rather than being blindsided by it.
Fifty years ago, one mechanical shark changed how millions of people related to an entire species. Marine biologists and conservationists are still fighting perceptions shaped by a single 1975 movie.
In the winter of 2026, a woman who had absorbed a lifetime of media showing animals as willing participants in human photo opportunities walked toward a snow leopard for ten minutes, certain it would cooperate for her selfie.
We’re not going to prevent every bad decision. But we can stop manufacturing the expectations that lead to them, and we can make sure our facilities, our staff, and our visitors are prepared for a world where seeing is no longer believing.
Stay Safe, Stay Curious,
Kristin King