Known Systems. Hidden Risks. | Episode 2 — CCTV Isn’t Just Watching Anymore: It’s Talking, Sharing, and Sometimes Exposing You
By Luke, Co-Founder and Technology Leader, CMC Consultancy Partnership
Introduction — The Illusion of Containment
For decades, CCTV carried a simple assumption: containment.
A closed circuit. A controlled system. Cameras feeding into a recorder, monitored by authorised personnel and largely isolated from the outside world. For many security professionals, that mental model still holds—because operationally, very little appears to have changed. Cameras are installed, images are recorded, incidents are reviewed.
It works.
But what has changed—quietly, over time—is how those systems operate beneath the surface.
Modern surveillance is no longer closed. It is networked, integrated, and increasingly exposed—not through failure, but through design. Standards such as ONVIF enable interoperability across devices. Technologies like SRT and WebRTC enable remote access from virtually anywhere.
These are not flaws. They are the features that make modern systems powerful.
But they also change the nature of risk.
Because systems that were once passive observers are now active participants on the network—communicating, responding, and in some cases, revealing more than intended.
When the System Introduces Itself
Modern CCTV systems do not sit quietly on a network. They announce their presence.
Protocols designed for ease of deployment, such as ONVIF's WS-Discovery, allow devices to discover one another automatically, making the installer's work easier. Cameras, recorders, and management platforms can identify and connect with minimal configuration.
Operationally, this is efficient.
From a security perspective, it introduces a different reality.
That same mechanism does not distinguish between intended and unintended interaction. Any device within the same environment—whether authorised or not—can observe what is being presented.
In practical terms, this means a system can begin to describe itself:
How many devices exist
How they are structured
How they are identified within the environment
This is not an exploit. It is normal behaviour.
But it provides insight—insight that, in the wrong context, becomes intelligence.
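The self-description behaviour above can be sketched in a few lines. The probe below follows the WS-Discovery pattern that ONVIF devices respond to; the sample reply, the device address inside it, and the helper names are invented for illustration, not taken from any real deployment.

```python
# Sketch: how a WS-Discovery probe (the mechanism ONVIF devices use to find
# one another) lets any host on the segment enumerate cameras. The multicast
# group and port are defined by the WS-Discovery specification; the sample
# ProbeMatch reply below is hypothetical and heavily trimmed.
import re
import uuid

WS_DISCOVERY_ADDR = ("239.255.255.250", 3702)  # spec-defined multicast group

def build_probe() -> bytes:
    """Build a minimal WS-Discovery Probe message."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing">
  <e:Header>
    <a:MessageID>uuid:{uuid.uuid4()}</a:MessageID>
    <a:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</a:To>
    <a:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</a:Action>
  </e:Header>
  <e:Body><w:Probe/></e:Body>
</e:Envelope>""".encode()

def extract_service_urls(probe_match_xml: str) -> list[str]:
    """Pull the advertised service addresses out of a ProbeMatch reply.

    Every device that answers volunteers its own management URL --
    this is the "system describing itself" behaviour discussed above.
    """
    return re.findall(r"<\w+:XAddrs>(.*?)</\w+:XAddrs>", probe_match_xml)

# Hypothetical reply a camera might send (name and IP are invented):
sample_reply = (
    "<d:ProbeMatch>"
    "<d:XAddrs>http://192.168.1.64/onvif/device_service</d:XAddrs>"
    "</d:ProbeMatch>"
)
print(extract_service_urls(sample_reply))
# -> ['http://192.168.1.64/onvif/device_service']
# Sending build_probe() to WS_DISCOVERY_ADDR over UDP would solicit
# replies like this from every listening device on the segment.
```

No credentials are involved at any point, which is the point: the information is offered, not taken.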
Access — The Assumption That Still Holds Too Long
If discovery creates visibility, access determines exposure.
And here, one of the most persistent issues in CCTV remains unchanged: assumptions around control.
Systems are installed, configured, and left to operate. Credentials are set, remote access is enabled, and over time, the system becomes part of the background.
It is trusted.
But modern deployments rarely remain internal. Remote viewing, third-party support, and integration with other systems mean that access pathways extend far beyond the original design.
Platforms such as Shodan simply catalogue what is already exposed. They do not create the risk—they reveal it.
What they consistently show is not sophisticated compromise, but something more straightforward:
Systems that are accessible because the assumptions around them were never revisited.
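At its simplest, what a scanner like Shodan catalogues comes down to which ports answer. A minimal sketch of that check, reduced to a single host, assuming common CCTV service ports; the recorder address in the usage comment is illustrative, and this should only ever be pointed at an estate you are authorised to assess:

```python
# Sketch: a local version of the exposure check that internet-wide scanners
# perform at scale. A port that accepts a TCP connection is "exposed" in the
# sense used above -- reachable, whether or not anyone intended it to be.
import socket

# Common CCTV-related service ports (labels are informal defaults)
CCTV_PORTS = {554: "RTSP", 80: "HTTP/ONVIF", 443: "HTTPS", 8554: "RTSP (alt)"}

def exposed_services(host: str, ports=CCTV_PORTS, timeout: float = 0.5) -> list[str]:
    """Return the service labels for every port that accepts a TCP connection."""
    open_services = []
    for port, label in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_services.append(label)
    return open_services

# Example: audit a recorder on your own network (address is illustrative)
# print(exposed_services("192.168.1.64"))
```

The question the output answers is the one raised above: not "was the system compromised?", but "what was reachable all along?"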
A Real-World Example — When the System Wasn’t the Problem
In a recent engagement within a large organisation in the UK’s financial district, the brief was to review a CCTV system that, on the surface, was performing exactly as expected.
The platform was well-regarded. The deployment was considered. The system was in active use.
But two issues had begun to emerge:
Challenge One — When a Single Device Changes the Risk Profile
The first issue presented as a system fault.
The CCTV platform had been configured with a structured alert hierarchy—designed to guide operators from awareness through to action, while also flagging system-level issues.
One category of alerts—those relating to infrastructure—began to behave inconsistently.
At first, this appeared to be a configuration issue.
It wasn’t.
Investigation identified that a single camera within the estate was operating outside of expected parameters. Not because it was defective, but because it was more capable than the surrounding system had accounted for.
Over time, that device had effectively become an independent access point within the environment.
Not replacing the CCTV system—but bypassing the assumptions around how it was accessed and controlled.
This introduced two immediate concerns.
First, the device allowed a level of interaction with the surveillance environment that would normally be restricted to authorised operators. Not full system control—but enough visibility and influence to alter how that device behaved and presented itself.
Second, it introduced the ability to retain information locally, outside the standard recording and audit pathways of the primary system.
More subtly, the device was also interacting with the wider environment—identifying other cameras across the estate based on how they were exposed. Not their video—but their presence, their naming, and by extension, their location.
Why This Was Significant
At no point did the core system fail.
Recording continued.
Monitoring continued.
Operations continued.
But the assumption of centralised control was no longer accurate.
A single device had created:
An alternative path of visibility
A partial duplication of capability
And a source of information not governed in the same way as the primary system
The system was working exactly as designed.
The problem was that not every part of it was being considered as part of the system.
Challenge Two — When Intelligence Is Layered Without Context
The second challenge was not a fault, but an enhancement.
The organisation was exploring the introduction of AI into their surveillance environment—seeking to improve detection and reduce reliance on manual monitoring.
A solution was trialled that sat over the existing system, analysing video streams and generating alerts.
On paper, it was straightforward.
In practice, it introduced a new layer of complexity.
When the System Sees, But Doesn’t Understand
Across multiple environments where similar technologies are deployed, a consistent pattern emerges.
The cameras capture everything.
The system processes everything.
And yet, certain moments still pass without recognition.
In retail environments, behaviour that would raise concern for experienced staff—pacing, repeated interaction, visible agitation—often remains unclassified.
In public spaces, variability becomes the baseline. Movement that may indicate intent blends into the broader pattern of activity.
In structured environments, such as schools or corporate sites, behaviour that feels out of place—lingering, hesitation, repetition—remains within acceptable bounds until a rule is explicitly broken.
The system is not failing.
It is operating within its design.
Why AI Surveillance Systems Don’t See Everything
Understanding the Boundaries of Machine Perception
The limitations of AI in surveillance are not isolated failures. They are the result of how these systems are built.
They are trained on data—predominantly normal activity—and optimised to detect defined patterns. This makes them highly effective at recognising what is typical, but far less capable of identifying subtle deviation.
They observe behaviour, but do not understand context.
They operate on thresholds, filtering out uncertainty.
They analyse moments, not always progression.
And fundamentally, they are designed to detect—not to interpret ambiguity or predict intent.
This creates a predictable blind spot.
The early stages of escalation—the moments where intervention would be most effective—are often the least defined, and therefore the least likely to be flagged.
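The blind spot above can be made concrete with a toy example: a fixed-threshold rule, standing in for a real analytics engine, compared with the trend check a human operator applies implicitly. The scores, threshold, and window size are invented for illustration.

```python
# Toy illustration of the escalation blind spot: a threshold-based detector
# flags only frames whose activity score crosses a fixed limit, so a slow
# build-up that stays under the line is never flagged -- even though the
# trend is obvious to a human. All numbers here are invented.

THRESHOLD = 0.8  # the "rule": alert only above this score

def flagged_frames(scores: list[float], threshold: float = THRESHOLD) -> list[int]:
    """Indices of frames a rule-based system would alert on."""
    return [i for i, s in enumerate(scores) if s > threshold]

def escalating(scores: list[float], window: int = 4) -> bool:
    """The implicit human check: has activity risen steadily over recent frames?"""
    recent = scores[-window:]
    return all(b > a for a, b in zip(recent, recent[1:]))

# A slow build-up: pacing, repeated interaction, visible agitation...
slow_escalation = [0.30, 0.40, 0.50, 0.60, 0.70, 0.75]

print(flagged_frames(slow_escalation))  # -> []    no single frame breaks the rule
print(escalating(slow_escalation))      # -> True  yet the trend is unmistakable
```

Nothing in the first function is wrong; it does exactly what it was designed to do. The gap is that the design never asked the second question.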
The intersection of AI and surveillance infrastructure raises questions significant enough to warrant dedicated examination. We will be returning to this subject in a forthcoming bonus episode of the Known Systems. Hidden Risks. series — exploring in depth how artificial intelligence is being layered onto surveillance environments, and where the blind spots it creates may prove more consequential than the visibility it adds.
A Familiar Pattern
What emerges from both challenges is not a failure of technology.
It is a pattern:
Systems becoming more capable
Environments becoming more connected
And overall visibility becoming less clear
Each layer—devices, integrations, AI—adds value.
But each also adds complexity, and with it, distance from a complete understanding of how the system behaves as a whole.
What This Means on Monday Morning
For most security professionals, the issue is not awareness—it is clarity.
Because operationally, everything still appears to work.
The cameras are recording.
The system is accessible.
The environment is monitored.
But a system can be working—and still be exposed.
The more relevant question is not whether the system functions, but:
How much of it is visible, accessible, or open to influence beyond those who are meant to control it?
Modern surveillance does not exist in isolation. It sits within shared environments—networks, integrations, third-party interactions—where boundaries are less defined than they appear.
Responsibility, too, is often distributed.
Not through negligence, but through evolution.
And that creates a gap.
Not in capability—but in understanding.
A Shift in Perspective
This does not require technical redesign.
It requires a shift in mindset.
From assuming the system is contained…
To questioning where its boundaries actually are.
From assuming access is controlled…
To understanding how that control is maintained.
From assuming visibility is one-directional…
To recognising that the system itself may be visible in ways not previously considered.
A More Useful Question
The most important question is no longer:
“Is our CCTV system secure?”
It is:
“Do we fully understand how exposed it is within our environment?”
Because in most cases, the risk is not a single failure.
It is a collection of reasonable decisions made over time—each one logical, but together, changing the nature of the system.
Conclusion — Seeing the System Clearly
CCTV still does what it has always done.
It observes.
It records.
It provides visibility.
But it now does so within an environment where visibility is no longer one-directional.
The system is not just watching.
It is being observed, interpreted, and in some cases, exposed.
The question is no longer whether it works.
It is whether it is understood.
Where CMC Fits
CMC operates at the intersection of physical security systems and networked environments.
We help organisations move beyond operational confidence to true system understanding—assessing how surveillance systems are deployed, how they interact, and where exposure exists.
Because in modern environments, a system that functions…
is not necessarily a system that protects.