Why familiar faces in an organization often hide the biggest risks

A cybersecurity case leaders rarely want to hear

The breach did not come through the firewall. It came through a smile. On paper, the organisation was doing everything right: firewalls patched, antivirus updated, external penetration tests passed. The board slept well, believing the risk lived “outside”: hackers in hoodies, foreign IP addresses, dark web threats.

The reality was closer. Much closer. The breach began with a familiar face. A long-serving systems administrator. Ten years in. Trusted and dependable. Always available when systems went down at night. The kind of person whose access requests were approved without hesitation.

That is how most serious cyber incidents begin.

Familiarity is not trust. It is exposure.

In cybersecurity, the biggest threat is not malicious intent. It is unquestioned access.

The administrator did not set out to steal data. That is important. He was under pressure: personal debt, school fees, side hustles. He reused credentials across systems “to save time.” He disabled certain logs because “they slowed the system.” He shared admin passwords informally with a colleague during a crisis and never rotated them back.

None of this triggered alarms. Why would it? He was one of “us.”

Then a phishing email landed in his inbox. Not dramatic. Not sophisticated. It referenced an internal system upgrade and used language copied from previous internal emails. He clicked. The attacker did not need to break in. They were invited.

Within hours, lateral movement had begun. Privileged access meant the attacker could see everything: user directories, financial systems, and backups. The breach went undetected for weeks because the activity looked normal. It was executed using a legitimate account, at normal hours, by someone who had always been there.

This is the part boards struggle with: cybersecurity fails socially before it fails technically.

Why familiar faces are the hardest risks to see

First, they blend into noise. Security teams are trained to look for anomalies. Familiar users generate none. Their behaviour becomes the baseline.

Second, leaders override controls for them. “He needs quick access.” “She has been here for years.” Temporary exceptions accumulate. Cyber risk compounds quietly through kindness.

Third, reporting lines blur. When someone is both critical to operations and deeply trusted, no one wants to challenge them. Reviews become ceremonial. Access recertification becomes box-ticking.

I have seen this pattern across banks, universities, and government agencies. The longest-serving staff often carry the widest, least-reviewed access. Not because they are bad, but because no one ever went back to redesign the system around growth.

The red flags that were missed

In this case, the signs were there. System logs were thinner than expected. Privileged access had never been reduced despite role changes. Backups were accessible from production accounts. Alerts were configured, but no one reviewed them regularly.

Most tellingly, cybersecurity was still framed as an IT issue. The board asked about tools, not behaviours. Budgets were approved for software, not for access governance or insider threat modelling.

The breach was discovered only after customer data appeared on a forum. By then, the question was no longer “how did this happen?” but “why didn’t we see it coming?”

The lesson for leaders

Cybersecurity maturity is measured by how you manage the people you trust most. If your most familiar faces have never had their access challenged, you are exposed.

If your cybersecurity dashboards never discuss insider risk, you are blind. If your board is comfortable, your organisation is not safe.

This is not an argument for suspicion. It is an argument for discipline. Trust should trigger stronger controls, not weaker ones.

What boards and executives must do differently

Reframe cybersecurity as an organisational risk, not a technical one. Demand regular privileged access reviews led independently of IT. Rotate credentials ruthlessly. Separate operational heroism from control design.
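“Rotate credentials ruthlessly” only works if rotation is checked against a policy, not memory. As an illustrative sketch — the 90-day threshold and credential names here are hypothetical assumptions, not a stated policy from the case — a recurring check might look like this:

```python
from datetime import date, timedelta

MAX_CREDENTIAL_AGE = timedelta(days=90)  # hypothetical policy threshold

def credentials_due_for_rotation(last_rotated: dict[str, date],
                                 today: date) -> list[str]:
    """Return credential names whose last rotation exceeds the policy age."""
    return [name for name, rotated in last_rotated.items()
            if today - rotated > MAX_CREDENTIAL_AGE]

last_rotated = {
    "db_admin": date(2024, 1, 15),    # shared during a crisis, never rotated back
    "backup_svc": date(2024, 11, 20),
}
print(credentials_due_for_rotation(last_rotated, date(2024, 12, 31)))  # ['db_admin']
```

A check like this, run on a schedule and reported outside IT, turns “rotate ruthlessly” from a slogan into an auditable control.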

Most importantly, ask one contrarian question at the board level: “If I wanted to steal from us quietly, who would I need to be?”

The answer is rarely a stranger. Cybersecurity does not fail because leaders do not care. It fails because they care selectively. And familiarity is the most dangerous selection bias of all.

© 2025 All rights reserved Institute of Forensics and ICT Security | IFIS is the training arm of Summit Consulting Ltd