Hackers are training daily. Are you?

It was 08:17 on a Tuesday morning in Kampala when a mid-sized financial services firm opened for business as usual: staff logging into their systems, coffee cups still warm, unaware that somewhere across the city, a young man in a dimly lit room had already run through three attack simulations before breakfast, refining scripts, testing vulnerabilities, and preparing for the exact environment he had studied for weeks.

By 10:42 a.m., the company had lost access to its internal file server. By 11:15 a.m., mobile money reconciliation reports were corrupted. And by 2:30 p.m., a quiet panic had settled in the office, not because systems had failed, but because nobody could confidently explain how.

That is where I came in, as part of the Summit Consulting Ltd and Institute of Forensics & ICT Security team. I stood in the boardroom that evening, looking at a team of intelligent, experienced executives, and asked: when was the last time your organisation trained like an attacker? In this piece, I will walk you through what really happened, because the lesson is not about technology; it is about discipline.

The attacker trains like a professional athlete

The individual we later identified as Suspect 1, a slim young man with a habit of documenting everything meticulously, had not “hacked” the organisation in one moment of brilliance. That is a myth leaders tell themselves to feel better. He trained daily and had built a replica environment using publicly available information: LinkedIn profiles, job descriptions, and even snippets from staff social media posts. From those fragments, he reconstructed the company’s likely technology stack and internal processes with surprising accuracy. Then he rehearsed repeatedly.

Four things stood out from the forensic reconstruction. First, he did not attack systems; he attacked understanding. He mapped people, roles, and authority flows before touching a single endpoint, and spent a long time on footprinting, gathering as much information about the target as possible. Second, he practised entry points that looked legitimate: password spraying, phishing drafts, and MFA fatigue simulations, all tested in controlled environments before deployment. Third, he refined his timing: he knew exactly when staff were busiest, when attention dropped, and when approvals were rushed. Fourth, he documented failures; every failed attempt improved the next one. That was training, not luck.
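
To see why conventional controls miss this style of rehearsal, consider password spraying from the defender's side. Below is a minimal sketch in Python, assuming authentication logs already parsed into (timestamp, username, source_ip, success) tuples; the format and thresholds are illustrative assumptions, not a reference to any specific product. Spraying is invisible per account but obvious per source: one origin failing against many distinct accounts in a short window.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical parsed auth events: (timestamp, username, source_ip, success)
def find_spraying(events, window=timedelta(minutes=30), min_accounts=10):
    """Flag source IPs that fail logins against many distinct accounts
    in a short window -- the signature of password spraying, which
    per-account lockout thresholds are blind to."""
    failures = sorted(e for e in events if not e[3])  # failed attempts only
    by_ip = defaultdict(list)
    for ts, user, ip, _ in failures:
        by_ip[ip].append((ts, user))
    suspicious = {}
    for ip, attempts in by_ip.items():
        for i, (ts, _) in enumerate(attempts):
            users = {u for t, u in attempts[i:] if t - ts <= window}
            if len(users) >= min_accounts:
                suspicious[ip] = len(users)
                break
    return suspicious
```

Note the design choice: the grouping key is the source, not the account, because a disciplined attacker tries each account only once or twice and never trips a lockout.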

Now compare that with the organisation’s posture. They had conducted a cybersecurity awareness session twelve months prior.

During trainings at the Institute of Forensics & ICT Security, I ask executives to do this exercise live, and I want you to imagine I am standing in front of you now. Take a sheet of paper and write down the last three things your organisation trained on in cybersecurity. Now write down the last three things an attacker is likely training on today.

Pause. Circle the overlap. There is usually none. That gap is where breaches are born.

The entry point was not technical; it was human

Suspect 2, a middle-aged staff member with a reputation for being efficient but often overloaded, became the unwitting entry point. Not because she was careless but because the system around her assumed she would always have time to think. At 09:13 a.m., she received what appeared to be an internal IT escalation email. The language was familiar, the tone matched previous communications, and the urgency was believable.

What most investigators miss, and what defence counsel often attacks, is the question of plausibility. Could a reasonable person have believed this email? In this case, yes, because the attacker had trained on internal communication styles.

Four critical insights emerged.

  • The email domain was spoofed with near-perfect similarity: a single-character difference that most systems did not flag (see the sketch after this list).
  • The message referenced an actual ongoing system update, information gathered from staff conversations on external platforms.
  • The call to action was simple and routine. Re-authenticate access.
  • The timing coincided with a real internal IT activity, creating contextual legitimacy.
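
To make the first bullet concrete: a single-character lookalike is cheap to detect if you look for it. Here is a minimal sketch, where the trusted domain is a hypothetical placeholder and the edit-distance threshold is an illustrative choice, not a description of the firm's actual controls.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
    # carry the row forward for the next character of `a`
        prev = cur
    return prev[-1]

TRUSTED = "ourcompany.co.ug"  # hypothetical placeholder domain

def is_lookalike(sender_domain: str) -> bool:
    """True for domains one character away from the trusted one --
    exactly the near-perfect spoof described above."""
    d = edit_distance(sender_domain.lower(), TRUSTED)
    return 0 < d <= 1  # identical is fine; one character off is the trap
```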

She clicked. Credentials were captured. No alarms were triggered. To drive this point home, try the following now: open your last ten internal emails from IT or finance and study the tone, the structure, the sign-offs. Now imagine you are an attacker trying to replicate that perfectly. Ask yourself: would your current systems detect that imitation? Most organisations realise the truth: their controls are built for obvious attacks, not intelligent ones.

Lateral movement was quiet and disciplined

Once inside, Suspect 1 did not rush. This is where many investigations go wrong. Teams assume attackers move fast. In reality, sophisticated attackers move carefully. Over the next three days, he navigated the system like a patient lawyer building a case, gathering evidence, testing access, and avoiding noise.

Four key behaviours defined this phase.

  • The suspect used legitimate credentials: no brute force, no noise, just normal logins from slightly unusual locations.
  • He escalated privileges gradually, exploiting minor misconfigurations that had been flagged in previous audits but never fully resolved.
  • He blended in. Access patterns mimicked normal staff behaviour, including working hours and system usage sequences.
  • He avoided sensitive systems initially. He built confidence in his access before targeting financial processes.

This is the phase that defence counsel often questions. Where is the proof of malicious intent? The answer lies in patterns, not single events: repeated access to systems outside normal roles, incremental privilege escalation, and data access sequences that do not align with job functions. These are the fingerprints of intent.

Take one user in your system, map their normal access for one week, and then design a scenario where that same access is used for malicious purposes without triggering alerts. If you can design it, someone else already has.
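
As a starting point for that exercise, here is a minimal sketch of the baseline step, assuming access logs as a list of dicts with hypothetical "user", "system", and "timestamp" fields; real logs will need their own parsing.

```python
from collections import defaultdict

def build_baseline(week_of_logs):
    """Record, per user, which (system, hour-of-day) pairs are normal,
    from one week of access events."""
    baseline = defaultdict(set)
    for event in week_of_logs:
        baseline[event["user"]].add((event["system"], event["timestamp"].hour))
    return baseline

def flag_deviations(baseline, new_events):
    """Yield events where a user touches a system, or an hour, never seen
    in their baseline week -- weak signals individually, strong in sequence."""
    for event in new_events:
        key = (event["system"], event["timestamp"].hour)
        if key not in baseline.get(event["user"], set()):
            yield event
```

A single flagged event proves nothing; a run of them, drifting steadily toward sensitive systems, is the pattern of intent described above.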

The financial trigger was subtle, not dramatic

The actual financial manipulation was not a large transfer that would have been detected; instead, Suspect 1 exploited reconciliation gaps between mobile money collections and internal ledger postings. Small adjustments, distributed and almost invisible.

Over five days, multiple transactions were slightly altered before reconciliation, creating a cumulative discrepancy that only became visible when aggregated. This stage was defined by the following.

  1. Attackers target process gaps, not systems. The weakness was in reconciliation timing, not technology.
  2. Small anomalies escape detection when thresholds are set for large fraud.
  3. Internal collusion was not required. The system design itself created an opportunity.
  4. Audit trails existed, but they were not actively monitored in real time.

The loss was not immediate; it accumulated without notice.

Do this yourself: Review your last month’s transactions. Identify all adjustments below your reporting threshold and sum them. What if someone is deliberately operating just below your visibility line?
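
A minimal sketch of that review, assuming each transaction carries a hypothetical "adjustment" field, with an illustrative threshold figure in UGX. The point is that individually invisible adjustments become visible only in aggregate.

```python
REPORTING_THRESHOLD = 500_000  # hypothetical figure in UGX

def sum_below_threshold(transactions):
    """Collect every non-zero adjustment under the reporting threshold
    and return how many there were and what they add up to."""
    small = [t for t in transactions
             if 0 < abs(t["adjustment"]) < REPORTING_THRESHOLD]
    total = sum(t["adjustment"] for t in small)
    return len(small), total

# count, total = sum_below_threshold(last_month)
# A large total built from many small adjustments is exactly the
# pattern Suspect 1 exploited.
```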

Detection came from curiosity, not systems

The breach was not detected by sophisticated tools, it was noticed by Suspect 3, a junior auditor with sharp instincts, who felt something was off during routine reconciliation. That word matters. Off, not wrong or broken, just slightly inconsistent, she escalated. That is where many organisations fail.

Escalations are often ignored when they lack clear evidence. In this case, they were not. This detection was possible because:

  • Human intuition remains one of the strongest controls when supported, not dismissed.
  • Anomaly detection systems are only as good as the thresholds and rules defined (see the sketch after this list).
  • Escalation culture determines detection speed more than technology.
  • Early signals are often weak and easily rationalised away.
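
To make the threshold point concrete, here is a minimal sketch comparing a fixed limit against a simple rolling statistic on daily reconciliation discrepancies; all values and limits are illustrative. A fixed threshold never fires on small movements, while a rolling view at least gives unusual drift a chance to surface, the way the junior auditor's intuition did.

```python
import statistics

def flags(discrepancies, fixed_limit=100_000, window=14, z_limit=3.0):
    """Yield (index, value, fixed_hit, z_hit) for discrepancies caught by
    a fixed threshold versus a rolling z-score over recent history."""
    for i, value in enumerate(discrepancies):
        history = discrepancies[max(0, i - window):i]
        fixed_hit = abs(value) > fixed_limit
        z_hit = False
        if len(history) >= 5:  # need some history before judging
            mu = statistics.mean(history)
            sigma = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
            z_hit = abs(value - mu) / sigma > z_limit
        if fixed_hit or z_hit:
            yield i, value, fixed_hit, z_hit
```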

Investigation succeeded because of discipline

When we were called in, the objective was clear.

  1. Preserve evidence.
  2. Reconstruct events.
  3. Establish intent.
  4. Anticipate legal scrutiny.

This is where many investigations collapse. We approached it like a courtroom case from day one, and that is what separates great investigations from reviews or audits.

Four principles guided the investigation.

  1. Chain of custody was maintained meticulously. Every log, every device, every access record documented and preserved.
  2. Timelines were reconstructed to the minute, with precision, not approximation.
  3. Alternative explanations were tested. Could this be a system error or an internal mistake? We challenged our own assumptions before others could.
  4. Evidence was correlated across systems. Emails, login logs, transaction records, device fingerprints.
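
For the first principle, one habit does most of the work: hash every evidence file on acquisition and record who collected it and when, so any later tampering is detectable. A minimal sketch, not a full forensic workflow; the manifest name and fields are placeholders.

```python
import hashlib
import json
import datetime

def record_evidence(path, collected_by, manifest="manifest.jsonl"):
    """Compute a SHA-256 of an evidence file and append a custody record
    (who, when, which hash) to an append-only manifest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    entry = {
        "file": path,
        "sha256": h.hexdigest(),
        "collected_by": collected_by,
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(manifest, "a") as m:
        m.write(json.dumps(entry) + "\n")
    return entry
```

Re-hashing the file at any later point and comparing against the manifest is what lets you answer, under cross-examination, the question of whether the evidence changed in your custody.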

By the time the case was presented, it did not rely on a single piece of evidence but on convergence: multiple independent data points telling the same story. That is what courts respect.

What leaders miss and attackers exploit

Let me bring you back to the boardroom. The CEO asked me a final question.

“What should we have done differently?” I replied: “You do not rise to the level of your policies; you fall to the level of your training.”

Here are the realities every leader must confront.

  1. Attackers are disciplined learners; they improve daily.
  2. Organisations are episodic learners; they train occasionally.
  3. Attackers simulate real scenarios. Organisations discuss theoretical risks.
  4. Attackers measure progress, yet organisations assume compliance equals readiness.

That imbalance is the real vulnerability.

What else you must know

Cybersecurity is not only a technical function; it is an organisational behaviour. Fraud is no longer about large theft; it is about intelligent, incremental exploitation. Investigations can no longer be merely reactive; they must be anticipatory, designed with legal scrutiny in mind from the start. Most importantly, training is not an event but a system.

If you do not build a system that trains your people, your processes, and your leadership continuously, then someone else is training to exploit you. The question is no longer whether hackers are training; they are. The real question is whether you are willing to match that discipline with your own.

Copyright IFIS 2026. All rights reserved.

 
