It started quietly as a finance officer at a local organisation logged into the accounting system late in the evening, long after the rest of the office had gone home. The payment looked ordinary: a supplier invoice, an unremarkable amount, an approval trail that existed. The transaction went through mobile money first, then a bank transfer, and finally disappeared into a chain of wallets that investigators would later spend months tracing. By the time the organisation noticed the loss, the money was gone.

The story is not unusual anymore. It is becoming the new pattern of fraud across Uganda’s companies: quiet, digital, and painfully precise. The criminals no longer break doors or forge signatures. They log in.

What happened

The incident began with a simple weakness: access. One employee had system privileges that were never reviewed after a promotion. Another could approve payments remotely because the organisation wanted flexibility during travel. A third person handled vendor onboarding with no independent verification. Three small gaps, nothing alarming on their own, but together they created a corridor, and the fraud scheme unfolded in stages.

First came information gathering. The perpetrators watched internal email patterns and accounting workflows for several weeks. They learned how invoices were processed, who approved payments, and when finance staff were busiest. Digital criminals rarely rush; they study.

Then came identity imitation. A fake vendor account appeared in the procurement system. It looked legitimate because the supporting documents were copied from a real supplier whose website had public documents available. The bank account, however, belonged to a different entity controlled by the fraud ring. The invoice was uploaded, and the approval happened quickly because the amount fell below the escalation threshold. That detail is common in many fraud cases. Criminals prefer transactions small enough to pass unnoticed, but large enough to matter.
The payment left the system. At this stage, the fraudsters moved with speed. The funds were split across multiple mobile money wallets and digital accounts within hours. Each transfer created another layer between the stolen money and the perpetrators. By the next morning, the trail was already complex.

Under Uganda’s legal framework, this activity qualifies as electronic fraud and unauthorised computer access, offences recognised under the Computer Misuse Act. The law treats the use of computers to obtain unlawful gain as a criminal offence with significant penalties upon conviction. But law alone does not prevent fraud; controls do.

How the fraud was noticed

Fraud is rarely discovered through heroics; it is discovered through irritation. In this case, the irritation came from a junior accountant who noticed a small mismatch during monthly reconciliation. The supplier’s name on one payment did not appear in the procurement register used in the previous quarter. It was a small detail that most people ignore. This accountant asked a question: where did this vendor come from? That question triggered the investigation.

The accountant, against protocol, called one of the officers in Internal Audit and reported the red flag. Internal auditors reviewed the vendor onboarding documents and noticed the bank account verification form lacked an independent confirmation from the supplier. The system log then revealed something more troubling. The vendor profile had been created from the login credentials of an employee who was officially on leave that day. That discovery changed everything. Then digital evidence began to tell the story.

What the investigation revealed

Investigators reconstructed the sequence using system logs, email metadata, and mobile transaction records. This is where modern fraud investigations differ from the past. The evidence is not hidden in drawers; it is buried in data.
Every login leaves a trace, every transfer records a timestamp, and every system modification creates a digital footprint. The forensic review revealed that the employee’s account had been accessed from an external IP address late at night. The password had been compromised weeks earlier through a phishing email disguised as a system upgrade request. Once inside the system, the attacker moved slowly: created the vendor profile, uploaded the invoice, submitted the payment request, and waited. When approval came through, the payment moved instantly. The entire fraud operation, from initial access to final transfer, took less than fifteen minutes.

This pattern is increasingly visible in cyber-enabled fraud cases reviewed by Ugandan courts, where electronic records and digital trails have become central evidence in determining liability and proving unlawful computer access. Technology creates crime, and technology exposes it.

The legal reality organisations ignore

Many organisations misunderstand their legal position after a cyber fraud. They believe the loss ends with the stolen money, but it does not. When investigators begin reviewing events, the focus shifts to governance. Who controlled access? Who approved payments? Who verified vendors? Courts increasingly examine whether reasonable controls existed before the fraud occurred. Electronic evidence must also be properly authenticated and preserved if it is to be relied upon during legal proceedings.

This creates a difficult reality for organisations. If your systems cannot produce reliable logs, you may struggle to prove what actually happened. I usually recommend that all network logs, database logs, system logs, and similar records be backed up off-site to a location that even your own IT team cannot alter. This provides a second layer of security, so that forensics can help unravel what happened. Cybercriminals know how to hide their tracks, but remote backup of all logs, with tightly limited access, makes that much harder.
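The off-site preservation described above works best when every log file is fingerprinted before shipment, so that any later tampering is detectable. A minimal sketch in Python, assuming a folder of rotated log files (the directory and file names are hypothetical):

```python
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 fingerprint of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_log_manifest(log_dir: str) -> dict:
    """Record name, size, and hash of every log file before off-site shipment.
    The manifest travels with the logs to the restricted store, so a later
    re-hash can prove the copies were not altered."""
    manifest = {"created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "files": []}
    for path in sorted(Path(log_dir).glob("*.log")):
        manifest["files"].append({
            "name": path.name,
            "size": path.stat().st_size,
            "sha256": sha256_of(path),
        })
    return manifest

# Hypothetical demonstration: two small log files, then the manifest.
demo = Path("demo_logs")
demo.mkdir(exist_ok=True)
(demo / "auth.log").write_text("02:14 login ok user=jdoe ip=203.0.113.7\n")
(demo / "app.log").write_text("02:16 vendor profile created id=V-1042\n")
print(json.dumps(build_log_manifest("demo_logs"), indent=2))
```

The point is not the code itself but the discipline: the hashes are computed at collection time and stored where the IT team being investigated cannot rewrite them.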
When digital evidence is weak, accountability becomes complicated. Fraud investigations, therefore, begin long before a crime occurs: in system design.

The technology behind modern fraud

Fraud today operates like a small technology company. The attackers use phishing tools to steal passwords, deploy automated scripts to test system access, and rely on mobile wallets and digital banking channels to move funds quickly. The objective is always the same: speed. Digital fraud thrives on the time gap between a transaction and the moment someone notices something unusual. That window may be hours or days, depending on the organisation’s controls. Once the money enters a network of accounts, tracing it becomes difficult.
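Shrinking that window is largely a matter of automated reconciliation: comparing the internal ledger against the bank or aggregator statement as records arrive, rather than at month end. A minimal sketch, with hypothetical transaction records and field names:

```python
def reconcile(ledger, statement):
    """Flag transactions that appear on one side but not the other,
    or whose amounts disagree. Records are dicts keyed by reference."""
    ledger_by_ref = {t["ref"]: t for t in ledger}
    statement_by_ref = {t["ref"]: t for t in statement}
    alerts = []
    for ref, entry in ledger_by_ref.items():
        match = statement_by_ref.get(ref)
        if match is None:
            alerts.append((ref, "in ledger, missing from statement"))
        elif match["amount"] != entry["amount"]:
            alerts.append((ref, f"amount mismatch: {entry['amount']} vs {match['amount']}"))
    for ref in statement_by_ref:
        if ref not in ledger_by_ref:
            alerts.append((ref, "in statement, missing from ledger"))
    return alerts

# Hypothetical day's records: one payment was rerouted and never settled.
ledger = [{"ref": "INV-001", "amount": 4_800_000},
          {"ref": "INV-002", "amount": 2_500_000}]
statement = [{"ref": "INV-001", "amount": 4_800_000}]
for ref, reason in reconcile(ledger, statement):
    print(ref, "->", reason)
```

Run daily, a check like this turns the fraudster's hours-or-days window into minutes.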
Penetration with a purpose
It started with a routine system upgrade on a Thursday evening in the city. By Monday morning, three vendors had been paid twice, one internal wallet showed a balance that did not exist at the bank, and the IT manager was insisting it was just a sync issue. It was not. It was a control failure that created a window. Someone noticed it; someone used it. And the organisation had no idea how deep the entry went.

That is where penetration with a purpose begins. Penetration without purpose is vandalism. Penetration with purpose is a legally authorised, tightly scoped attempt to expose weaknesses before a criminal does. The difference is consent, documentation, and discipline.

In Uganda, that distinction is not abstract. The Computer Misuse Act criminalises unauthorised access and interference. If you touch systems without written authority and defined scope, you have crossed a line, even if you call yourself a tester. Authority must be explicit, dated, signed, and limited. Scope creep is not bravery; it is liability. A proper mandate answers five questions before a single packet is sent: which systems, what methods, what time window, what data may be accessed, and how evidence will be handled. If those are not settled, the engagement is reckless.

Now to the mechanics. Purpose-driven penetration testing is not about breaking everything. It is about mapping how a real attacker would move through your environment, step by step, and then proving whether your controls stop them.

Start at the edge. Email phishing remains the most reliable entry point in East Africa. A controlled simulation tests whether staff approve login prompts without reading them, whether multi-factor authentication is enforced consistently, and whether security awareness is cosmetic or real. When a single compromised account gives access to shared drives, financial approvals, or payment portals, you have not tested security. You have tested culture.

Move to identity.
Most breaches in our environment hinge on privilege mismanagement. Shared admin accounts. Dormant users never disabled. Contractors with lingering access. A targeted test attempts lateral movement: can a low-level account escalate privileges through misconfigured permissions, weak password policies, or exposed administrative interfaces? If yes, your risk is not theoretical.

Then payments. In Uganda’s mobile money and aggregator-heavy ecosystem, the most dangerous weakness is not database theft. It is transaction logic manipulation. A purposeful test will examine callback URLs, API authentication, webhook validation, and reconciliation routines. Can someone replay a successful transaction payload? Can they manipulate internal confirmation flags without bank settlement? Can refunds be triggered without dual control? These are not abstract risks. They are practical attack paths observed repeatedly in local incidents.

Go deeper still: logs. A penetration with purpose does not only test whether systems can be breached; it tests whether breaches can be detected. If an ethical tester creates a new admin account and no alert is triggered, your monitoring is decorative. If failed login attempts go unnoticed, your detection is asleep. If payment mismatches are resolved manually without root cause analysis, your governance has normalised deviation. Detection is evidence of maturity.

Legal discipline matters here. When testing touches personal data, the Data Protection and Privacy framework imposes obligations. Access must be minimised. Data must not be exported casually. Findings must be secured. Evidence must be preserved with integrity: hash values, timestamps, documented tools, and chain of custody. Courts will not accept “we saw it on the screen” as proof. They will ask how it was collected, who handled it, and whether it could have been altered. A penetration exercise without evidence discipline is a board presentation.
A penetration exercise with evidence discipline is litigation-ready.

Now consider the human layer. In one recent case, the technical test found little. Firewalls were configured properly, MFA was active, endpoints were patched, yet funds still moved irregularly. The ethical intrusion expanded, lawfully, to process review. It was discovered that finance staff routinely bypassed system-generated exception alerts because the bank sometimes delays posting. That normalisation of deviation created the real vulnerability. An insider could mask fraudulent reversals within accepted noise. Technology did not fail first; behaviour did.

Penetration with a purpose, therefore, includes interviews, walkthroughs, and segregation-of-duties mapping. Who can initiate payments? Who can approve? Who can reconcile? Who can override? If the same two individuals control the entire chain, your system is a theatre of controls, not a fortress.

Closure is not a slide deck. A serious engagement ends with four deliverables. First, a technical narrative: entry point, privilege path, action path, detection gap. Second, evidence logs: what was accessed, how it was accessed, and what proof exists. Third, quantified impact modelling: if this were malicious, what would the financial exposure be in 30 days? In 90 days? Under stress? Fourth, remediation mapped to ownership and timeline: not “improve security”, but specific actions. Disable shared accounts by Friday. Enforce hardware-based MFA within 60 days. Separate refund approval from refund initiation. Implement automated reconciliation alerts tied to threshold triggers. Purpose means measurable change.

There is also a governance dimension that most boards ignore. If you commission penetration testing only after an incident, you are reacting. If you commission it annually but ignore remediation budgets, you are performing compliance theatre. Mature organisations integrate testing into enterprise risk management.
They link findings to risk appetite, capital planning, and audit committee oversight. In our evolving regulatory climate, digital evidence and cyber resilience are no longer optional topics. Electronic transactions carry legal weight, and digital records can determine liability. If your organisation cannot prove the integrity of logs, the authenticity of transactions, and the reliability of controls, you will struggle in dispute resolution. Penetration with a purpose, therefore, sits at the intersection of law, finance, and technology.

One last point. Many executives secretly fear penetration testing because it exposes uncomfortable truths. That fear is misplaced. Criminals are already probing your systems daily. The only question is whether you will discover the weaknesses first, under controlled conditions, with legal protection and documented scope, or whether a regulator, customer, or prosecutor will discover them for you.
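The payment-channel questions raised earlier (can someone replay a successful transaction payload? can they forge a confirmation?) usually come down to whether the callback endpoint authenticates what it receives. A minimal sketch of HMAC-signed webhook verification with a replay window; the shared secret, payload fields, and five-minute tolerance are all hypothetical, not any particular aggregator's scheme:

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"hypothetical-secret-shared-with-aggregator"
REPLAY_WINDOW_SECONDS = 300  # reject callbacks older than five minutes

def sign(payload: bytes, timestamp: int) -> str:
    """The signature covers both the payload and the timestamp, so
    neither can be replayed or altered independently."""
    msg = str(timestamp).encode() + b"." + payload
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def verify_callback(payload: bytes, timestamp: int, signature: str, now=None) -> bool:
    now = int(time.time()) if now is None else now
    if abs(now - timestamp) > REPLAY_WINDOW_SECONDS:
        return False  # stale: likely a replayed transaction payload
    expected = sign(payload, timestamp)
    return hmac.compare_digest(expected, signature)  # constant-time compare

# A fresh, correctly signed callback is accepted...
body = b'{"ref": "TXN-881", "status": "SUCCESS", "amount": 4800000}'
ts = int(time.time())
assert verify_callback(body, ts, sign(body, ts))
# ...while a tampered payload or an hour-old replay is rejected.
assert not verify_callback(body + b" ", ts, sign(body, ts))
assert not verify_callback(body, ts - 3600, sign(body, ts - 3600))
```

A purposeful test probes exactly these branches: replay an old payload, flip a confirmation flag, strip the signature, and watch whether the system notices.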
The art of ethical intrusion
On a Tuesday morning, a mid-sized organisation in Kampala woke up to a quiet disaster. Money had not vanished dramatically; it had simply stopped arriving. Customer payments were marked successful on the phone, but the core system showed pending. The finance team’s reconciliation sheet began to look like a lie, and by 11:07 a.m. the CFO’s phone was hot with calls from operations, the bank, and a regulator-facing compliance officer who could already smell a reporting obligation.

In Uganda, most cybercrime does not start with movie-like hackers. It starts with a small control failing, then a second control being handled manually, and then a person, often inside the fence, using that temporary looseness to slip money through the cracks. The ethical intrusion that followed was not a test. It was a controlled entry into a live crime scene, where every click could contaminate evidence, every copied file could be attacked as altered, and every overconfident action could itself become a criminal exposure under Uganda’s Computer Misuse Act.

The legal line that matters is simple: authority. You may probe systems only with clear, documented permission, a tight scope, and preservation discipline. “I was just checking” is not a defence when an investigator touches systems without authority, interferes with service, or handles personal data carelessly.

What ethical intrusion really is

Ethical intrusion is not breaking in for a good reason. It is a consent-driven, evidence-based method of answering four questions that courts and boards care about: what happened, how it happened, who benefited, and what must change so it never happens again. Treat it like hospital triage at Mulago. You do not start by arguing about who caused the accident; you stop the bleeding, stabilise the patient, record the vitals, and preserve the story while it is still true. In practice, that means two parallel tracks run at the same time.
The first is containment, because ongoing loss is a governance failure in real time. Revoke suspicious sessions, rotate exposed keys, lock down privileged accounts, and isolate affected payment channels while keeping business running. The second is preservation, because you will later need to prove integrity: logs, endpoint images, cloud audit trails, payment processor callbacks, and the mobile money trail that ties digital actions to cash outcomes. If you do not preserve properly, you may still solve the incident operationally, but you will struggle to prove it in a disciplinary process, a criminal file, or a civil recovery claim.

How the scheme usually works

Most Ugandan payment fraud in connected systems follows a familiar pattern: a legitimate process is left intact, but the truth source is shifted. The following is an indicative timeline of the attack scenario.

At 09:12, Suspect 1 triggers access. The entry point is rarely sophisticated: reused passwords, shared admin credentials, a staff member tricked into approving a login prompt, or a device left without basic hardening. Once inside, the attacker avoids noisy actions and goes after the configuration.

At 09:18, the attacker plants persistence. That might be a hidden mailbox rule, an extra API token, a new “service account,” or a quietly elevated user that looks like an IT artefact.

At 09:26, the attacker targets the payment handshake. In many local stacks, payments flow through an aggregator or mobile money channel into a callback URL that updates the internal system. The fraudster does not need to steal money directly from the bank; they only need to manipulate the organisation’s internal confirmation logic so goods or services are released, refunds are triggered, or internal balances are credited falsely.

At 09:41, the attacker begins low and slow.
Instead of draining a single large amount, they run repeated micro-transactions, timed with peak activity, because humans do not notice small mismatches when the queue is long and the pressure is high.

At 10:05, internal collusion becomes the accelerant. Suspect 2, sitting in finance or operations, explains away exceptions: network delay, aggregator downtime, the bank posting cycle. The organisation starts normalising a control break.

At 10:37, cash-out happens locally. The fraud may exit through mobile money withdrawals, agent float, staged refunds, or supplier payments. The digital event is fast; the cash conversion is where you catch people. This is why ethical intrusion must include financial forensics, not just cybersecurity. Technology tells you what was done; money tells you why.

How it gets noticed in the real world

Local organisations do not usually detect cybercrime through fancy dashboards; they detect it through annoyance. The auditor sees that reversals are rising, but the incident log is empty. The CFO sees that revenue is up, but bank balances are flat. Customer care sees a pattern of complaints that sound similar, spaced as if someone is running a script. Operations sees stock moving faster than cash. Those symptoms are the organisation’s immune system doing its job. The ethical intruder’s job is to take those symptoms and rebuild the timeline with evidence that can survive challenge.

The evidence discipline that keeps you safe

If you want your findings to stand, you treat digital evidence like a sealed exhibit, not like a screenshot on a phone. Start with chain of custody. Record who collected what, when, from where, using which method, and where it was stored afterwards. Then verify integrity using hashing: mathematical fingerprints that allow you to show a file did not change after collection. Work from copies, not originals.
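That file-listing discipline can be sketched as a small manifest generator: every collected artefact gets its path, timestamp, size, and SHA-256 fingerprint recorded at acquisition. The folder layout and file names below are hypothetical:

```python
import csv
import hashlib
import io
from datetime import datetime, timezone
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large artefacts are fine."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def evidence_manifest(root: str) -> str:
    """Walk the collected-evidence folder and emit a CSV manifest:
    path, modified time (UTC), size, SHA-256. The manifest itself should
    then be hashed and logged in the chain-of-custody register."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["path", "modified_utc", "size_bytes", "sha256"])
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            mtime = datetime.fromtimestamp(p.stat().st_mtime, tz=timezone.utc)
            writer.writerow([str(p), mtime.isoformat(), p.stat().st_size,
                             fingerprint(p)])
    return out.getvalue()

# Hypothetical collected artefacts from a working copy, never the original.
root = Path("evidence_copy")
(root / "mailbox").mkdir(parents=True, exist_ok=True)
(root / "mailbox" / "rules_export.txt").write_text("hidden rule: forward to ext\n")
(root / "callbacks.json").write_text('[{"ref": "TXN-881", "status": "SUCCESS"}]\n')
print(evidence_manifest("evidence_copy"))
```

Because the manifest is generated before analysis begins, any later challenge ("could it have been altered?") can be answered by re-hashing and comparing.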
When imaging devices or collecting server artefacts, generate complete file listings with paths, timestamps, and hash values, and document the tools and settings used. Avoid helpful edits. Export chats and logs in a way that retains metadata; do not forward messages, do not re-save images, do not compress files for convenience. Courts distrust evidence that looks curated, and rightly so. This is also where privacy law becomes operational: personal data you encounter during an intrusion is not yours to spread around the office. Limit access, minimise collection, and document the lawful basis for handling it as part of the investigation record.
The enemy wears an ID badge
It was a Thursday evening. The kind of evening when the city exhales, traffic thins, and executives convince themselves that the week has behaved. Then a number refused to behave. UGX 6,200,000. It appeared in the payables reconciliation as a timing difference. Not a loss. Not yet. Just a variance that would clear next cycle. But it did not clear; it lingered. And in my line of work, lingering numbers are like a faint smell of smoke in a grass-thatched house. You do not see flames, but you know something is burning.

There was no dramatic alarm and no shouting across corridors. Only a quiet tightening in the chest of a finance manager who had seen enough storms to recognise the first drop of rain. Mpora mpora. Small small. Little by little. That is how the ledger began to empty.

The trusted hand that fed from the pot

He was neither the loudest nor the most flamboyant in the office. We will call him Suspect 1. Medium build, soft-spoken, efficient, the kind of man you trust to close the office because he always remembers to switch off the lights. He had system access, he had delegated authority, and he had proximity to vendor onboarding. And he had discovered something. Below UGX 5 million, supplier bank detail changes did not require a second approval. It was a legacy configuration, installed years earlier when the company was smaller and trust felt cheaper.

So he created what I call mirror vendors: same names with slight spelling differences, a misplaced letter, an added space, invisible to a casual eye. Then he waited. Payments due to legitimate suppliers were intercepted at the last minute. Bank details altered. Funds were rerouted to mobile money wallets registered under national IDs belonging to relatives in distant districts, withdrawn in cash, reassembled elsewhere, and invested quietly. Not one large theft. Never dramatic. Always beneath the threshold. Mpora mpora.
In any village, when a trusted worker suddenly begins building rental rooms or purchasing boda bodas for family use, elders whisper. In corporations, we do not whisper; we rationalise. “He must have side businesses.” “He is hardworking.” But money speaks. And money, when interrogated properly, never lies.

Digital footprints do not fade

When we entered the picture, we did not begin with an accusation. We began with preservation. A forensic image of Suspect 1’s workstation was created. Not a casual copy, but a bit-by-bit acquisition. The hash value was calculated immediately, our digital oath. The hash value is sacred. Under the Evidence Act (Cap 6), particularly Section 7A, electronic records must be shown to be reliable and unaltered. The integrity of the system that produced them must be established. That means we do not assume; we demonstrate. The hash value matched. Exactly. That was our seal.

Inside the image, we found a deleted browser history fragment referencing vendor profile edits on dates when no official change request existed. We recovered fragments of an Excel sheet stored in temporary files, an informal tracking tool listing transaction amounts beside initials. More telling was a registry key indicating the installation of remote desktop software. Installed on a Sunday. At 1:47 a.m. People sleep at that hour, except those who believe darkness hides intent.

From his mobile device, a deleted WhatsApp database was carved out of unallocated space. Messages between Suspect 1 and Suspect 2, an older relative described in one chat as handling withdrawals. “Keep them small,” one message said. “Small small,” replied the other. Even criminals respect thresholds.

The law is not a paragraph but a pulse

Under the Computer Misuse (Amendment) Act, 2022, unauthorised modification of data and electronic fraud are not abstract offences. They are defined with precision because Parliament understands that today’s thieves do not break doors; they alter databases.
And databases remember.

The war over one missing hour

When the matter reached court, the defence did not argue morality. They attacked the procedure. A sharp advocate, experienced and calculated, focused on a single entry in the evidence register: a one-hour window between seizure of the laptop and logging into the evidence locker. One hour unaccounted for, he said. “Can we be sure nothing was altered?” A clever trap. Chain of custody is a tightrope. One slip, and years of work collapse. He likened it to a leaking jerrycan. “If water can enter, how do you know what you are drinking?”

But digital evidence is not water; it is binary. We recalculated the hash value of the forensic image in court. It matched the original acquisition hash exactly, bit for bit. If a single file had changed, even a single character, the hash would have transformed. It did not. That hour was procedural, not corruptive. The seal remained intact.

Circumstantial evidence suggests; server logs confirm. We correlated system login times with biometric access records. Suspect 1’s fingerprint opened the office door within minutes of unauthorised vendor edits. CCTV footage showed him alone at his desk during one such session. Mobile money withdrawal timestamps aligned within twenty-five minutes of internal transfers. Binary does not improvise.

From suspicion to certainty

There is a difference between knowing and proving. Knowing is instinct; proving is structure. The prosecution did not rely on lifestyle changes alone; they did not parade photographs of new houses or boda bodas. They built a sequence: unauthorised edit, fund transfer, wallet receipt, cash withdrawal, repeat. Forty-three cycles. Total loss: UGX 389,750,000. Mpora mpora until the pot was empty.

Suspect 1 did not wake up intending to steal nearly four hundred million shillings. He likely began with a test, a small one. The first time it worked, nothing happened.
And silence is intoxicating.

A masterclass in admissibility

The Electronic Signatures Act provides requirements concerning electronic signatures, and attribution was addressed meticulously. The altered vendor approvals bore digital credentials tied uniquely to Suspect 1’s login. Two-factor authentication SMS records were subpoenaed from the telecom provider, and they matched. In other sections, we established that the company’s accounting system generated logs automatically, in the ordinary course of business.
Untested security is no security
I learned the hard way on a quiet Tuesday morning when a finance director with a steady voice and tired eyes told me, “Nothing is missing. But something is wrong.” That is how it always begins. Not with alarms screaming or servers collapsing in flames, but with a subtle disturbance in the rhythm of the books: a reconciliation that takes longer than usual, a payment that clears twice, a supplier who calls to say thank you for money they were never owed. The peace of the company was shattered not by noise, but by instinct.

I have spent years hunting shadows in streets and server rooms alike, and I can tell you that the gut is often the first forensic tool. When an experienced accountant says, “It feels wrong,” you do not argue with them; you listen. In Bunyoro, we have a saying: “Engeso embi zikuletera obunaku.” Bad manners bring you misery. And bad security habits bring you ruin.

The silent alarm

The company was mid-sized, growing, and ambitious. It had invested in a modern ERP system and boasted about its cyber readiness in board reports. There were policies, passwords, and confidence. But there was no independent penetration test, no red-team simulation, no forensic readiness plan. Security was assumed, not tested. Assumption is the cousin of disaster.

The anomaly was small: a vendor payment that appeared legitimate, supported by an email thread and an electronic approval. The electronic signature complied, on its face, with the Electronic Signatures Act. It bore the name, the timestamp, and the apparent intent. On paper, it was clean. But the hash value of the invoice attachment did not match the original stored in the procurement system. That tiny string of alphanumeric characters, that sacred fingerprint of digital integrity, had changed. In digital forensics, the hash is our royal seal. If it shifts even by one character, the document is not what it claims to be. Something had been altered.
The anatomy of the betrayal

We traced the activity to an internal user account. Trusted and senior. The kind of person who attends weddings and burials in the same village as the CEO. Here, betrayal rarely comes wearing a mask. It comes with familiarity. Let us call him Suspect 1.

He was not flamboyant at first. He was methodical. Haba na haba. Little by little. He exploited a basic weakness: shared administrative credentials for system updates. No multi-factor authentication, no role segregation. The IT manager believed trust was control, but it is not.

From unallocated space on his company-issued laptop, we recovered fragments of a deleted WhatsApp database. The conversation was brief. A supplier account to be created, an invoice template shared, a commission percentage agreed. The smoking gun was not dramatic; it was clinical. A hidden registry key that allowed remote access software to persist after apparent uninstallation. A scheduled task triggered at 2:17 a.m., when most of Kampala slept under ageing roofs and distant boda bodas. Like a rat in the thatch of a grass-thatched roof: silent, coiled, invisible until the night you finally try to sleep.

Psychologically, the slip came when lifestyle outran salary. A sudden purchase of two boda bodas for relatives, school fees paid in cash, a plot of land fenced in a matter of weeks. In a typical household, when a trusted worker begins buying things that outpace their known income, elders whisper. They do not accuse; they observe. In corporations, we call them red flags. But they are the same human signals.

The law as a living thing

When we moved to preserve evidence, the real battle began. The Computer Misuse (Amendment) Act, 2022, is not just ink on paper. It is a living instrument. It recognises unauthorised access, interference, and misuse of electronic systems as crimes with teeth. But it demands precision.
Under the Evidence Act (Cap 6), particularly the provisions governing admissibility of electronic records, the court requires proof that the electronic record was produced by a reliable system, in the ordinary course of use, and that its integrity was maintained. Integrity. That word again.

So we imaged the hard drive using write-blockers. We generated MD5 and SHA-256 hash values at acquisition and re-verified them before analysis. Every transfer was logged, every device sealed, and every timestamp synchronised to a trusted time source. Chain of custody is not paperwork; it is a spine. Break it, and the body collapses. And the defence knew it.

The crucible of the courtroom

A Ugandan courtroom is not the National Theatre; it is chess. The defence lawyer was clever and smooth. He did not deny that suspicious payments occurred; he attacked the process. “There is a one-hour gap,” he said, pointing to the evidence log. “Between 14:00 and 15:00, the device was in transit. Who had it? Where was it stored? Could it have been altered?” One hour. In digital forensics, one unaccounted hour can be portrayed as an eternity. He argued that the forensic image, despite matching hash values, could not be trusted because the physical custody documentation was imperfect. A leaking jerrycan, he suggested: no matter how pure the water, it could not be relied upon.

This is where many cases die. Knowing is the brother of guessing. You can suspect, infer, or feel in your bones that Suspect 1 orchestrated the fraud. But proving is the father of justice. Circumstantial evidence, the lifestyle changes, the WhatsApp fragments, and the vendor links painted a compelling picture. But what sealed the case was the server log. Cold, binary, unemotional. At 02:17:34, the system recorded an administrative override from his credentials. At 02:18:02, the vendor bank details were changed. At 02:19:11, an invoice PDF was uploaded. Its hash did not match the procurement original.
The system, configured and operating normally, recorded these events automatically. That is what the Evidence Act requires: reliability of the system, not perfection of memory. The court accepted the logs, the hash values held, and the sanctity of digital integrity survived.
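The acquisition-and-reverification routine at the heart of this case can be sketched in a few lines: hash the image at acquisition, record the values in the custody register, and recompute later, in court if needed, to show nothing changed. The image file below is a hypothetical stand-in for a bit-by-bit acquisition:

```python
import hashlib
from pathlib import Path

def image_hashes(path: Path) -> dict:
    """Compute MD5 and SHA-256 of a forensic image in a single pass."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}

def verify(path: Path, acquisition: dict) -> bool:
    """Re-verification: the image is intact only if both digests match
    the values recorded at acquisition."""
    return image_hashes(path) == acquisition

# Hypothetical image standing in for the workstation acquisition.
img = Path("workstation.dd")
img.write_bytes(b"\x00" * 4096 + b"vendor_edit 02:18:02" + b"\x00" * 4096)
recorded = image_hashes(img)          # logged in the custody register
assert verify(img, recorded)          # in court: the seal is intact
img.write_bytes(img.read_bytes() + b"\x00")  # a single altered byte...
assert not verify(img, recorded)      # ...and the seal breaks
```

This is why the one-hour gap argument failed: a matching digest demonstrates the bits are identical, regardless of how long the device sat in transit.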
The breach is normal
What happened is not mysterious. Data left the system, and it moved faster than management reacted. Logs existed, but no one looked at them in time. Backups were running, but no one tested restoration under pressure. A notification clock started ticking the moment the data crossed a boundary it should never have crossed. That is the moment most organisations miss. Breaches are no longer exceptional events but operational facts. The abnormal part is not the intrusion; it is the hesitation that follows.

In the cases I see, the entry point is rarely sophisticated. A reused password. An old VPN account. A finance laptop shared with a child for online classes. A cloud storage bucket set to public because someone needed a file “just for today.” Technology does not fail dramatically. It fails quietly, one small permission at a time. Think of a breach like a hairline crack in a dam. The water does not arrive with a bang. It seeps. By the time you hear the sound, the pressure has already shifted downstream.

From an investigative standpoint, the first question is never “who did this?” It is “what moved, when, and under whose authority?” Data always leaves footprints. The problem is that many organisations overwrite them, rotate them away, or never collect them in the first place.

In Uganda, once personal data is exposed, the legal posture changes immediately. The Data Protection and Privacy Act, 2019 imposes a duty of security safeguards and timely notification to the regulator and affected data subjects. This is not a courtesy obligation. It is a statutory one. Delay converts a technical incident into a governance failure. At that point, responsibility shifts from IT to the organisation itself. Control rests with management. Liability follows control. This is where many leaders misstep. They treat the incident as an IT clean-up exercise. It is not. It is an evidence preservation exercise with regulatory consequences.
The moment a breach is suspected, routine system activity becomes risky. Auto-patching, log rotation, account resets, and even well-intended “hardening” can destroy the very artefacts needed to establish what happened. In several verified cases across the region, organisations lost the ability to defend themselves, not because the breach was severe, but because internal teams wiped volatile data before forensic capture. Courts and regulators do not speculate in your favour when the evidence is gone. Silence created by poor handling is interpreted as concealment or incompetence. Neither helps.

Technology matters here, but only if you understand it at a granular level. Logs are not one thing. There are authentication logs, application logs, database transaction logs, API gateway logs, cloud audit trails, endpoint artefacts, and mobile device caches. Each tells a different story. A login at 02:14 means nothing unless you correlate it with file access at 02:16 and outbound traffic at 02:18. Breach timelines are built minute by minute, not headline by headline.

I often tell boards that if money goes missing from a vault, you do not repaint the walls first. You freeze the scene. You count. You check physical access controls and cameras, establish who knew what, and trace serial numbers. Data breaches are no different. Yet organisations rush to “fix” systems before they understand the loss. That instinct feels responsible. It is legally dangerous.

Another common blind spot is third-party exposure. Most breaches today pass through someone you trusted. A payroll processor. A CRM vendor. A marketing agency with API access. Contracts often mention security in vague language, but rarely specify log access, incident cooperation timelines, or evidence retention duties. When the breach happens, you discover too late that you do not control the records you need. Responsibility still sits with you. There is also a human layer that investigators watch closely.
Internal messages after the incident. Who knew what, and when? Who decided not to escalate? Who used casual language in an email that later reads like indifference? Regulators read those messages. So do courts. Tone becomes evidence.

Winning organisations accept a hard truth early: prevention reduces frequency, not certainty. The real differentiator is readiness. That means having an incident protocol that starts with evidence, not blame. A legal hold that is triggered automatically. A named decision-maker who understands when to stop system changes. Pre-agreed notification thresholds. External forensic capability on standby, not negotiated in panic. The breach itself is not the scandal. The response is.

Every serious investigation I have handled ends the same way. The technology tells a clear story once it is respected. The law is predictable once obligations are understood. What remains unpredictable is leadership under pressure. If data is already out, the question is no longer how to avoid the incident. It is whether you will handle it in a way that contains the damage or multiplies it. At that point, control is no longer theoretical. It is yours to exercise, or to lose. Copyright IFIS 2026. All rights reserved.
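The minute-by-minute correlation described in this piece, a login, then a file access, then outbound traffic, can be sketched as a merge of separate log sources into one timeline. Everything below is an illustrative assumption (the events, field names, and the five-minute window), not output from any real system:

```python
from datetime import datetime, timedelta

# Hypothetical events from three separate log sources, already normalised
# to a common (timestamp, source, detail) shape before correlation.
events = [
    ("2025-03-02 02:14:03", "auth", "login success, new IP"),
    ("2025-03-02 02:16:41", "file", "bulk read of finance share"),
    ("2025-03-02 02:18:09", "network", "outbound transfer spike"),
    ("2025-03-02 03:40:00", "auth", "login success, known IP"),
]

def build_timeline(raw):
    """Parse and sort events from all sources into one chronological timeline."""
    parsed = [(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), src, detail)
              for ts, src, detail in raw]
    return sorted(parsed)

def suspicious_windows(timeline, window=timedelta(minutes=5)):
    """Flag any login followed, within `window`, by both a file event and a
    network event -- each alone is explainable; together they form a pattern."""
    flags = []
    for ts, src, detail in timeline:
        if src != "auth":
            continue
        followers = {s for t, s, _ in timeline if ts < t <= ts + window}
        if {"file", "network"} <= followers:
            flags.append((ts, detail))
    return flags
```

In this toy data, only the 02:14 login is flagged: the second login has no correlated file or network activity behind it, which is exactly the point of building the timeline before drawing conclusions.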
Data, crime, and code collide
What happened is simple. A system recorded activity that did not belong there. Data moved out of its normal path, code executed in a way the organisation did not authorise, no alarms rang loudly enough, and by the time someone noticed, the trail was already cooling. This is where modern crime now lives: not in back rooms or dark alleys, but inside logs, databases, APIs, and forgotten admin panels.

When data, crime, and code collide, the first mistake leaders make is to treat it as a technical glitch. It is not. It is a crime scene made of instructions, timestamps, and digital residue. And like any crime scene, the earliest actions determine whether truth can be established or permanently lost.

In one verified local enforcement matter involving unauthorised access to customer records at a financial institution, the intrusion itself was minor. The regulatory damage came later. Internal teams reset systems, rotated logs, and cleaned up before preserving evidence. By the time investigators arrived, the story could no longer be reconstructed with certainty. The regulator did not need to prove intent. Failure to safeguard and preserve was enough.

This is the hard lesson. Code executes in milliseconds; the law moves more slowly. But the law expects you to respect those milliseconds. That is why students at the Institute of Forensics & ICT Security are taught preservation, legal hold, and chain of custody first. Preserving critical evidence is the phase on which every later finding depends.

At the technical level, most incidents unfold in small, boring steps. A credential is harvested, not hacked. A login succeeds because the system allows it. A query runs because access rights permit it. A file downloads because no one limited export volume. Crime here does not break doors; it walks through unlocked ones. Investigators rebuild these moments minute by minute. 09:41:22, a successful login from a new IP.
09:42:10, a database read heavier than normal. 09:43:55, outbound traffic spikes. Each event alone is explainable. Together, they form intent, and intent is what answers the investigative questions of who, what, why, and how. That is why logs are evidence, not diagnostics. Authentication records, query histories, endpoint artefacts, cloud audit trails, and email headers: these are sworn witnesses, not housekeeping tools. Once altered, they cannot testify.

From a Ugandan legal perspective, once personal or confidential data is accessed unlawfully, several obligations crystallise immediately. The Data Protection and Privacy Act requires reasonable security safeguards, incident containment, and notification where there is a real risk of harm. The Computer Misuse Act criminalises unauthorised access and interference. Directors and officers are judged not on perfection, but on reasonableness and timeliness.

This is where leadership exposure begins. Courts and regulators ask predictable questions. When did you know? What did you do next? Who decided? What records show that decision? Silence or delay is rarely neutral. It is read as a loss of control.

Technology teams often want to fix first; lawyers want to freeze first. Investigators insist on seeing before touching. The correct sequence matters. Patch too early and you destroy volatile memory. Reset accounts too fast, and you erase lateral movement evidence. Restore backups before imaging systems, and you overwrite history with comfort. If a bank ledger is suspected of manipulation, you do not rebalance the accounts before copying the books. Digital systems deserve the same discipline.

Another collision point is third parties. Modern code ecosystems are porous by design. Payment gateways, analytics tools, HR platforms, marketing software. Each connection is a legal relationship and a technical dependency.
In a verified East African case involving a breached service provider, the primary institution was still held accountable because its contracts did not guarantee access to forensic records. Responsibility followed custody of data, not blame.

Organisations with high digital trust design for this reality. They assume incidents will happen. They predefine who can halt system changes. They implement legal holds that trigger automatically. They test incident response the way banks test liquidity stress: calm, rehearsed, documented. Most importantly, they respect that data is not abstract. It represents people. Customers. Staff. Citizens. Courts and regulators anchor decisions on that human impact, not on how advanced your firewall was.

When data, crime, and code collide, the technology will always speak. The question is whether you preserved its voice. In these moments, outcomes are shaped less by what the attacker did and more by how the organisation responded in the first hour. That hour decides whether the event remains a contained incident or becomes a lasting liability. Copyright IFIS 2026. All rights reserved.
Donor trust is currency: managing fraud risk in NGOs
Donor trust is currency. Lose it, and the organisation bleeds quietly, long before the scandal reaches the papers. What happened was not dramatic. No midnight raids, no handcuffs. Just a routine donor review in Kampala that asked for transaction support behind a “capacity-building” line item. The documents arrived late. The numbers reconciled, barely. The underlying evidence did not. Mobile money confirmations without counterpart invoices. Shared email accounts approving payments. A finance officer who “kept passwords to help the team.” By the time lawyers were called, the house was not in order. Management was on alert; transactions were not balancing.

This is how NGO fraud usually begins in Uganda. Not with villains, but with convenience. I have investigated NGOs long enough to tell you this: donors do not lose trust because money disappears; they lose trust because leaders cannot explain, provide evidence, or control how money moves. Fraud is a governance failure first. Theft is the last chapter. People steal because they know they will not be caught; if caught, will not be prosecuted; and if prosecuted, the punishment will be lighter than the benefits. If you defraud, say, UGX 100m and the punishment costs you, say, UGX 20m, the net benefit of the fraud is UGX 80m. That is a huge return. That is what I mean by governance failure: fraud pays more than integrity.

Let’s be precise. The legal frame you are operating in is not vague. The Penal Code Act criminalises false accounting and theft. The Anti-Money Laundering Act imposes duties on suspicious transaction monitoring. The NGO Act places fiduciary responsibility squarely on directors and senior management. When donor funds are involved, contractual obligations tighten the noose. Breach is not academic; it is actionable. Once a red flag is raised, preservation of evidence becomes a legal duty, not an IT chore. This is where many NGOs step on the rake. Evidence is perishable. Emails auto-delete.
WhatsApp chats vanish with phone upgrades. Cloud logs roll over. The moment suspicion arises, litigation readiness matters. In the language of digital forensics, made practical by forensic experts like yours truly, you must preserve, collect, and review defensibly. Delay is spoliation. Spoliation turns a manageable investigation into an adverse inference. Courts do not reward organisations that meant well.

Here is the reality of how the money moved in the cases I see most. A programme manager fronts field expenses in cash because the area is remote. Reimbursements follow on mobile money, approved by a supervisor who shares an inbox with finance. Receipts are photographed, not originals. The vendor is a cousin’s trading name. Splits keep transactions under approval thresholds. Month-end journals “true up” variances. The audit trail looks tidy. The substance is rotten. Technology did not cause this. Weak design did.

Winning NGOs design controls around behaviour, not policy binders. Start with segregation that survives staff shortages. No single person should ever initiate, approve, and reconcile. Where teams are lean, use system controls, not trust. Maker-checker enforced by software. Time-stamped approvals tied to individual credentials. Two-factor authentication for finance platforms. Read-only donor dashboards. These are not luxuries; they are hygiene.

Data analytics is not a buzzword here; it is a flashlight. Simple tests catch most schemes: round-amount analysis, weekend and after-hours payments, duplicate vendors with shared phone numbers, split transactions just below limits, reimbursements without GPS-consistent field reports. These are not PhD tools; they are a discipline, simple tests that an experienced eye runs first.

Culture matters, but not in the way posters suggest. Staff copy what leaders tolerate. If executives override controls to move faster, you have trained the organisation to cheat politely. Tone at the top is not a speech.
It is whether exceptions are documented, reviewed, and rare. When suspicion surfaces, the sequence matters. Freeze changes, preserve data, appoint independent counsel, define scope, and interview quietly. Do not broadcast, threaten, or promise outcomes. Keep a clean chain of custody. Remember: once you investigate, you own the findings. If you bury them, they will resurface, usually in a donor report written by someone else.

A metaphor I use with boards is the water meter. You do not wait for drought to check consumption. You watch flow daily. Spikes tell stories. Silence tells lies. Donor funds are no different. Visibility beats virtue.

Boards should ask only three questions, repeatedly. Where does the money enter? Who touches it, and when? How do we know, today and not at year-end, that controls worked? If management cannot answer without adjectives, you have a problem.

The hardest truth for NGOs is that intent does not mitigate risk. Impact does not excuse weak controls. Courts and donors care about evidence. The ball is in your court to design systems that make wrongdoing hard, detection fast, and explanations boring. Trust is currency. Spend it on controls. Earn it with proof. Copyright Institute of Forensics & ICT Security, 2026. All rights reserved.
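The simple analytics tests this article lists (round amounts, splits just below approval limits, duplicate vendors sharing a phone number) can be sketched against a transaction extract. The field names, the UGX 5,000,000 threshold, and the sample records are illustrative assumptions, not data from any real case:

```python
from collections import defaultdict

APPROVAL_LIMIT = 5_000_000  # UGX; illustrative escalation threshold

# Hypothetical transaction extract for demonstration only.
transactions = [
    {"id": 1, "vendor": "Kato Traders",  "phone": "0772000111", "amount": 4_900_000},
    {"id": 2, "vendor": "Kato Traders",  "phone": "0772000111", "amount": 4_950_000},
    {"id": 3, "vendor": "Nile Supplies", "phone": "0772000111", "amount": 2_000_000},
    {"id": 4, "vendor": "Acme Ltd",      "phone": "0701999888", "amount": 1_250_500},
]

def round_amounts(txns, divisor=100_000):
    """Payments in suspiciously round figures."""
    return [t for t in txns if t["amount"] % divisor == 0]

def just_below_limit(txns, limit=APPROVAL_LIMIT, band=0.05):
    """Payments within 5% below the approval threshold: the classic
    split-to-avoid-escalation pattern."""
    return [t for t in txns if limit * (1 - band) <= t["amount"] < limit]

def shared_phone_vendors(txns):
    """Distinct vendor names registered against one phone number."""
    by_phone = defaultdict(set)
    for t in txns:
        by_phone[t["phone"]].add(t["vendor"])
    return {p: v for p, v in by_phone.items() if len(v) > 1}
```

None of these tests proves fraud on its own; each produces a short list for an experienced eye to question, which is exactly how the article frames them.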
Click, pay, wait… never receive: the rise of online retail scams
It starts with a screenshot. A neat Instagram shopfront with clean product photos. Delivery in 24 hours. A WhatsApp number with a Kampala profile picture. You pay by mobile money because the seller says the card machine is down, and the discount is today only. They reply fast until the payment hits. Then the chat slows. Tomorrow becomes next week, next week becomes silence. When you call, the line is off. When you check the page, it has a new name, a new logo, and the same products.

Uganda Police has been blunt about this pattern: non-delivery of paid goods and services found online, where the items are never received. This is not bad customer service; it is fraud by design. You, the customer, are your own first line of cyber defence. But you need basic cyber hygiene to escape these online fraud schemes. The scheme is small, fast, and built for low reporting. It rarely needs hacking; it needs your impatience.

Minute case 1: The moving shop on social media

The scammer runs short-lived pages. They boost posts for a few days, harvest payments, then rebrand and repeat. The trick is not the product; the trick is the identity. Many victims focus on “where is my item?” The investigator focuses on who received the money, and where the money went next. The key evidence is boring but decisive: the transaction reference, recipient number, chat logs, delivery promises, and the page metadata. If you lose those, you weaken your case.

Minute case 2: The split-payment trap

You are told to pay a deposit first, then the balance on delivery, then a fuel top-up, then “packaging.” Each micro-payment is designed to feel recoverable. Together, they become a meaningful loss. This is behavioural fraud: small commitments stacked into obedience.

Minute case 3: The false courier confirmation

A fake rider calls you and reads your own details back to you to sound legitimate. They claim the package is at the stage, but you must pay for clearance.
In reality, there is no rider, just a second phone controlled by the same person or a collaborator. Technology makes this easy. Caller ID can be spoofed. NITA-U warns consumers not to trust caller ID at face value because it can be faked.

Minute case 4: The OTP harvest disguised as delivery

The seller says you must confirm using an OTP to release the parcel. That OTP is actually for your wallet, your bank app, or your payment account. Uganda’s regulator community has repeatedly warned the public not to share PINs or verification codes. This is the point where online retail fraud blends into account takeover.

The legal position: you are your own best cybersecurity defender. Many victims want the police to recover the money quickly. That is not how criminal law works. Uganda Police has stressed that money-related offences such as obtaining by false pretences and theft are reportable crimes, but the police are not a debt collection agency. Your complaint must be framed as an offence, backed by evidence, not as a customer service dispute.

Two Ugandan statutes matter immediately. Under the Computer Misuse Act, electronic fraud is an offence with significant penalties on conviction, and the Act also addresses conduct such as unauthorised disclosure of access codes. Under the Electronic Transactions Act, the law sets consumer-protection expectations for e-commerce, including requirements around information presented to consumers and remedies such as cancellation and refund in defined circumstances. The practical reality is that your strongest leverage is not moral outrage. It is your evidence package.

Evidence: treat your phone like a crime scene. Most victims destroy their own case while trying to talk sense into the scammer. If you want a real chance at action, do what litigators and e-discovery experts insist on: preserve first, then pursue.
Mr Strategy of Summit Consulting Ltd’s work is focused on the preservation-duty mindset: prompt, proper preservation steps matter, and a preservation letter to the other side is different from an internal legal hold notice. That means you stop negotiating in circles and you lock down proof. Keep the full chat history; do not delete messages; export the chat if possible. Screenshot the page, the posts, the promises, and the comments, including dates and handles. Record the payment details: transaction ID, recipient number, names displayed, and timestamps. Capture delivery claims: dispatch note, tracking, rider number. If you clicked a link, keep the link. Do not clean up your browser history.

Then report through credible channels. NITA-U points consumers to a reporting channel for online transaction scams. If the scam used a bank or telecom rail, notify the provider immediately. Timing matters because fraud proceeds move fast.

Why is this rising now? Online retail scams thrive when three conditions exist: low-friction payments, weak identity assurance, and social media trust shortcuts. Uganda has fast payment rails and high social commerce usage. That is a growth engine and a crime engine in the same breath. Media reporting has also highlighted the broader rise of online scams and their reputational impact. Regulators have responded in adjacent areas, too. The Financial Intelligence Authority has issued public warnings about fraudulent websites that impersonate institutions and solicit payments. The pattern is the same: false legitimacy plus payment pressure.

What changes the game? Boards and executives often misread online retail fraud as a consumer problem. It is now a brand risk and a payment-ecosystem problem. The winners will build proof into the transaction. If you run a legitimate online retail operation, the control standard is rising.
The Uganda Communications Commission has publicly cautioned e-commerce firms in the consumer context, signalling that scrutiny is not theoretical. Operationally, serious platforms will do four things. They will reduce pay-first-to-strangers risk by using escrow-like settlement or payment-on-delivery with verified riders and verifiable tracking. They will bind identity to transactions using stronger KYC signals, device reputation, and consistent merchant identifiers, not just a phone number and a logo. They will retain logs: order creation, edits, messages, IP/device data where lawful, and
Your phone, their ATM: the dark side of mobile money convenience
Money left a corporate account without malware, without a breached firewall. It moved because the system accepted it. The phone in an employee’s hand behaved like an ATM that never sleeps. Between 2015 and 2018, more than UGX 2.2 billion was drained through a bank’s online platform. Transactions were initiated, verified, and approved inside the customer’s own environment. Some instructions carried mismatched account names and numbers. The bank processed them, but the customer did not catch them in time. When the case reached court, the question was not whether fraud occurred; it was who had the last clear chance to stop it. That question put the ball squarely in both courts.

How this kind of fraud actually works

An employee logs in from a known device. No alert. A payment file is uploaded; the account number is correct, and the account name is close enough to pass a human glance. The same employee verifies the transaction. Segregation exists on paper, not in practice. Approval is granted, and the platform allows it because the roles were never properly split. A few minutes later, funds leave the account. Proceeds are split into smaller transfers: some to bank accounts, some to mobile money, and some to intermediaries. The trail cools. No hacking, no brute force; just speed, familiarity, and silence. This is why mobile money and online banking fraud scale. It does not attack systems. It uses permissions.

The case that comes to mind

In Abacus Parenteral Drugs Ltd v. Stanbic Bank (U) Ltd, decided on 9 April 2025, the High Court of Uganda refined a principle first set out in Aida Atiku v. Centenary Rural Development Bank Ltd. The Court said something many practitioners know but few contracts admit: fraud prevention in digital banking is shared. Not abstractly. Practically and contractually. The bank argued its platform was customer-controlled. Initiation, verification, and approval sat with the client.
The customer argued the bank breached its own agreement by processing instructions with obvious discrepancies, including mismatched account details. The Court partly agreed with both.

What the Court actually did

First, it confirmed the relationship is contractual. Every clause matters. If the bank agrees to reject erroneous instructions, that duty is enforceable. If the customer agrees to segregate duties and protect credentials, that duty is also enforceable. Second, it looked at control. Who was best positioned, at each point, to stop the fraud? The customer failed to maintain basic internal controls. One officer could initiate and approve, account activity was not monitored, and security protocols were breached. On that basis, the customer carried the larger share of blame. But the bank was not excused. Processing transactions where the account name did not match the account number was a red flag. Failing to act on that was a breach of duty. The result was apportionment. Of the UGX 1,698,000,000 proven loss, the bank paid 20 percent. The customer carried 80 percent. That split matters. It signals how courts will think going forward.

The legal signal regulators and bankers should not miss

This decision extends the Atiku principle to corporate clients. Liability follows comparative negligence. Courts will ask a simple, uncomfortable question: who had the last clear chance to prevent the loss? Limitation of liability clauses will not save you if they are vague. Ambiguity cuts against the drafter. If a clause does not clearly describe what is excluded and why, expect it to fail. Most importantly, authorisation is not the same as safety. A transaction can be valid in form and defective in substance. When banks ignore obvious discrepancies, the ball comes back into their court.

Technology, without romance

Banks like to say platforms are customer-controlled. That is only half true. Banks design the rails.
They set tolerance levels, decide whether name-number mismatches hard-stop or soft-pass, and choose whether overrides trigger alerts or logs that no one reads. Customers, on the other hand, control access: who has tokens, who approves, and who can act alone. When segregation collapses, the system will not rescue you. In this case, technology did exactly what it was configured to do. That is the problem.

What corporate clients must change immediately

One person must never initiate and approve. Not because policy says so, but because courts now expect it. Account reviews must be daily, not monthly. If you cannot explain a transaction within 24 hours, you do not control it. Credentials are not administrative details but legal liabilities. When an employee acts with your access, the law treats it as your act unless you can prove otherwise. If you do not know what your online banking agreement requires of you, assume it requires more than you are doing.

What banks must stop pretending

Fraud detection is not optional support; it is a contractual duty once promised. Name-number mismatches are not clerical issues. They are warning signs. Platforms that rely entirely on customer discipline will fail in court if obvious red flags are ignored. When banks have data that customers do not, silence becomes negligence.

Your phone can function like an ATM for you, or for someone who understands your routines better than you do. Digital convenience has shifted risk, not removed it. Courts are responding by reallocating responsibility to whoever could have acted sooner. In this landscape, fraud prevention is not a slogan. It is evidence: logs, controls, and decisions made in time. If fraud passes through your system, the law will ask where the ball was and why you did not pick it up. Copyright Summit Consulting Ltd 2026. All rights reserved.
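The hard-stop that this case turned on, rejecting instructions where the account name does not match the account number, can be sketched in a few lines. The vendor master contents, account number, and return strings below are hypothetical illustrations, not any bank's actual screening logic:

```python
def normalise(name: str) -> str:
    """Case- and punctuation-insensitive comparison key for account names."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

# Hypothetical vendor master: account number -> registered account name.
vendor_master = {
    "0140055512345": "Kampala Medical Supplies Ltd",
}

def screen_instruction(account_number: str, account_name: str) -> str:
    """Hard-stop screening: reject unknown accounts and name-number
    mismatches instead of soft-passing them for a human glance."""
    registered = vendor_master.get(account_number)
    if registered is None:
        return "REJECT: account not in vendor master"
    if normalise(registered) != normalise(account_name):
        return "REJECT: name does not match account number"
    return "PASS"
```

The design choice is the point: a soft-pass leaves the mismatch to a human glance that fraudsters are counting on, while a hard-stop forces an exception that is logged, reviewed, and attributable.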