It was a Thursday evening. The kind of evening when the city exhales, traffic thins, and executives convince themselves that the week has behaved. Then a number refused to behave. UGX 6,200,000. It appeared in the payables reconciliation as a timing difference. Not a loss. Not yet. Just a variance that would clear next cycle. But it did not clear; it lingered. And in my line of work, lingering numbers are like a faint smell of smoke in a grass-thatched house. You do not see flames, but you know something is burning. There was no dramatic alarm and no shouting across corridors. Only a quiet tightening in the chest of a finance manager who had seen enough storms to recognize the first drop of rain. Mpora mpora. Small small. Little by little. That is how the ledger began to empty.

The trusted hand that fed from the pot

He was neither the loudest nor the most flamboyant in the office. We will call him Suspect 1. Medium build, soft-spoken, efficient, the kind of man you trust to close the office because he always remembers to switch off the lights. He had system access, delegated authority, and proximity to vendor onboarding. And he had discovered something. Below UGX 5 million, supplier bank detail changes did not require a second approval. It was a legacy configuration, installed years earlier when the company was smaller and trust felt cheaper. So he created what I call mirror vendors: same names with slight spelling differences, a misplaced letter, an added space, invisible to a casual eye. Then he waited. Payments due to legitimate suppliers were intercepted at the last minute. Bank details altered. Funds were rerouted to mobile money wallets registered under national IDs belonging to relatives in distant districts, withdrawn in cash, reassembled elsewhere, and invested quietly. Not one large theft. Never dramatic. Always beneath the threshold. Mpora mpora.
In any village, when a trusted worker suddenly begins building rental rooms or purchasing boda bodas for family use, elders whisper. In corporations, we do not whisper; we rationalize. “He must have side businesses.” “He is hardworking.” But money speaks. And money, when interrogated properly, never lies. Digital footprints do not fade.

When we entered the picture, we did not begin with an accusation. We began with preservation. A forensic image of Suspect 1’s workstation was created. Not a casual copy, but a bit-by-bit acquisition. The hash value was calculated immediately, our digital oath. The hash value is sacred. Under the Evidence Act (Cap 6), particularly Section 7A, electronic records must be shown to be reliable and unaltered. The integrity of the system that produced them must be established. That means we do not assume; we demonstrate. The hash value matched. Exactly. That was our seal.

Inside the image, we found a deleted browser history fragment referencing vendor profile edits on dates when no official change request existed. We recovered fragments of an Excel sheet stored in temporary files, an informal tracking tool listing transaction amounts beside initials. More telling was a registry key indicating the installation of remote desktop software. Installed on a Sunday. At 1:47 a.m. People sleep at that hour, except those who believe darkness hides intent. From his mobile device, a deleted WhatsApp database was carved out of unallocated space. Messages between Suspect 1 and Suspect 2, an older relative described in one chat as handling withdrawals. “Keep them small,” one message said. “Small small,” replied the other. Even criminals respect thresholds.

The law is not a paragraph but a pulse. Under the Computer Misuse (Amendment) Act, 2022, unauthorized modification of data and electronic fraud are not abstract offenses. They are defined with precision because Parliament understands that today’s thieves do not break doors; they alter databases.
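The integrity check described above can be illustrated in a few lines. This is a minimal sketch, not a substitute for a proper forensic tool: the file name and its contents are hypothetical stand-ins for a real disk image, and SHA-256 stands in for whichever algorithm the examiner records at acquisition.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a forensic image (hypothetical content, for illustration only).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"bit-by-bit acquisition of Suspect 1's workstation")
    image_path = f.name

acquisition_hash = sha256_of(image_path)   # recorded at seizure, sealed in notes
verification_hash = sha256_of(image_path)  # recomputed later, e.g. in court
print(acquisition_hash == verification_hash)  # True only if nothing changed

os.remove(image_path)
```

If even one byte of the image were altered between the two computations, the digests would differ and the "seal" would be visibly broken, which is exactly why the matching hash carried evidentiary weight.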
And databases remember.

The war over one missing hour

When the matter reached court, the defense did not argue morality. They attacked the procedure. A sharp advocate, experienced and calculated, focused on a single entry in the evidence register: a one-hour window between seizure of the laptop and logging into the evidence locker. One hour unaccounted for, he said. “Can we be sure nothing was altered?” A clever trap. Chain of custody is a tightrope. One slip, and years of work collapse. He likened it to a leaking jerrycan. “If water can enter, how do you know what you are drinking?” But digital evidence is not water; it is binary. We recalculated the hash value of the forensic image in court. It matched the original acquisition hash exactly, bit for bit. If a single file had changed, even by a single character, the hash would have transformed. It did not. That hour was procedural, not corruptive. The seal remained intact.

Circumstantial evidence suggests; server logs confirm. We correlated system login times with biometric access records. Suspect 1’s fingerprint opened the office door within minutes of unauthorized vendor edits. CCTV footage showed him alone at his desk during one such session. Mobile money withdrawal timestamps aligned to within twenty-five minutes of internal transfers. Binary does not improvise.

From suspicion to certainty

There is a difference between knowing and proving. Knowing is instinct; proving is structure. The prosecution did not rely on lifestyle changes alone. They did not parade photographs of new houses or boda bodas; they built a sequence. Unauthorized edit, fund transfer, wallet receipt, cash withdrawal, repeat. Forty-three cycles. Total loss: UGX 389,750,000. Mpora mpora until the pot was empty. Suspect 1 did not wake up intending to steal nearly four hundred million shillings. He likely began with a test, a small one. The first time it worked, nothing happened.
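The correlation step described above, tying a physical door entry to a system edit minutes later, is mechanically simple. The sketch below uses hypothetical timestamps and a 15-minute window; the real investigation would read these from biometric and application logs rather than literals.

```python
from datetime import datetime, timedelta

# Illustrative logs (timestamps hypothetical, not from the actual case).
door_openings = ["2024-03-03 01:41", "2024-03-10 01:39"]  # biometric access
vendor_edits = ["2024-03-03 01:47", "2024-03-10 01:52"]   # system audit trail

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def correlated(door_ts, edit_ts, window_minutes=15):
    """True if the edit occurred within `window_minutes` after the door opened."""
    delta = parse(edit_ts) - parse(door_ts)
    return timedelta(0) <= delta <= timedelta(minutes=window_minutes)

matches = [(d, e) for d in door_openings for e in vendor_edits if correlated(d, e)]
print(len(matches))  # each pair ties a physical entry to a vendor edit
```

Each matched pair is one strand of the circumstantial rope: explainable alone, damning in aggregate.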
And silence is intoxicating.

A masterclass in admissibility

The Electronic Signatures Act provides requirements concerning electronic signatures, and attribution was addressed meticulously. The altered vendor approvals bore digital credentials tied uniquely to Suspect 1’s login. Two-factor authentication SMS records were subpoenaed from the telecom provider, and they matched. In other sections, we established that the company’s accounting system generated logs
Untested security is no security
I learned the hard way on a quiet Tuesday morning when a finance director with a steady voice and tired eyes told me, “Nothing is missing. But something is wrong.” That is how it always begins. Not with alarms screaming or servers collapsing in flames, but with a subtle disturbance in the rhythm of the books: a reconciliation that takes longer than usual, a payment that clears twice, a supplier who calls to say thank you for money they were never owed. The peace of the company was shattered not by noise, but by instinct. I have spent years hunting shadows on the streets and in server rooms alike, and I can tell you that the gut is often the first forensic tool. When an experienced accountant says, “It feels wrong,” you do not argue with them; you listen. In Bunyoro, we have a saying: “Engeso embi zikuletera obunaku.” Bad manners bring you misery. And bad security habits bring you ruin.

The silent alarm

The company was mid-sized, growing, and ambitious. They had invested in a modern ERP system and boasted about their cyber readiness in board reports. There were policies, passwords, and confidence. But there was no independent penetration test, no red-team simulation, no forensic readiness plan. Security was assumed, not tested. Assumption is the cousin of disaster. The anomaly was small: a vendor payment that appeared legitimate, supported by an email thread and an electronic approval. The electronic signature complied, on its face, with the Electronic Signatures Act. It bore the name, the timestamp, and the apparent intent. On paper, it was clean. But the hash value of the invoice attachment did not match the original stored in the procurement system. That tiny string of alphanumeric characters, that sacred fingerprint of digital integrity, had changed. In digital forensics, the hash is our royal seal. If it shifts even by one character, the document is not what it claims to be. Something had been altered.
The anatomy of the betrayal

We traced the activity to an internal user account. Trusted and senior. The kind of person who attends weddings and burials in the same village as the CEO. Here, betrayal rarely comes wearing a mask. It comes with familiarity. Let us call him Suspect 1. He was not flamboyant at first. He was methodical. Haba na haba. Little by little. He exploited a basic weakness: shared administrative credentials for system updates. No multi-factor authentication, no role segregation. The IT manager believed trust was control, but it is not. From unallocated space on his company-issued laptop, we recovered fragments of a deleted WhatsApp database. The conversation was brief: a supplier account to be created, an invoice template shared, and a commission percentage agreed. The smoking gun was not dramatic; it was clinical. A hidden registry key that allowed remote access software to persist after apparent uninstallation. A scheduled task triggered at 2:17 a.m., when most of Kampala slept under aging roofs and distant boda bodas. Like a rat in the thatch of a grass-thatched roof: silent, coiled, invisible until the night you finally try to sleep.

Psychologically, the slip came when lifestyle outran salary. A sudden purchase of two boda bodas for relatives, school fees paid in cash, and a plot of land fenced in a matter of weeks. In a typical household, when a trusted worker begins buying things that outpace their known income, elders whisper. They do not accuse, but they observe. In corporations, we call them red flags. But they are the same human signals.

The law as a living thing

When we moved to preserve evidence, the real battle began. The Computer Misuse (Amendment) Act, 2022, is not just ink on paper. It is a living instrument. It recognizes unauthorized access, interference, and misuse of electronic systems as crimes with teeth. But it demands precision.
Under the Evidence Act (Cap 6), particularly the provisions governing admissibility of electronic records, the court requires proof that the electronic record was produced by a reliable system, in the ordinary course of use, and that its integrity was maintained. Integrity. That word again. So we imaged the hard drive using write-blockers. We generated MD5 and SHA-256 hash values at acquisition and re-verified them before analysis. Every transfer was logged, every device sealed, and every timestamp synchronized to a trusted time source. Chain of custody is not paperwork; it is a spine. Break it, and the body collapses. And the defense knew it.

The crucible of the courtroom

A Buganda court is not the National Theatre; it is chess. The defense lawyer was clever and smooth. He did not deny that suspicious payments occurred but attacked the process. There is a one-hour gap, he said, pointing to the evidence log. “Between 14:00 and 15:00, the device was in transit. Who had it? Where was it stored? Could it have been altered?” One hour. In digital forensics, one unaccounted hour can be portrayed as an eternity. He argued that the forensic image, despite matching hash values, could not be trusted because the physical custody documentation was imperfect. A leaking jerrycan, he suggested: no matter how pure the water, it could not be relied upon. This is where many cases die.

Knowing is the brother of guessing. You can suspect, infer, or feel in your bones that Suspect 1 orchestrated the fraud. But proving is the father of justice. Circumstantial evidence, the lifestyle changes, the WhatsApp fragments, and the vendor links painted a compelling picture. But what sealed the case was the server log. Cold, binary, and unemotional. At 02:17:34, the system recorded an administrative override from his credentials. At 02:18:02, the vendor bank details were changed. At 02:19:11, an invoice PDF was uploaded. Its hash differed from the procurement original by three characters.
The system, configured and operating normally, recorded these events automatically. That is what the Evidence Act requires: reliability of the system, not perfection of memory. The court accepted the logs, the hash values held, and the sanctity of digital integrity survived.
The breach is normal
What happened is not mysterious. Data left the system, and it moved faster than management reacted. Logs existed, but no one looked at them in time. Backups were running, but no one tested restoration under pressure. A notification clock started ticking the moment the data crossed a boundary it should never have crossed. That is the moment most organisations miss. Breaches are no longer exceptional events but operational facts. The abnormal part is not the intrusion; it is the hesitation that follows.

In the cases I see, the entry point is rarely sophisticated. A reused password. An old VPN account. A finance laptop shared with a child for online classes. A cloud storage bucket set to public because someone needed a file “just for today.” Technology does not fail dramatically. It fails quietly, one small permission at a time. Think of a breach like a hairline crack in a dam. The water does not arrive with a bang. It seeps. By the time you hear the sound, the pressure has already shifted downstream.

From an investigative standpoint, the first question is never “who did this?” It is “what moved, when, and under whose authority?” Data always leaves footprints. The problem is that many organisations overwrite them, rotate them away, or never collect them in the first place.

In Uganda, once personal data is exposed, the legal posture changes immediately. The Data Protection and Privacy Act, 2019 imposes a duty of security safeguards and timely notification to the regulator and affected data subjects. This is not a courtesy obligation. It is a statutory one. Delay converts a technical incident into a governance failure. At that point, responsibility shifts from IT to the organisation itself. Control rests with management. Liability follows control. This is where many leaders misstep. They treat the incident as an IT clean-up exercise. It is not, my dears. It is an evidence preservation exercise with regulatory consequences.
The moment a breach is suspected, routine system activity becomes risky. Auto-patching, log rotation, account resets, and even well-intended “hardening” can destroy the very artefacts needed to establish what happened. In several verified cases across the region, organisations lost the ability to defend themselves, not because the breach was severe, but because internal teams wiped volatile data before forensic capture. Courts and regulators do not speculate in your favour when the evidence is gone. Silence created by poor handling is interpreted as concealment or incompetence. Neither helps.

Technology matters here, but only if you understand it at a granular level. Logs are not one thing. There are authentication logs, application logs, database transaction logs, API gateway logs, cloud audit trails, endpoint artefacts, and mobile device caches. Each tells a different story. A login at 02:14 means nothing unless you correlate it with file access at 02:16 and outbound traffic at 02:18. Breach timelines are built minute by minute, not headline by headline.

I often tell boards that if money goes missing from a vault, you do not repaint the walls first. You freeze the scene. You count. You check physical access controls, cameras, and who knew what. You trace serial numbers. Data breaches are no different. Yet organisations rush to “fix” systems before they understand the loss. That instinct feels responsible. It is legally dangerous.

Another common blind spot is third-party exposure. Most breaches today pass through someone you trusted. A payroll processor. A CRM vendor. A marketing agency with API access. Contracts often mention security in vague language, but rarely specify log access, incident cooperation timelines, or evidence retention duties. When the breach happens, you discover too late that you do not control the records you need. Responsibility still sits with you. There is also a human layer that investigators watch closely.
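The minute-by-minute correlation described above amounts to normalising different log sources into one ordered timeline. A minimal sketch, with entirely hypothetical log entries; real sources would each need their own parser before the merge.

```python
# Each source logs differently; normalise everything to (timestamp, source, event).
auth_log = [("02:14:05", "auth", "login success user=finops01 from new IP")]
file_log = [("02:16:31", "files", "bulk read of payroll_export.csv")]
net_log = [("02:18:02", "network", "outbound transfer, 48 MB, unknown host")]

# Merge and sort: zero-padded HH:MM:SS strings sort correctly as text.
timeline = sorted(auth_log + file_log + net_log, key=lambda e: e[0])

for ts, source, event in timeline:
    print(f"{ts}  [{source:<7}] {event}")
```

Individually, each entry is explainable; read in sequence, the login, the heavy file access, and the outbound spike tell one story, which is why investigators insist on preserving every source before anything is "fixed".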
Internal messages after the incident. Who knew what, and when? Who decided not to escalate? Who used casual language in an email that later reads like indifference? Regulators read those messages. So do courts. Tone becomes evidence.

Winning organisations accept a hard truth early: prevention reduces frequency, not certainty. The real differentiator is readiness. That means having an incident protocol that starts with evidence, not blame. A legal hold that is triggered automatically. A named decision-maker who understands when to stop system changes. Pre-agreed notification thresholds. External forensic capability on standby, not negotiated in panic. The breach itself is not the scandal. The response is.

Every serious investigation I have handled ends the same way. The technology tells a clear story once it is respected. The law is predictable once obligations are understood. What remains unpredictable is leadership under pressure. If data is already out, the question is no longer how to avoid the incident. It is whether you will handle it in a way that contains the damage or multiplies it. At that point, control is no longer theoretical. It is yours to exercise, or to lose. Copyright IFIS 2026. All rights reserved.
Data, crime, and code collide
What happened is simple. A system recorded activity that did not belong there. Data moved out of its normal path, code executed in a way the organisation did not authorise, no alarms rang loudly enough, and by the time someone noticed, the trail was already cooling. This is where modern crime now lives: not in back rooms or dark alleys, but inside logs, databases, APIs, and forgotten admin panels.

When data, crime, and code collide, the first mistake leaders make is to treat it as a technical glitch. It is not. It is a crime scene made of instructions, timestamps, and digital residue. And like any crime scene, the earliest actions determine whether truth can be established or permanently lost. In one verified local enforcement matter involving unauthorised access to customer records at a financial institution, the intrusion itself was minor. The regulatory damage came later. Internal teams reset systems, rotated logs, and cleaned up before preserving evidence. By the time investigators arrived, the story could no longer be reconstructed with certainty. The regulator did not need to prove intent. Failure to safeguard and preserve was enough.

This is the hard lesson. Code executes in milliseconds, and the law moves more slowly. But the law expects you to respect those milliseconds. That is why students at the Institute of Forensics & ICT Security are first taught about preservation, legal hold, and chain of custody. Preserving critical evidence is an essential phase in any investigation.

At the technical level, most incidents unfold in small, boring steps. A credential is harvested, not hacked. A login succeeds because the system allows it. A query runs because access rights permit it. A file downloads because no one limited export volume. Crime here does not break doors; it walks through unlocked ones. Investigators rebuild these moments minute by minute. 09:41:22, a successful login from a new IP.
09:42:10, a database read heavier than normal. 09:43:55, outbound traffic spikes. Each event alone is explainable. Together, they form intent, and that is what you need to answer the “who”, “what”, “why”, and “how” of an investigation. That is why logs are evidence, not diagnostics. Authentication records, query histories, endpoint artefacts, cloud audit trails, and email headers. These are sworn witnesses, not housekeeping tools. Once altered, they cannot testify.

From a Ugandan legal perspective, once personal or confidential data is accessed unlawfully, several obligations crystallise immediately. The Data Protection and Privacy Act requires reasonable security safeguards, incident containment, and notification where there is a real risk of harm. The Computer Misuse Act criminalises unauthorised access and interference. Directors and officers are judged not on perfection, but on reasonableness and timeliness. This is where leadership exposure begins. Courts and regulators ask predictable questions. When did you know? What did you do next? Who decided? What records show that decision? Silence or delay is rarely neutral. It is read as a loss of control.

Technology teams often want to fix first; lawyers want to freeze first. Investigators insist on seeing before touching. The correct sequence matters. Patch too early, and you destroy volatile memory. Reset accounts too fast, and you erase lateral movement evidence. Restore backups before imaging systems, and you overwrite history with comfort. If a bank ledger is suspected of being manipulated, you do not rebalance accounts before copying the books. Digital systems deserve the same discipline.

Another collision point is third parties. Modern code ecosystems are porous by design. Payment gateways, analytics tools, HR platforms, marketing software. Each connection is a legal relationship and a technical dependency.
In a verified East African case involving a breached service provider, the primary institution was still held accountable because contracts did not guarantee access to forensic records. Responsibility followed custody of data, not blame.

Organisations with high digital trust design for this reality. They assume incidents will happen. They predefine who can halt system changes. They implement legal holds that trigger automatically. They test incident response the way banks test liquidity stress. Calm, rehearsed, documented. Most importantly, they respect that data is not abstract. It represents people. Customers. Staff. Citizens. Courts and regulators anchor decisions on that human impact, not on how advanced your firewall was.

When data, crime, and code collide, the technology will always speak. The question is whether you preserved its voice. In these moments, outcomes are shaped less by what the attacker did and more by how the organisation responded in the first hour. That hour decides whether the event remains a contained incident or becomes a lasting liability.
Donor trust is currency: managing fraud risk in NGOs
Donor trust is currency. Lose it, and the organisation bleeds quietly, long before the scandal reaches the papers. What happened was not dramatic. No midnight raids, no handcuffs. Just a routine donor review in Kampala that asked for transaction support behind a “capacity-building” line item. The documents arrived late. The numbers reconciled; barely. The underlying evidence did not. Mobile money confirmations without counterpart invoices. Shared email accounts approving payments. A finance officer who “kept passwords to help the team.” By the time lawyers were called, the house was not in order. Management was on alert; transactions were not balancing. This is how NGO fraud usually begins in Uganda. Not with villains, but with convenience.

I have investigated NGOs long enough to tell you this. Donors do not lose trust because money disappears; they lose trust because leaders cannot explain, provide evidence, or control how money moves. Fraud is a governance failure first. Theft is the last chapter. People steal because they know they will not be caught; if caught, they will not be prosecuted; and if prosecuted, the punishment will be lighter than the benefits. If you defraud, say, UGX 100m and the punishment costs you, say, UGX 20m, the net benefit of the fraud is UGX 80m. That is a huge ROI. That is what I mean by governance failure: where fraud pays more than integrity.

Let us be precise. The legal frame you are operating in is not vague. The Penal Code Act criminalises false accounting and theft. The Anti-Money Laundering Act imposes duties on suspicious transaction monitoring. The NGO Act places fiduciary responsibility squarely on directors and senior management. When donor funds are involved, contractual obligations tighten the noose. Breach is not academic; it is actionable. Once a red flag is raised, preservation of evidence becomes a legal duty, not an IT chore. This is where many NGOs step on the rake. Evidence is perishable. Emails auto-delete.
WhatsApp chats vanish with phone upgrades. Cloud logs roll over. The moment suspicion arises, litigation readiness matters. In the language of digital forensics, made practical by forensic experts like yours truly, you must preserve, collect, and review defensibly. Delay is spoliation. Spoliation turns a manageable investigation into an adverse inference. Courts do not reward organisations that meant well.

Here is the reality of how the money moved in the cases I see most. A program manager fronts field expenses in cash because the area is remote. Reimbursements follow on mobile money, approved by a supervisor who shares an inbox with finance. Receipts are photographed, not originals. The vendor is a cousin’s trading name. Splits keep transactions under approval thresholds. Month-end journals “true up” variances. The audit trail looks tidy. The substance is rotten. Technology did not cause this. Weak design did.

Winning NGOs design controls around behaviour, not policy binders. Start with segregation that survives staff shortages. No single person should ever initiate, approve, and reconcile. Where teams are lean, use system controls, not trust. Maker-checker enforced by software. Time-stamped approvals tied to individual credentials. Two-factor authentication for finance platforms. Read-only donor dashboards. These are not luxuries; they are hygiene.

Data analytics is not a buzzword here; it is a flashlight. Simple tests catch most schemes: round-amount analysis, weekend and after-hours payments, duplicate vendors with shared phone numbers, split transactions just below limits, reimbursements without GPS-consistent field reports. These are not PhD tools; they are a discipline, simple tests that an experienced eye focuses on.

Culture matters, but not in the way posters suggest. Staff copy what leaders tolerate. If executives override controls to move faster, you have trained the organisation to cheat politely. Tone at the top is not a speech.
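Three of the simple tests mentioned above can be expressed in a few lines. The records, vendor names, phone numbers, and the UGX 5 million approval threshold below are all hypothetical, chosen only to show the shape of each test.

```python
from collections import defaultdict

# Hypothetical payment records: (vendor, phone, amount in UGX).
payments = [
    ("Kato Supplies", "0772-000111", 4_900_000),
    ("Kato Suplies", "0772-000111", 4_800_000),   # near-duplicate name, same phone
    ("Mbarara Traders", "0701-222333", 5_000_000),  # suspiciously round amount
    ("Field Logistics", "0755-444555", 1_337_400),
]

APPROVAL_THRESHOLD = 5_000_000  # hypothetical second-approval limit

# Test 1: split transactions sitting just below the approval threshold.
just_below = [p for p in payments
              if 0.9 * APPROVAL_THRESHOLD <= p[2] < APPROVAL_THRESHOLD]

# Test 2: round amounts (exact multiples of one million).
round_amounts = [p for p in payments if p[2] % 1_000_000 == 0]

# Test 3: distinct vendor names sharing one phone number.
by_phone = defaultdict(set)
for vendor, phone, _ in payments:
    by_phone[phone].add(vendor)
shared_phones = {ph: vs for ph, vs in by_phone.items() if len(vs) > 1}

print(len(just_below), len(round_amounts), len(shared_phones))
```

None of these flags proves fraud on its own; they tell the experienced eye where to look first, which is exactly the discipline the paragraph describes.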
It is whether exceptions are documented, reviewed, and rare. When suspicion surfaces, the sequence matters. Freeze changes, preserve data, appoint independent counsel, define scope, and interview quietly. Do not broadcast, threaten, or promise outcomes. Keep a clean chain of custody. Remember: once you investigate, you own the findings. If you bury them, they will resurface, usually in a donor report written by someone else.

A metaphor I use with boards is the water meter. You do not wait for drought to check consumption. You watch flow daily. Spikes tell stories. Silence tells lies. Donor funds are no different. Visibility beats virtue. Boards should ask only three questions, repeatedly. Where does the money enter? Who touches it, and when? How do we know, today, not at year-end, that controls worked? If management cannot answer without adjectives, you have a problem.

The hardest truth for NGOs is that intent does not mitigate risk. Impact does not excuse weak controls. Courts and donors care about evidence. The ball is in your court to design systems that make wrongdoing hard, detection fast, and explanations boring. Trust is currency. Spend it on controls. Earn it with proof.
Click, pay, wait… never receive: the rise of online retail scams
It starts with a screenshot. A neat Instagram shopfront with clean product photos. Delivery in 24 hours. A WhatsApp number with a Kampala profile picture. You pay by mobile money because the seller says the card machine is down, and the discount is today only. They reply fast until the payment hits. Then the chat slows. Tomorrow becomes next week, next week becomes silence. When you call, the line is off. When you check the page, it has a new name, a new logo, and the same products. Uganda Police has been blunt about this pattern: non-delivery of goods and services paid for online, where the items are never received. This is not bad customer service; it is fraudulent design. You, the customer, are your own first line of cyber defence. But you need basic cyber hygiene to escape these online fraud schemes. The scheme is small, fast, and built for low reporting. It rarely needs hacking; it needs your impatience.

Minute case 1: The moving shop on social media

The scammer runs short-lived pages. They boost posts for a few days, harvest payments, then rebrand and repeat. The trick is not the product; the trick is the identity. Many victims focus on “where is my item?” The investigator focuses on who received the money, and where the money went next. The key evidence is boring but decisive: the transaction reference, recipient number, chat logs, delivery promises, and the page metadata. If you lose those, you weaken your case.

Minute case 2: The split-payment trap

You are told to pay a deposit first, then balance on delivery, then a fuel top-up, then “packaging.” Each micro-payment is designed to feel recoverable. Together, they become a meaningful loss. This is behavioural fraud: small commitments stacked into obedience.

Minute case 3: The false courier confirmation

A fake rider calls you and reads your own details back to you to sound legitimate. They claim the package is at the stage, but you must pay for clearance.
In reality, there is no rider, only a second phone controlled by the same person or a collaborator. Technology makes this easy. Caller ID can be spoofed. NITA-U warns consumers not to trust caller ID at face value because it can be faked.

Minute case 4: The OTP harvest disguised as delivery

The seller says you must confirm using an OTP to release the parcel. That OTP is actually for your wallet, your bank app, or your payment account. Uganda’s regulator community has repeatedly warned the public not to share PINs or verification codes. This is the point where online retail fraud blends into account takeover.

The legal position: you are your own best defender

Many victims want the police to recover the money quickly. That is not how criminal law works. Uganda Police has stressed that money-related offences such as obtaining by false pretences and theft are reportable crimes, but the police are not a debt collection agency. Your complaint must be framed as an offence, backed by evidence, not as a customer service dispute. Two Ugandan statutes matter immediately. Under the Computer Misuse Act, electronic fraud is an offence with significant penalties on conviction, and the Act also addresses conduct such as unauthorised disclosure of access codes. Under the Electronic Transactions Act, the law sets consumer-protection expectations for e-commerce, including requirements around information presented to consumers and remedies such as cancellation and refund in defined circumstances. The practical reality is that your strongest leverage is not moral outrage. It is your evidence package.

Evidence: treat your phone like a crime scene

Most victims destroy their own case while trying to talk sense into the scammer. If you want a real chance at action, do what litigators and e-discovery experts insist on: preserve first, then pursue.
Mr Strategy of Summit Consulting Ltd’s work is focused on the preservation-duty mindset: prompt, proper preservation steps matter, and a preservation letter to the other side is different from an internal legal hold notice. That means you stop negotiating in circles and you lock down proof. Keep the full chat history; do not delete messages; export the chat if possible. Screenshot the page, the posts, the promises, and the comments, including dates and handles. Record the payment details: transaction ID, recipient number, names displayed, and timestamps. Capture delivery claims: dispatch note, tracking, rider number. If you clicked a link, keep the link. Do not clean up your browser history. Then report through credible channels. NITA-U points consumers to a reporting channel for online transaction scams. If the scam used a bank or telecom rail, notify the provider immediately. Timing matters because fraud proceeds move fast.

Why is this rising now?

Online retail scams thrive when three conditions exist: low-friction payments, weak identity assurance, and social media trust shortcuts. Uganda has fast payment rails and high social commerce usage. That is a growth engine and a crime engine in the same breath. Media reporting has also highlighted the broader rise of online scams and their reputational impact. Regulators have responded in adjacent areas, too. The Financial Intelligence Authority has issued public warnings about fraudulent websites that impersonate institutions and solicit payments. The pattern is the same: false legitimacy plus payment pressure.

What changes the game?

Boards and executives often misread online retail fraud as a consumer problem. It is now a brand risk and a payment-ecosystem problem. The winners will build proof into the transaction. If you run a legitimate online retail operation, the control standard is rising.
The Uganda Communications Commission has publicly cautioned e-commerce firms in the consumer context, signalling that scrutiny is not theoretical. Operationally, serious platforms will reduce pay-first-to-strangers risk by using escrow-like settlement or payment-on-delivery with verified riders and verifiable tracking. They will bind identity to transaction using stronger KYC signals, device reputation, and consistent merchant identifiers, not just a phone number and a logo. And they will retain logs: order creation, edits, messages, and IP/device data where lawful.
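The preserve-first discipline described above can be sketched in a few lines of Python: hash every evidence file at the moment of collection, so any later copy can be verified against the original. This is a minimal sketch, not a forensic tool; the file names and manifest path are illustrative assumptions.

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Compute a SHA-256 digest in chunks so large chat exports fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(evidence_files, manifest_path="evidence_manifest.json"):
    """Record name, size, and hash of each item at collection time.

    The manifest itself becomes part of the evidence package: if a file's
    hash later differs from the manifest entry, the copy has been altered.
    """
    manifest = {
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
        "items": [
            {"file": p, "bytes": os.path.getsize(p), "sha256": sha256_of(p)}
            for p in evidence_files
        ],
    }
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```

A victim, or the investigator they hire, would run this once over the exported chats, screenshots, and statements, then store the manifest separately. The design choice mirrors the hash-value discipline used in forensic imaging: integrity is demonstrated, not assumed.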
Your phone, their ATM: the dark side of mobile money convenience
Money left a corporate account without malware, without a breached firewall. It moved because the system accepted it. The phone in an employee's hand behaved like an ATM that never sleeps. Between 2015 and 2018, more than UGX 2.2 billion was drained through a bank's online platform. Transactions were initiated, verified, and approved inside the customer's own environment. Some instructions carried mismatched account names and numbers. The bank processed them, but the customer did not catch them in time. When the case reached court, the question was not whether fraud occurred; it was who had the last clear chance to stop it. That question put the ball squarely in both courts.

How this kind of fraud actually works
An employee logs in from a known device. No alert. A payment file is uploaded; the account number is correct, and the account name is close enough to pass a human glance. The same employee verifies the transaction. Segregation exists on paper, not in practice. Approval is granted, and the platform allows it because the roles were never properly split. A few minutes later, funds leave the account. Proceeds are split into smaller transfers: some to bank accounts, some to mobile money, and some to intermediaries. The trail cools. No hacking, no brute force. Just speed, familiarity, and silence. This is why mobile money and online banking fraud scale. It does not attack systems. It uses permissions.

The case that comes to mind…
In Abacus Parenteral Drugs Ltd v. Stanbic Bank (U) Ltd, decided on 9 April 2025, the High Court of Uganda refined a principle first set out in Aida Atiku v. Centenary Rural Development Bank Ltd. The Court said something many practitioners know but few contracts admit: fraud prevention in digital banking is shared. Not abstractly. Practically and contractually. The bank argued its platform was customer-controlled. Initiation, verification, and approval sat with the client.
The customer argued the bank breached its own agreement by processing instructions with obvious discrepancies, including mismatched account details. The Court partly agreed with both.

What the Court actually did
First, it confirmed the relationship is contractual. Every clause matters. If the bank agrees to reject erroneous instructions, that duty is enforceable. If the customer agrees to segregate duties and protect credentials, that duty is also enforceable. Second, it looked at control. Who was best positioned, at each point, to stop the fraud? The customer failed to maintain basic internal controls. One officer could initiate and approve, account activity was not monitored, and security protocols were breached. On that basis, the customer carried the larger share of blame. But the bank was not excused. Processing transactions where the account name did not match the account number was a red flag. Failing to act on that was a breach of duty. The result was apportionment. Of the UGX 1,698,000,000 proven loss, the bank paid 20 percent. The customer carried 80 percent. That split matters. It signals how courts will think going forward.

The legal signal regulators and bankers should not miss
This decision extends the Atiku principle to corporate clients. Liability follows comparative negligence. Courts will ask a simple, uncomfortable question: who had the last clear chance to prevent the loss? Limitation of liability clauses will not save you if they are vague. Ambiguity cuts against the drafter. If a clause does not clearly describe what is excluded and why, expect it to fail. Most importantly, authorization is not the same as safety. A transaction can be valid in form and defective in substance. When banks ignore obvious discrepancies, the ball comes back into their court.

Technology, without romance
Banks like to say platforms are customer-controlled. That is only half true. Banks design the rails.
They set tolerance levels, decide whether name-number mismatches hard-stop or soft-pass, and choose whether overrides trigger alerts or logs that no one reads. Customers, on the other hand, control access: who has tokens, who approves, and who can act alone. When segregation collapses, the system will not rescue you. In this case, technology did exactly what it was configured to do. That is the problem.

What corporate clients must change immediately
One person must never initiate and approve. Not because policy says so, but because courts now expect it. Account reviews must be daily, not monthly. If you cannot explain a transaction within 24 hours, you do not control it. Credentials are not administrative details but legal liabilities. When an employee acts with your access, the law treats it as your act unless you can prove otherwise. If you do not know what your online banking agreement requires of you, assume it requires more than you are doing.

What banks must stop pretending
Fraud detection is not optional support; it is a contractual duty once promised. Name-number mismatches are not clerical issues. They are warning signs. Platforms that rely entirely on customer discipline will fail in court if obvious red flags are ignored. When banks have data that customers do not, silence becomes negligence. Your phone can function like an ATM for you, or for someone who understands your routines better than you do. Digital convenience has shifted risk, not removed it. Courts are responding by reallocating responsibility to whoever could have acted sooner. In this landscape, fraud prevention is not a slogan. It is evidence. Logs, controls, and decisions made in time. If fraud passes through your system, the law will ask where the ball was and why you did not pick it up.

Copyright Summit Consulting Ltd 2026, All rights reserved.
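The hard-stop versus soft-pass choice discussed in this chapter can be made concrete. Below is a minimal sketch of name-number mismatch screening: the registered account name is compared against the name on the instruction, and cosmetic differences (case, punctuation, spacing) are stripped before comparison so they cannot hide a real discrepancy. The similarity threshold and the function names are illustrative assumptions, not any bank's actual configuration.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Collapse case, punctuation, and spacing so cosmetic edits do not mask a mismatch."""
    kept = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(kept.split())

def screen_instruction(instructed_name: str, registered_name: str,
                       hard_stop_below: float = 0.85) -> str:
    """Return 'pass' for an exact match, 'review' for a near match, 'hard-stop' otherwise.

    A near match (one dropped letter, an added space) is exactly the pattern
    that passes a human glance, so it is routed to review instead of straight through.
    """
    a, b = normalize(instructed_name), normalize(registered_name)
    if a == b:
        return "pass"
    similarity = SequenceMatcher(None, a, b).ratio()
    return "review" if similarity >= hard_stop_below else "hard-stop"
```

The design point is the middle outcome: a system configured only with pass and fail treats a one-letter mirror vendor as a pass. Adding a review band forces a human, with alerting, into exactly the cases the fraudster relies on.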
The money moved, the system agreed, and the responsibility became ours
On a Thursday afternoon, the liquidity report was clean. By Monday morning, UGX 4.7 billion had moved out of the institution without triggering a single alert. No external breach trails, no malware. Just approved transactions that looked normal because the people approving them were trusted. That is the moment the case came to my attention. I was the investigator assigned to answer one question: did this money move within policy, or did policy simply fail to see it?

The mechanics, minute by minute
At 10:12 a.m., a temporary limit increase was applied to a dormant corporate account. The request cited "urgent supplier settlement." The approver was authorized, and the reason field was vague but acceptable. At 10:18 a.m., the account received three inward transfers from internal suspense accounts. Each amount was below the threshold that requires second-level review. At 10:27 a.m., the funds were split. Some went to mobile money, some to two newly onboarded accounts, and one transfer went to a cooperative SACCO with a clean history. At 11:03 a.m., the temporary limit was reversed. By lunch, the system showed nothing unusual; by the close of business, the trail was cold. Modern fraud does not fight controls. It walks between them.

Why did the controls not stop it?
The institution had policies. Strong ones: a credit policy, a transaction approval matrix, KYC procedures aligned with regulations. On paper, it was solid. In practice, three things broke. First, trust had replaced verification. Senior staff overrides were rarely challenged. The system logged them, but no one reviewed the logs daily. Second, speed had become a performance metric. Staff were rewarded for turnaround time, not for clean documentation. When speed wins, evidence loses. Third, technology was treated as neutral. It is not. Every system has blind spots. Fraudsters study those blind spots better than most IT teams.

The legal reality
From a legal standpoint, this case hinged on intent and duty.
The transactions were authorized, which meant criminal liability was not automatic. To prosecute, we had to prove conspiracy, abuse of office, and intent to defraud under financial crimes statutes. That required evidence beyond numbers. Emails. Chat logs. Patterns of behavior. The timing showed coordination. Until that threshold is met, the law is clear. The institution carries the loss, and the ball stays in your court until you can prove otherwise. This is where many cases die. Not because fraud did not happen, but because evidence was collected too late or handled poorly.

The technology angle, stripped of hype
When it comes to investigations, evidence is everything. Preservation of evidence is critical; without it, everything else collapses. We did not use artificial intelligence. We used discipline. We pulled raw transaction logs, not reports. We rebuilt the sequence manually. We mapped user IDs to physical terminals. We compared working hours to transaction timestamps. One detail mattered: the same approvals happened when one specific supervisor was on duty, even when different staff appeared to be involved. That told us where to look. We also reviewed system access rights. Two users had retained privileges they no longer needed after role changes. That gap alone breached internal policy and strengthened the case. Technology does not catch fraud. People who understand how systems behave do.

The human layer
One suspect was not greedy. He was cornered. Medical bills, school fees, and a loan denied by the same institution he worked for. Another was opportunistic. He saw the gap and monetized it. This matters because prevention is not just about blocking bad actors; it is about removing conditions that make bad decisions easier. Rotate staff, enforce leave, and review overrides daily. These are not HR rituals. They are control mechanisms.

What regulators should take from this
Do not ask for frameworks.
Ask for evidence of use: request override logs for random weeks, ask who reviewed them and when, and demand proof of follow-up. When you rely on annual reports, you regulate history. Fraud happens in real time. Also, fix accountability. When losses occur, responsibility should not stop at the teller or officer. Senior management decisions create the environment where fraud either survives or fails.

If you are a banker, this is what you should do tomorrow morning
Stop waiting for perfect systems. Start with habits. Review exceptions daily, separate speed from reward, document intent, not just approval, and protect staff who raise concerns early. Understand this: if you cannot explain a transaction clearly to a prosecutor, you do not control it.

Why this work matters
When fraud happens, money moves first. Trust follows slowly, if at all. In Uganda, every failure in a financial institution pushes people back into cash, into informality, into risk. That cost never appears on the balance sheet, but it is real. This case ended with partial recovery, disciplinary action, and one criminal file ready for court. It was not perfect. It was sufficient. Fraud risk is not about heroics; it is about seeing clearly, acting early, and knowing when the law says the ball is in your court. That is the job.

Copyright IFIS and Summit Consulting forensics team, 2026. All rights reserved.
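The correlation that pointed the investigation in the right direction, approvals clustering inside one supervisor's shifts, needs nothing more exotic than a join between two logs. A minimal sketch of that discipline, with invented field names and sample data; real work would run this over raw system logs and the duty roster, not reports.

```python
from collections import Counter
from datetime import datetime

def supervisor_overlap(approval_times, shifts):
    """Count, per supervisor, how many approvals fell inside their shifts.

    approval_times: list of datetime stamps from the transaction log.
    shifts: list of (supervisor, shift_start, shift_end) from the duty roster.
    A lopsided count is not proof, but it tells the investigator where to look.
    """
    counts = Counter()
    for ts in approval_times:
        for name, start, end in shifts:
            if start <= ts <= end:
                counts[name] += 1
    return counts

# Illustrative data only: two shifts, three approvals.
shifts = [
    ("supervisor_A", datetime(2025, 3, 7, 8, 0), datetime(2025, 3, 7, 17, 0)),
    ("supervisor_B", datetime(2025, 3, 8, 8, 0), datetime(2025, 3, 8, 17, 0)),
]
approvals = [
    datetime(2025, 3, 7, 10, 12),
    datetime(2025, 3, 7, 10, 18),
    datetime(2025, 3, 8, 9, 0),
]
overlap = supervisor_overlap(approvals, shifts)
```

In the case above, the telling pattern was that the approving user IDs varied while the on-duty supervisor did not; this join surfaces exactly that asymmetry.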
Why familiar faces in an organization often hide the biggest risks
A cybersecurity case leaders rarely want to hear
The breach did not come through the firewall. It came through a smile. On paper, the organisation was doing everything right. Firewalls patched. Antivirus updated. External penetration tests passed. The board slept well, believing the risk lived "outside": hackers in hoodies, foreign IP addresses, dark web threats. The reality was closer. Much closer. The breach began with a familiar face. A long-serving systems administrator. Ten years in. Trusted and dependable. Always available when systems went down at night. The kind of person whose access requests were approved without hesitation. That is how most serious cyber incidents begin. Familiarity is not trust. It is exposure. In cybersecurity, the biggest threat is not malicious intent. It is unquestioned access. The administrator did not set out to steal data. That is important. He was under pressure: personal debt, school fees, side hustles. He reused credentials across systems "to save time." He disabled certain logs because "they slowed the system." He shared admin passwords informally with a colleague during a crisis and never rotated them back. None of this triggered alarms. Why would it? He was one of "us." Then a phishing email landed in his inbox. Not dramatic. Not sophisticated. It referenced an internal system upgrade and used language copied from previous internal emails. He clicked. The attacker did not need to break in. They were invited. Within hours, lateral movement had begun. Privileged access meant the attacker could see everything: user directories, financial systems, and backups. The breach went undetected for weeks because the activity looked normal. It was executed using a legitimate account, at normal hours, by someone who had always been there. This is the part boards struggle with: cybersecurity fails socially before it fails technically.

Why familiar faces are the hardest risks to see
First, they blend into noise.
Security teams are trained to look for anomalies. Familiar users generate none. Their behaviour becomes the baseline. Second, leaders override controls for them. "He needs quick access." "She has been here for years." Temporary exceptions accumulate. Cyber risk compounds quietly through kindness. Third, reporting lines blur. When someone is both critical to operations and deeply trusted, no one wants to challenge them. Reviews become ceremonial. Access recertification becomes box-ticking. I have seen this pattern across banks, universities, and government agencies. The longest-serving staff often carry the widest, least-reviewed access. Not because they are bad, but because no one ever went back to redesign the system around growth.

The red flags that were missed
In this case, the signs were there. System logs were thinner than expected. Privileged access had never been reduced despite role changes. Backups were accessible from production accounts. Alerts were configured, but no one reviewed them regularly. Most tellingly, cybersecurity was still framed as an IT issue. The board asked about tools, not behaviours. Budgets were approved for software, not for access governance or insider threat modelling. The breach was discovered only after customer data appeared on a forum. By then, the question was no longer "how did this happen?" but "why didn't we see it coming?"

The lesson for leaders
Cybersecurity maturity is measured by how you manage the people you trust most. If your most familiar faces have never had their access challenged, you are exposed. If your cybersecurity dashboards never discuss insider risk, you are blind. If your board is comfortable, your organisation is not safe. This is not an argument for suspicion. It is an argument for discipline. Trust should trigger stronger controls, not weaker ones.

What boards and executives must do differently
Reframe cybersecurity as an organisational risk, not a technical one.
Demand regular privileged access reviews led independently of IT. Rotate credentials ruthlessly. Separate operational heroism from control design. Most importantly, ask one contrarian question at the board level: “If I wanted to steal from us quietly, who would I need to be?” The answer is rarely a stranger. Cybersecurity does not fail because leaders do not care. It fails because they care selectively. And familiarity is the most dangerous selection bias of all.
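The independent access review this chapter calls for can start as a simple diff between what each person's current role should grant and what their account actually holds. A minimal sketch; the role names and entitlements are invented for illustration, and a real review would pull this data from the identity system rather than hand-built dicts.

```python
def excess_privileges(accounts, role_entitlements):
    """Flag accounts holding privileges beyond what their current role grants.

    accounts: dict of user -> {"role": str, "privileges": set of str}
    role_entitlements: dict of role -> set of privileges that role should have.
    Returns dict of user -> sorted list of privileges they should not hold.
    An unknown role grants nothing, so every privilege on it is flagged.
    """
    findings = {}
    for user, info in accounts.items():
        allowed = role_entitlements.get(info["role"], set())
        excess = info["privileges"] - allowed
        if excess:
            findings[user] = sorted(excess)
    return findings

# Illustrative data: one user changed role but kept old access.
roles = {
    "teller": {"post_txn"},
    "sysadmin": {"post_txn", "db_admin", "backup_restore"},
}
accounts = {
    "user_moved_to_teller": {"role": "teller",
                             "privileges": {"post_txn", "db_admin"}},
    "current_sysadmin": {"role": "sysadmin",
                         "privileges": {"post_txn", "db_admin"}},
}
stale = excess_privileges(accounts, roles)
```

The point of keeping it this simple is governance, not tooling: the review is only independent if someone outside IT owns the role-entitlement table and runs the diff on a schedule, which is exactly the discipline the longest-serving, most trusted accounts tend to escape.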
The night the numbers stopped making sense
It happened quietly, and that is how most frauds begin in Uganda: not with alarms, but with silence. On a Tuesday evening in late August, long after the office had emptied and boda bodas thinned on the road, a junior accountant stayed behind at a mid-sized institution on the outskirts of Kampala. The reason sounded noble: month-end reconciliations. The reality was darker. By the time the security guard locked the gate, UGX 1.48 billion had already begun its journey out of the organization. No gun, no hacking, no drama. Just a series of approvals that looked routine, numbers that felt familiar, and people who trusted each other a little too much. This was not a failure of intelligence. It was a failure of discipline. I know this pattern well. I have walked into too many boardrooms where leaders say, "We trusted our people," as if trust were a control. It is not. Trust is a sentiment. Controls are systems.

The internal conflict that opened the door
Every serious fraud starts with tension, not greed. Suspect 1 was competent, respected, and exhausted. A mid-career professional, sharp suit, soft-spoken, known for "saving the day" during audits. But for two years, his promotion had stalled. New managers arrived. Younger. Louder. Less experienced. Suspect 2 was different. Charismatic. A fixer. He knew everyone, from mobile money agents to suppliers to the cashier at the bank branch. He thrived in the informal spaces where rules bend. Their conflict was not personal. It was structural. The organisation had grown, but its controls had not. Roles overlapped. Segregation of duties existed on paper, not in practice. Management prized speed over process. "Just make it work" had become the unofficial strategy. That is where fraud feeds.

How the scheme was engineered
This was not sophisticated. That is what makes it dangerous. The scheme had three moving parts. First, dormant supplier accounts.
Over the years, dozens of suppliers had been onboarded for projects that no longer existed. Their profiles were never deactivated. Bank details sat quietly in the system, untouched but alive. Second, manual overrides. The finance system allowed senior staff to bypass certain approval thresholds "temporarily." Temporary, in Uganda, often means forever. Third, mobile money as the bridge. Instead of moving funds directly to personal accounts, which would raise flags, money was routed through supplier accounts, then broken down into smaller mobile money transfers: UGX 20 million here, UGX 15 million there. Familiar amounts. Normal-looking flows. Suspect 1 processed the entries. Suspect 2 handled the distribution. A third person, never formally identified, provided cover by delaying reconciliations. No single transaction looked suspicious. That is the genius of bad systems.

The money trail no one wanted to follow
When Summit Consulting Ltd was called in, the brief was simple: "Just help us confirm the numbers." That sentence always worries me. We started where most internal teams avoid: timing. Not amounts. Timing. Why were supplier payments peaking on Fridays? Why did mobile money transfers spike between 6:30 pm and 8:00 pm? Why were reconciliations always postponed to "next week"? We mapped three months of transactions against staff attendance, system logs, and mobile money statements. Patterns emerged quickly. Supplier A received UGX 320 million over six weeks, for services last rendered three years ago. Supplier B's bank account showed immediate cash withdrawals, followed by mobile money deposits to numbers registered under different names but sharing the same national ID photo. This is Uganda. Paper lies. Patterns don't.

The red flags the auditor finally trusted
The turning point came from an auditor who almost ignored his instinct. He noticed something small: rounding. Payments consistently ended in double zeros. Real operational payments are messy.
They include odd figures, fuel adjustments, tax differences, and decimals. Fraud likes clean numbers. Then came the lifestyle lag. No flashy cars. No mansions. Instead, school fees paid in cash. Loans settled early. Quiet generosity. Fraudsters in Uganda often hide by being modest. Most importantly, there was fear. Staff avoided certain questions. Files went missing, meetings were postponed, and silence thickened. Fear is the loudest red flag.

How internal controls were bypassed
Let us be clear. The controls did not fail. They were never real. Segregation of duties existed, but the same people covered for each other during "busy periods." User access reviews were performed annually, long after damage was done. The board received dashboards, not discomfort. No one asked the most important question: "Show me how this could be abused." I say this from experience. Boards prefer assurance. Fraud thrives on reassurance.

The moment the case cracked
The case broke not through technology, but through conversation. A junior staff member, quiet and observant, mentioned casually that Suspect 2 always knew when funds would hit certain accounts, even before system notifications went out. That is insider knowledge. That is coordination. We cross-checked call logs against transaction timestamps. The correlation was near-perfect. At that point, the numbers no longer argued. They confessed.

The cost, counted honestly
The total loss stood at UGX 1,482,600,000. Recoverable? Partially. Some funds had been converted to cash. Some invested informally. Some gone. But the real loss was trust between staff, management, and the board. And trust, once broken, costs more to rebuild than any balance sheet can show.

What leaders must confront
Fraud is rarely about bad people. It is about lazy systems, unclear accountability, and leaders who confuse activity with control. If your best defence is "we trust our people," you are already exposed. If reconciliations can wait, so can fraud detection.
If no one is uncomfortable in your boardroom, someone is comfortable stealing. I have carried coffins in Munteme village. I have watched savings groups collapse because no one wanted to ask hard questions. The lesson is the same at every level: darkness is not evil. It is merely unattended. Fraud does not announce itself. It waits for permission. And permission is often silent.
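The auditor's rounding instinct from the case above can be codified: measure what share of each supplier's payments are perfectly round amounts, and flag the suppliers whose ledgers are suspiciously clean. This is a hedged sketch, not a detection product; the modulus (UGX 10,000) and the 90 percent threshold are illustrative assumptions an investigator would tune against the institution's own payment history.

```python
def round_amount_share(amounts, modulus=10_000):
    """Fraction of payments that are exact multiples of `modulus` (e.g. UGX 10,000)."""
    if not amounts:
        return 0.0
    return sum(1 for a in amounts if a % modulus == 0) / len(amounts)

def flag_round_numbers(payments_by_vendor, modulus=10_000, threshold=0.9):
    """Vendors whose payments are suspiciously clean.

    Real operational payments carry fuel adjustments, tax differences, and
    odd figures; a vendor whose amounts are nearly all round multiples
    earns a closer look, not an accusation.
    """
    return {
        vendor: share
        for vendor, amounts in payments_by_vendor.items()
        if (share := round_amount_share(amounts, modulus)) >= threshold
    }

# Illustrative ledgers: one clean, one messy.
payments = {
    "Supplier A": [20_000_000, 15_000_000, 50_000_000],
    "Supplier C": [1_238_450, 7_804_120, 560_300],
}
flagged = flag_round_numbers(payments)
```

A flag here is a starting point for the timing and lifestyle work the chapter describes, which is the honest limit of any single red-flag test: clean numbers invite questions; they do not answer them.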