Money left a corporate account without malware, without a breached firewall. It moved because the system accepted it. The phone in an employee’s hand behaved like an ATM that never sleeps. Between 2015 and 2018, more than UGX 2.2 billion was drained through a bank’s online platform. Transactions were initiated, verified, and approved inside the customer’s own environment. Some instructions carried mismatched account names and numbers. The bank processed them, but the customer did not catch them in time. When the case reached court, the question was not whether fraud occurred; it was who had the last clear chance to stop it. That question put the ball squarely in both parties’ courts.

How this kind of fraud actually works

An employee logs in from a known device. No alert. A payment file is uploaded, the account number is correct, and the account name is close enough to pass a human glance. The same employee verifies the transaction. Segregation exists on paper, not in practice. Approval is granted, and the platform allows it because the roles were never properly split. A few minutes later, funds leave the account. Proceeds are split into smaller transfers: some to bank accounts, some to mobile money, and some to intermediaries. The trail cools. No hacking, no brute force. Just speed, familiarity, and silence. This is why mobile money and online banking fraud scale. It does not attack systems. It uses permissions.

The case that comes to mind

In Abacus Parenteral Drugs Ltd v. Stanbic Bank (U) Ltd, decided on 9 April 2025, the High Court of Uganda refined a principle first set out in Aida Atiku v. Centenary Rural Development Bank Ltd. The Court said something many practitioners know but few contracts admit: fraud prevention in digital banking is shared. Not abstractly. Practically and contractually. The bank argued its platform was customer-controlled. Initiation, verification, and approval sat with the client.
The customer argued the bank breached its own agreement by processing instructions with obvious discrepancies, including mismatched account details. The Court partly agreed with both.

What the Court actually did

First, it confirmed the relationship is contractual. Every clause matters. If the bank agrees to reject erroneous instructions, that duty is enforceable. If the customer agrees to segregate duties and protect credentials, that duty is also enforceable. Second, it looked at control. Who was best positioned, at each point, to stop the fraud? The customer failed to maintain basic internal controls. One officer could initiate and approve, account activity was not monitored, and security protocols were breached. On that basis, the customer carried the larger share of blame. But the bank was not excused. Processing transactions where the account name did not match the account number was a red flag. Failing to act on that was a breach of duty. The result was apportionment. Of the UGX 1,698,000,000 proven loss, the bank paid 20 percent. The customer carried 80 percent. That split matters. It signals how courts will think going forward.

The legal signal regulators and bankers should not miss

This decision extends the Atiku principle to corporate clients. Liability follows comparative negligence. Courts will ask a simple, uncomfortable question: who had the last clear chance to prevent the loss? Limitation of liability clauses will not save you if they are vague. Ambiguity cuts against the drafter. If a clause does not clearly describe what is excluded and why, expect it to fail. Most importantly, authorization is not the same as safety. A transaction can be valid in form and defective in substance. When banks ignore obvious discrepancies, the ball comes back into their court.

Technology, without romance

Banks like to say platforms are customer-controlled. That is only half true. Banks design the rails.
They set tolerance levels, decide whether name-number mismatches hard-stop or soft-pass, and choose whether overrides trigger alerts or logs that no one reads. Customers, on the other hand, control access: who has tokens, who approves, and who can act alone. When segregation collapses, the system will not rescue you. In this case, technology did exactly what it was configured to do. That is the problem.

What corporate clients must change immediately

One person must never initiate and approve. Not because policy says so, but because courts now expect it. Account reviews must be daily, not monthly. If you cannot explain a transaction within 24 hours, you do not control it. Credentials are not administrative details but legal liabilities. When an employee acts with your access, the law treats it as your act unless you can prove otherwise. If you do not know what your online banking agreement requires of you, assume it requires more than you are doing.

What banks must stop pretending

Fraud detection is not optional support; it is a contractual duty once promised. Name-number mismatches are not clerical issues. They are warning signs. Platforms that rely entirely on customer discipline will fail in court if obvious red flags are ignored. When banks have data that customers do not, silence becomes negligence. Your phone can function like an ATM for you, or for someone who understands your routines better than you do. Digital convenience has shifted risk, not removed it. Courts are responding by reallocating responsibility to whoever could have acted sooner. In this landscape, fraud prevention is not a slogan. It is evidence: logs, controls, and decisions made in time. If fraud passes through your system, the law will ask where the ball was and why you did not pick it up.

Copyright Summit Consulting Ltd 2026. All rights reserved.
The money moved, the system agreed, and the responsibility became ours
On a Thursday afternoon, the liquidity report was clean. By Monday morning, UGX 4.7 billion had moved out of the institution without triggering a single alert. No external breach trail, no malware. Just approved transactions that looked normal because the people approving them were trusted. That is the moment the case came to my attention. I was the investigator assigned to answer one question: did this money move within policy, or did policy simply fail to see it?

The mechanics, minute by minute

At 10:12 a.m., a temporary limit increase was applied to a dormant corporate account. The request cited “urgent supplier settlement.” The approver was authorized, and the reason field was vague but acceptable. At 10:18 a.m., the account received three inward transfers from internal suspense accounts. Each amount was below the threshold that requires second-level review. At 10:27 a.m., the funds were split. Some went to mobile money, some to two newly onboarded accounts, and one went to a cooperative SACCO with a clean history. At 11:03 a.m., the temporary limit was reversed. By lunch, the system showed nothing unusual; the trail was cold by the close of business. Modern fraud does not fight controls. It walks between them.

Why the controls did not stop it

The institution had policies. Strong ones. A credit policy. A transaction approval matrix. KYC procedures aligned with regulations. On paper, it was solid. In practice, three things broke. First, trust had replaced verification. Senior staff overrides were rarely challenged. The system logged them, but no one reviewed the logs daily. Second, speed had become a performance metric. Staff were rewarded for turnaround time, not for clean documentation. When speed wins, evidence loses. Third, technology was treated as neutral. It is not. Every system has blind spots. Fraudsters study those blind spots better than most IT teams.

The legal reality

From a legal standpoint, this case hinged on intent and duty.
The transactions were authorized, which meant criminal liability was not automatic. To prosecute, we had to prove conspiracy, abuse of office, and intent to defraud under financial crimes statutes. That required evidence beyond numbers. Emails. Chat logs. Patterns of behavior. The timing showed coordination. Until that threshold is met, the law is clear: the institution carries the loss, and the ball stays in your court until you can prove otherwise. This is where many cases die. Not because fraud did not happen, but because evidence was collected too late or handled poorly.

The technology angle, stripped of hype

When it comes to investigations, evidence is everything. Preservation of evidence is critical; without it, everything else collapses. We did not use artificial intelligence. We used discipline. We pulled raw transaction logs, not reports. We rebuilt the sequence manually. We mapped user IDs to physical terminals. We compared working hours to transaction timestamps. One detail mattered: the same approvals happened when one specific supervisor was on duty, even when different staff appeared to be involved. That told us where to look. We also reviewed system access rights. Two users had retained privileges they no longer needed after role changes. That gap alone breached internal policy and strengthened the case. Technology does not catch fraud. People who understand how systems behave do.

The human layer

One suspect was not greedy. He was cornered. Medical bills, school fees, and a loan denied by the same institution he worked for. Another was opportunistic. He saw the gap and monetized it. This matters because prevention is not just about blocking bad actors; it is about removing conditions that make bad decisions easier. Rotate staff, enforce leave, and review overrides daily. These are not HR rituals. They are control mechanisms.

What regulators should take from this

Do not ask for frameworks.
Ask for evidence of use. Request override logs for random weeks, ask who reviewed them and when, and demand proof of follow-up. When you rely on annual reports, you regulate history. Fraud happens in real time. Also, fix accountability. When losses occur, responsibility should not stop at the teller or officer. Senior management decisions create the environment where fraud either survives or fails.

If you are a banker, this is what you should do tomorrow morning

Stop waiting for perfect systems. Start with habits. Review exceptions daily. Separate speed from reward. Document intent, not just approval. Protect staff who raise concerns early. And understand this: if you cannot explain a transaction clearly to a prosecutor, you do not control it.

Why this work matters

When fraud happens, money moves first. Trust follows slowly, if at all. In Uganda, every failure in a financial institution pushes people back into cash, into informality, into risk. That cost never appears on the balance sheet, but it is real. This case ended with partial recovery, disciplinary action, and one criminal file ready for court. It was not perfect. It was sufficient. Fraud risk is not about heroics; it is about seeing clearly, acting early, and knowing when the law says the ball is in your court. That is the job.

Copyright IFIS and Summit Consulting forensics team, 2026. All rights reserved.
Why familiar faces in an organization often hide the biggest risks
A cybersecurity case leaders rarely want to hear

The breach did not come through the firewall. It came through a smile. On paper, the organisation was doing everything right. Firewalls patched. Antivirus updated. External penetration tests passed. The board slept well, believing the risk lived “outside”: hackers in hoodies, foreign IP addresses, dark web threats. The reality was closer. Much closer.

The breach began with a familiar face. A long-serving systems administrator. Ten years in. Trusted and dependable. Always available when systems went down at night. The kind of person whose access requests were approved without hesitation. That is how most serious cyber incidents begin. Familiarity is not trust. It is exposure. In cybersecurity, the biggest threat is not malicious intent. It is unquestioned access.

The administrator did not set out to steal data. That is important. He was under pressure: personal debt, school fees, side hustles. He reused credentials across systems “to save time.” He disabled certain logs because “they slowed the system.” He shared admin passwords informally with a colleague during a crisis and never rotated them back. None of this triggered alarms. Why would it? He was one of “us.”

Then a phishing email landed in his inbox. Not dramatic. Not sophisticated. It referenced an internal system upgrade and used language copied from previous internal emails. He clicked. The attacker did not need to break in. They were invited. Within hours, lateral movement had begun. Privileged access meant the attacker could see everything: user directories, financial systems, and backups. The breach went undetected for weeks because the activity looked normal. It was executed using a legitimate account, at normal hours, by someone who had always been there. This is the part boards struggle with: cybersecurity fails socially before it fails technically.

Why familiar faces are the hardest risks to see

First, they blend into noise.
Security teams are trained to look for anomalies. Familiar users generate none. Their behaviour becomes the baseline. Second, leaders override controls for them. “He needs quick access.” “She has been here for years.” Temporary exceptions accumulate. Cyber risk compounds quietly through kindness. Third, reporting lines blur. When someone is both critical to operations and deeply trusted, no one wants to challenge them. Reviews become ceremonial. Access recertification becomes box-ticking. I have seen this pattern across banks, universities, and government agencies. The longest-serving staff often carry the widest, least-reviewed access. Not because they are bad, but because no one ever went back to redesign the system around growth.

The red flags that were missed

In this case, the signs were there. System logs were thinner than expected. Privileged access had never been reduced despite role changes. Backups were accessible from production accounts. Alerts were configured, but no one reviewed them regularly. Most tellingly, cybersecurity was still framed as an IT issue. The board asked about tools, not behaviours. Budgets were approved for software, not for access governance or insider threat modelling. The breach was discovered only after customer data appeared on a forum. By then, the question was no longer “how did this happen?” but “why didn’t we see it coming?”

The lesson for leaders

Cybersecurity maturity is measured by how you manage the people you trust most. If your most familiar faces have never had their access challenged, you are exposed. If your cybersecurity dashboards never discuss insider risk, you are blind. If your board is comfortable, your organisation is not safe. This is not an argument for suspicion. It is an argument for discipline. Trust should trigger stronger controls, not weaker ones.

What boards and executives must do differently

Reframe cybersecurity as an organisational risk, not a technical one.
Demand regular privileged access reviews, led by people independent of IT. Rotate credentials ruthlessly. Separate operational heroism from control design. Most importantly, ask one contrarian question at the board level: “If I wanted to steal from us quietly, who would I need to be?” The answer is rarely a stranger. Cybersecurity does not fail because leaders do not care. It fails because they care selectively. And familiarity is the most dangerous selection bias of all.
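The contrarian question above has a procedural counterpart: check, on a schedule, whose privileges have outlived their role. Here is a minimal sketch in Python of what such a recertification check could look like, assuming a simple export of account records; the field names, users, and dates are illustrative, not drawn from any particular identity system:

```python
from datetime import date

# Illustrative access records: when each user's role last changed,
# and when their privileges were last independently recertified.
accounts = [
    {"user": "sysadmin01", "privileges": ["domain_admin", "backup_read"],
     "role_changed": date(2023, 5, 1), "last_recertified": date(2022, 1, 10)},
    {"user": "clerk07", "privileges": ["teller_post"],
     "role_changed": date(2024, 2, 1), "last_recertified": date(2025, 6, 1)},
]

def overdue_recertifications(accounts):
    """Flag accounts whose access was never re-reviewed after a role change."""
    return [a["user"] for a in accounts if a["last_recertified"] < a["role_changed"]]

print(overdue_recertifications(accounts))  # ['sysadmin01']
```

The point is not the code but the cadence: if this list is ever non-empty, and no one outside IT is reading it, familiarity has already replaced control.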
The night the numbers stopped making sense
It happened quietly, and that is how most frauds begin in Uganda: not with alarms, but with silence. On a Tuesday evening in late August, long after the office had emptied and boda bodas thinned on the road, a junior accountant stayed behind at a mid-sized institution on the outskirts of Kampala. The reason sounded noble: month-end reconciliations. The reality was darker. By the time the security guard locked the gate, UGX 1.48 billion had already begun its journey out of the organization. No gun, no hacking, no drama. Just a series of approvals that looked routine, numbers that felt familiar, and people who trusted each other a little too much. This was not a failure of intelligence. It was a failure of discipline. I know this pattern well. I have walked into too many boardrooms where leaders say, “We trusted our people,” as if trust were a control. It is not. Trust is a sentiment. Controls are systems.

The internal conflict that opened the door

Every serious fraud starts with tension, not greed. Suspect 1 was competent, respected, and exhausted. A mid-career professional, sharp suit, soft-spoken, known for “saving the day” during audits. But for two years, his promotion had stalled. New managers arrived. Younger. Louder. Less experienced. Suspect 2 was different. Charismatic. A fixer. He knew everyone, from mobile money agents to suppliers to the cashier at the bank branch. He thrived in the informal spaces where rules bend. Their conflict was not personal. It was structural. The organisation had grown, but its controls had not. Roles overlapped. Segregation of duties existed on paper, not in practice. Management prized speed over process. “Just make it work” had become the unofficial strategy. That is where fraud feeds.

How the scheme was engineered

This was not sophisticated. That is what makes it dangerous. The scheme had three moving parts. First, dormant supplier accounts.
Over the years, dozens of suppliers had been onboarded for projects that no longer existed. Their profiles were never deactivated. Bank details sat quietly in the system, untouched but alive. Second, manual overrides. The finance system allowed senior staff to bypass certain approval thresholds “temporarily.” Temporary, in Uganda, often means forever. Third, mobile money as the bridge. Instead of moving funds directly to personal accounts, which would raise flags, money was routed through supplier accounts, then broken down into smaller mobile money transfers: UGX 20 million here, UGX 15 million there. Familiar amounts. Normal-looking flows. Suspect 1 processed the entries. Suspect 2 handled the distribution. A third, never formally identified, provided cover by delaying reconciliations. No single transaction looked suspicious. That is the genius of bad systems.

The money trail no one wanted to follow

When Summit Consulting Ltd was called in, the brief was simple: “Just help us confirm the numbers.” That sentence always worries me. We started where most internal teams avoid: timing. Not amounts. Timing. Why were supplier payments peaking on Fridays? Why did mobile money transfers spike between 6:30 pm and 8:00 pm? Why were reconciliations always postponed to “next week”? We mapped three months of transactions against staff attendance, system logs, and mobile money statements. Patterns emerged quickly. Supplier A received UGX 320 million over six weeks, for services last rendered three years ago. Supplier B’s bank account showed immediate cash withdrawals, followed by mobile money deposits to numbers registered under different names but sharing the same national ID photo. This is Uganda. Paper lies. Patterns don’t.

The red flags the auditor finally trusted

The turning point came from an auditor who almost ignored his instinct. He noticed something small: rounding. Payments consistently ended in double zeros. Real operational payments are messy.
They include odd figures, fuel adjustments, tax differences, and decimals. Fraud likes clean numbers. Then came the lifestyle lag. No flashy cars. No mansions. Instead, school fees paid in cash. Loans settled early. Quiet generosity. Fraudsters in Uganda often hide by being modest. Most importantly, there was fear. Staff avoided certain questions. Files went missing. Meetings were postponed. Silence thickened. Fear is the loudest red flag.

How internal controls were bypassed

Let us be clear. The controls did not fail. They were never real. Segregation of duties existed, but the same people covered for each other during “busy periods.” User access reviews were performed annually, long after damage was done. The board received dashboards, not discomfort. No one asked the most important question: “Show me how this could be abused.” I say this from experience. Boards prefer assurance. Fraud thrives on reassurance.

The moment the case cracked

The case broke not through technology, but through conversation. A junior staff member, quiet and observant, mentioned casually that Suspect 2 always knew when funds would hit certain accounts, even before system notifications went out. That is insider knowledge. That is coordination. We cross-checked call logs against transaction timestamps. The correlation was near-perfect. At that point, the numbers no longer argued. They confessed.

The cost, counted honestly

The total loss stood at UGX 1,482,600,000. Recoverable? Partially. Some funds had been converted to cash. Some invested informally. Some gone. But the real loss was trust between staff, management, and the board. And trust, once broken, costs more to rebuild than any balance sheet can show.

What leaders must confront

Fraud is rarely about bad people. It is about lazy systems, unclear accountability, and leaders who confuse activity with control. If your best defence is “we trust our people,” you are already exposed. If reconciliations can wait, so can fraud detection.
If no one is uncomfortable in your boardroom, someone is comfortable stealing. I have carried coffins in Munteme village. I have watched savings groups collapse because no one wanted to ask hard questions. The lesson is the same at every level: darkness is not evil. It is merely unattended. Fraud does not announce itself. It waits for permission. And permission is often silent.
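The rounding red flag from this case can be turned into a routine screen rather than a lucky instinct. Here is a minimal sketch in Python, assuming a list of payment amounts per supplier in UGX; the amounts and the 50 percent threshold are illustrative assumptions you would calibrate against your own payment history:

```python
def round_number_rate(amounts):
    """Share of payments ending in double zeros (divisible by 100)."""
    return sum(1 for a in amounts if a % 100 == 0) / len(amounts)

# Real operational payments are messy: fuel adjustments, tax differences, odd figures.
supplier_payments = [20_000_000, 15_000_000, 3_417_250, 20_000_000, 8_943_175, 15_000_000]

rate = round_number_rate(supplier_payments)
if rate > 0.5:  # illustrative threshold; tune it on historical, known-clean data
    print(f"Review this supplier: {rate:.0%} of payments end in double zeros")
```

Clean numbers are not proof. They are a reason to pull the file.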
Season’s greetings from all of us at Summit Consulting Ltd, Institute of Forensics and Twezimbe.com
As the year draws to a close, we pause with gratitude. Thank you for the trust you placed in us, the conversations that mattered, and the courage to confront hard decisions together. In a world that rewards noise, you chose clarity. In moments that demanded comfort, you chose discipline. That is leadership, and it has been a privilege to walk alongside you. Christmas reminds us that progress is built quietly, through values, relationships, and long-term thinking. The New Year calls us to sharpen our judgment, strengthen execution, and lead with purpose in a more demanding world. From all of us at Summit Consulting, we wish you a peaceful Christmas and a bold, successful New Year filled with sound decisions, resilient systems, and enduring impact. Please note that our office will close on 22nd December 2025 and reopen on 5th January 2026. The office will remain closed in between, with key roles working from home. We look forward to continuing the journey with you in the year ahead.

Warm regards,
All of us at Summit Consulting

2025 in review at Summit Consulting Ltd

2025 was a defining year for Summit Consulting. A year of building, loss, resilience, and quiet progress that will shape the next decade of our work. We reached a major milestone with Twezimbe.com. After two years of disciplined effort, the platform is now live. You can now register and automate your benevolent fund or run a crowdfunding campaign for your Church or Mosque project. Have a club? You can automate it and easily collect membership fees. What started as a conviction is now a working system, built to formal standards, designed for scale, and grounded in real community and institutional needs. It is a reminder that serious products are forged through patience, not publicity. We also took governance to a higher level with Mela GRC. The platform replaced our old tool, actionTEAM GRC, and moved governance, risk, and execution from fragmented tools to one integrated operating system.
Boards and executives can now see, decide, and act with clarity. This was not an upgrade. It was a reset of how governance should work in practice. Through IFIS, we strengthened the professional ecosystem. The 3rd Annual IFIS Conference was successfully delivered, convening practitioners, leaders, and regulators around real-world challenges in cybersecurity, investigations, and risk. The conversations were honest, practical, and forward-looking. 2025 was also marked by deep loss. We lost our Chief Operations Officer, Godfrey Ssenyonjo. A colleague, a professional, and a steady presence in our journey. His passing left a silence that cannot be filled. We continue to pray for his soul and for the young family he left behind. His contribution, character, and commitment remain part of Summit’s story. We end the year grateful, grounded, and clear-eyed. Stronger in systems. Deeper in purpose. More human in perspective. That is 2025 at Summit Consulting. Copyright Summit Consulting Ltd 2025. All rights reserved.
Turn screen time into skill time: Enroll your child in cybersecurity this holiday
I receive many cases, and some stand out. During a school holiday two years ago, a parent complained that their child was “always on the phone.” Games. YouTube. Endless scrolling. The usual frustration. The solution they tried was the usual one: confiscate the phone, restrict Wi-Fi, and threaten punishment. It worked for a day. Then the cycle returned. Instead, we tried something different. We asked: what if screen time is not the problem, but the waste of it is? The child was given a basic cybersecurity challenge. Nothing dramatic. Just simple tasks. How passwords are guessed. How fake links work. Why accounts get hacked. How to spot a scam message. The phone was no longer just for consumption. It became a tool.

Within days, behaviour changed. The child started explaining phishing to siblings. They began questioning suspicious links. They stopped oversharing online, not because they were warned, but because they understood risk. Screen time did not reduce. It became purposeful. This is the reality many parents miss. Children are already digital. The danger is not the screen. The danger is ignorance. In another case, a teenager helped their parent avoid a mobile money scam. The message looked genuine. Same tone. Same urgency. But the child spotted the red flags immediately. Wrong link. Poor domain. Pressure language. Money saved. Lesson learned.

Cybersecurity is not an IT subject. It is a life skill. At the Institute of Forensics & ICT Security (IFIS), we see it the way we see teaching children to cross the road: not because we expect accidents, but because risk exists. The digital world is no different. Children are online earlier than ever, interacting with strangers, sharing data, clicking links, and building digital footprints they will live with for years. This holiday, the question is not whether your child will be on a screen. They will be.
The question is whether they will leave the holiday with nothing but high scores and short videos, or with a skill that builds confidence and judgment. Cybersecurity training does something powerful for children. It trains them to think before they click. To question before they trust. To understand systems, not just use them. It quietly builds discipline, curiosity, and responsibility. When school resumes, the difference shows. These children are not just users of technology. They are safer, sharper, and more aware. They do not panic online. They pause. This is how you future-proof a child. Not by banning technology, but by teaching mastery over it. Do not stop children from crossing the road. Teach them how to cross the road safely. Turn screen time into skill time. Enroll your child in cybersecurity training this holiday season. Because the digital world is not waiting.

The breach you never notice until it owns you

Let me describe the most dangerous breach I have ever seen. No alarms went off. No systems went down. No ransom note appeared on the screen. Business continued as usual. That was the problem. In this case, a staff member clicked a harmless-looking link. Nothing happened. Or so it seemed. No files disappeared. No money moved. Everyone relaxed. What they did not know was that access had already been granted. For weeks, the attacker watched quietly. Emails. Approvals. Password resets. Internal conversations. Who reports to whom. How money is approved. Which controls are ignored when people are in a hurry. By the time the fraud happened, the breach was old news. The attacker did not break in loudly. They moved in politely. This is how most real breaches work. Not dramatic. Just silent.

In another case, a company kept blaming staff for “carelessness.” But the truth was that the breach was not caused by one click. Habits caused it. Shared passwords. No monitoring. Outdated systems. Trust without verification. The breach was not technical. It was cultural.
The most dangerous breaches are the ones you normalise. Small policy exceptions. One person with too much access. Alerts that are always ignored. Logs no one reads. Over time, attackers learn your organisation better than new employees do. And when they finally act, it looks like an inside job, because from the system’s point of view, it is. By the time money moves, reputations fall, or regulators arrive, the breach already owns you. It knows your weaknesses. It knows your delays. It knows your fear of disruption.

This is why prevention is not enough. You must assume compromise and design for detection. Early, constant, and relentless. Cybersecurity is not about stopping every attack. It is about spotting the quiet ones before they grow teeth. If you only react when systems fail, you are already late. The real question leaders should ask is simple: if someone got in today, how soon would we know? If the answer is “after damage,” then the breach is already ahead of you. The breach you never notice is the one that owns you.
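The question “how soon would we know?” only has a good answer if someone defines normal in advance. Here is a minimal sketch in Python of that idea, using per-user login-hour baselines built from historical logs; the data layout and the notion of “normal hours” are illustrative assumptions, not a product recommendation:

```python
def build_baseline(past_login_hours):
    """Hours of day (0-23) at which this user has historically logged in."""
    return set(past_login_hours)

def is_anomalous(baseline, hour):
    """A legitimate account acting outside its own routine is the quiet signal."""
    return hour not in baseline

history = [9, 10, 9, 11, 14, 10, 9, 15]  # one user's past login hours
baseline = build_baseline(history)

print(is_anomalous(baseline, 10))  # False: within the user's routine
print(is_anomalous(baseline, 2))   # True: a 2 a.m. login deserves a human look
```

A baseline this crude would have stayed silent in the breach described above, where the attacker used a legitimate account at normal hours. The discipline is what matters: decide what you would alert on before the compromise, not after, and make someone accountable for reading the alerts.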
Fight fraud. Build confidence. Get certified.
Some moments are unforgettable. I was sitting in a boardroom late in the evening, and the investigation was done. Everyone in that room knew fraud had happened, because the numbers told a clear story and the behaviour was obvious. However, when the report was projected on the screen, the room went quiet for the wrong reason. The word “draft” was all over the place. Pages were inconsistent. Conclusions appeared before evidence. Screenshots were pasted without explanation, and dates did not align. There was no clear trail from fact to finding. The suspect’s lawyer did not need to argue much; the report argued against itself. And just like that, the case collapsed at the disciplinary hearing. Not because the suspect was innocent, but because the investigator could not clearly connect the evidence, document it properly, or defend it with confidence. The suspect walked out. The organisation was left exposed, and good people paid the price.

That day taught me something uncomfortable. Fraud does not survive because it is smart; it survives because investigations are often poorly done and poorly presented. Fraud is not just about money. It destroys trust. Staff lose faith in leadership, boards start doubting controls, and institutions become afraid to act. In Uganda, I have seen organisations live with known fraudsters simply because leadership feared losing in court or before regulators. Most professionals sense fraud long before they can prove it. A process keeps being bypassed. Numbers are always “rounded.” One person controls too much. A lifestyle does not match income. But instinct is not evidence. And a weak report can be more dangerous than no report at all.

Confidence does not come from authority; it comes from competence. Knowing how to structure an investigation, how to write a report that flows logically, how to move from evidence to conclusion without gaps, and how to face a board or a judge and calmly say, “This is what happened.
This is how we know.” Certification changes how you show up. I have seen professionals transform. They stop guessing, stop over-explaining, and stop hiding behind jargon. Their reports become clear, their thinking becomes sharp, and their presence becomes steady. When things go wrong, organisations do not look for drama; they look for evidence. So this is not about adding letters after your name. It is about becoming dangerous to fraud, safe for the truth, and able to protect your organisation, your career, and your reputation. Fight fraud with skill. Build confidence through mastery. Register today for the January 2026 intake: https://forensicsinstitute.org/ifis-events-registration/ Get certified. Because in the real world, truth only survives when it is well-proven.
Transform your career with great investigation tools
Are you a victim of fraud? Are you working too hard and cannot see where the money goes? Are you an auditor or fraud investigator who would like to know how to collect watertight evidence, so that you become a darling of the prosecution and of your company's team? Find the solution with us today. Whether you are the subject of an investigation or simply a staff member at a company where fraud has taken place, you deserve peace of mind. The Institute of Forensics & ICT Security (IFIS) helps you get all the facts so that you know the truth. In business, acting on opinion, hearsay, and audit reports alone is not advisable. As forensic investigators, we leave no stone unturned. Every transaction matters. Do you want to know the truth? Our forensic experts will help you establish who did what, where, when, and how. We investigate IT, cyber, and general staff fraud. We will help you save on legal costs, protect your reputation, and fast-track your case. Never again will you beg staff to resign, even when you know they stole from you! Are some of your ex-employees threatening your reputation? Worry no more: we help dig up all the facts. We conduct fraud and forensic investigations in line with the laws of Uganda, and our investigators are available to support your case as expert witnesses. We handle over 20 fraud investigation cases in Uganda, Kenya, and South Africa annually.
The INVESTIGATE toolkit comes with:
- A customised report template to guide you in writing a good, court-admissible report
- Sample evidence collection templates, so that you collect evidence that complies with the Evidence Act and the laws of Uganda, or those of another country
- A fraud investigation methodology, so that you know the step-by-step process of a fraud investigation
- A suspect interview script template, so that you interview suspects and witnesses like a pro and obtain court-admissible evidence
- A statement-taking template/form, so that you can build a strong case and hold suspects or witnesses accountable as the investigation progresses
- And, above all, tools to help you become an expert investigator, including the CFE fraud examination manual notes

Want peace of mind? Contact us today!

To read more blogs, click here. To download the 2026 IFIS Training Calendar, click here. To be motivated, click here.
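To make the idea of "court-admissible evidence collection" concrete: before any digital file is analysed, investigators typically hash it and record who collected it and when, so its integrity can be demonstrated later. The Python sketch below is a minimal, hypothetical illustration of that habit; the function names and log fields are our own assumptions, not part of the INVESTIGATE toolkit itself:

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_entry(path: Path, collector: str) -> dict:
    """Build one chain-of-custody record for an evidence file."""
    return {
        "file": str(path),
        "sha256": sha256_of(path),
        "size_bytes": path.stat().st_size,
        "collected_by": collector,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
    }
```

Re-hashing the exhibit at the hearing and matching the digest recorded at collection is what lets an expert witness say, with confidence, that the evidence has not changed since it was seized.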
Digital frauds are killing businesses silently
We live in a digital world where the use of computers, the Internet, and other sophisticated technology is at its peak. Today, people pay bills, send and receive payments, and review bank transactions and balances over the Internet and on mobile phones. Such innovations are intended to make life easier and more comfortable. Yet in less than five years, crime and fraud have changed from an anomaly of teenage vandals into a multi-billion-dollar industry, and every year millions of people fall victim to such activities. There are many stories of cybercrime; the hacking of confidential American information, for instance, almost ruined diplomacy between many countries. Business experts argue that although the Internet is a cheap and convenient platform for business transactions, and most businesses have graduated from traditional means of operation, the risks involved are immense. According to Mr. Mustapha Mugisa, a certified fraud examiner and former president of the Uganda chapter of the Association of Certified Fraud Examiners, digital fraud is one of the biggest emerging risks in Uganda as people embrace technology and its advancement. Uganda is currently at the technology take-off stage, where every business is embracing technology and using the Internet to reach the international market. In telecoms, insurance, and banking, for example, we have seen the emergence of mobile money transfer and mobile commerce (mCommerce), whereby people use their mobile phones to pay bills such as water, electricity, and DSTV, or even buy goods at the supermarket. This makes the country ripe for digital fraud unless something is done now to protect users. Such automation between many companies increases the risk of fraud, because the more interfaces there are, the higher the risks involved.
Recently, almost all telecom companies operating in Uganda and offering mobile money services have entered into partnerships with other service providers to make money transactions easier and more convenient, but this also means the scale of digital fraud grows to a greater magnitude. In the past, it was difficult to steal a lot of information at once because manual documents are bulky; digital records carry no such constraint. In most cases, fraud is directly or indirectly aided by internal staff, owing to a lack of training and sensitisation on what digital fraud means and how to protect company information from social engineers. The use of technology advances daily, and there is no way to avoid the wave of change. Companies should, however, train their staff and implement clear security policies, clearly explaining the forms of digital fraud, staff responsibilities in averting them, and the repercussions for failure to comply, among others. Companies should also aim to build a security-aware culture throughout the business, as well as protect their major digital resources from hackers and social engineers. Implementing secure firewalls to protect computers connected to the Internet should be a must across the board. It is high time companies invested in digital security defence measures and ensured they are integrated and kept up to date to respond to the ever-changing and increasingly sophisticated technologies used in such crime. "Some people are connected to the Internet but have not implemented security measures to block hackers," he advises. With the introduction of wireless networks, many people now prefer to work at a nearby cyber cafe since the connection is free, not knowing that they are at greater risk of being hacked and of revealing confidential company information, such as business strategy, because the person controlling that network can easily access their profile.
According to a study done by PricewaterhouseCoopers' Forensic Technology Solutions team, companies should use risk assessment strategies to ensure their investment is targeted at the security controls that offer the greatest business benefit, have a clear understanding of their legal rights and responsibilities in relation to digital fraud, and aim to implement internationally recognised standards of best practice. Fraud in the digital world is not likely to end any time soon, because the more sophisticated technology becomes, the higher the risks involved. It is high time the government, through regulatory bodies like the Bank of Uganda, stepped in to provide assurances of the security of emerging technologies like mobile money services, thereby reducing identity theft, fraud, money laundering, and embezzlement from companies.

Take the lead in the fight against digital fraud

Digital fraud will continue to evolve. The only question is: will you evolve with it? Join the January 2026 Certified Fraud Examiner class and position yourself as the fraud expert Uganda needs in this critical digital era. Secure your discounted spot today. Become a Certified Fraud Examiner. Protect the future. Enroll now: limited seats available. Download our 2026 Training Calendar: https://forensicsinstitute.org/download/ifis-training-calendar-2026/
Why every child needs cybersecurity skills this holiday
Let me begin with a simple truth that many parents in Uganda still underestimate. Your child is already living in a world you barely understand. Their friendships are digital. Their assignments are digital. Their identity is digital. Their future wealth will be digital. And their biggest risks? Also digital. Pretending cybersecurity is an "adult issue" is the fastest way to raise a vulnerable child in a dangerous world. This holiday, while everyone else is preparing to teach children new dances, cooking lessons, or driving skills, I want us to think differently. Your child does not need more entertainment. They need digital self-defence. And the earlier they learn it, the safer their future.

The moment a harmless game became an investigation

Two weeks ago, during a cybersecurity training session, a mother approached me, worried about her son. Let us call him Peter: a slender boy, always wearing bright T-shirts, moving with the restless attention of someone raised on screens. He had clicked on a "free upgrade" link inside a mobile game. Within hours, the phone he had borrowed started sending strange notifications. Data was consumed at an alarming rate. Contacts synced to unknown servers. That is how easily a child can compromise an entire family. This is not a theoretical risk. Children are now the softest entry point for cybercriminals into homes, offices, and high-security environments. They trust too quickly. They click too fast. They share too openly. And they rarely tell their parents until it is too late. That is why cybersecurity is no longer an optional talent. It is a survival skill.

Why your child is the new target

Children are now the easiest way for criminals to enter your household. And they do it quietly. Children trust screens more than the people they know. They do not evaluate risk; they follow excitement. Their digital footprints outlive childhood. Criminals design attacks specifically for young users' behaviour.
Every online profile your child creates today builds a lifelong digital identity. Mistakes made at age 10 can haunt them at 25 in job applications. If you are with your child, ask them to Google their own name and see how much of their identity is already public. The shock alone begins the learning.

A case of a simple school assignment gone wrong

Subject B, a tall girl of around 11, usually wearing her school uniform even during the holidays, downloaded a "free PDF converter" for her homework. Hidden inside was spyware. It captured keystrokes, screen activity, and camera access. Her father did not believe it until we replayed a log showing screenshots of their living room taken without permission. Let us think differently. Children are not careless; they are untrained. Free tools often carry malicious add-ons. Children rarely read permission requests. Household devices become compromised silently. Attackers exploit educational habits: downloading, searching, sharing. Try it now: ask your child to explain what each app on their phone does, what permissions it uses, and why. The gaps will reveal their vulnerabilities.

The digital world your child walks into blindly

We must stop pretending children are safe because they are "just on TikTok" or "only chatting with classmates". Digital danger is not dramatic. It is quiet, gradual, and psychological. Social engineering now targets children first. Identity theft begins with small personal details. Cyberbullying leaves long-term behavioural scars. Strangers disguise themselves as peers effortlessly. What else to know? A child's first online trauma shapes their confidence for life. I got the shock of my life when I ran a simple online test at home; you can try it too. Run a mock phishing test at home: send your child a harmless but deceptive link, then discuss why they clicked or didn't click. This is the most practical lesson you can give.

Why the holiday is the perfect time to train them

Children learn best when routines are broken.
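The mock phishing exercise just described can be paired with a concrete checklist of red flags. The Python sketch below shows the kind of checks a parent and child can walk through together when judging a link; the allow-list and the heuristics are illustrative assumptions for this example, not a complete phishing detector:

```python
import re
from urllib.parse import urlparse

# Illustrative allow-list of domains the family actually uses (an assumption
# for this sketch; adapt it to your own banks, schools, and apps).
TRUSTED = {"google.com", "paypal.com", "mtn.co.ug"}

def looks_suspicious(url: str) -> list:
    """Return a list of red flags found in a URL (an empty list means none found)."""
    flags = []
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    if parts.scheme != "https":
        flags.append("not using https")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode (lookalike characters) in domain")
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        flags.append("raw IP address instead of a domain name")
    for brand in TRUSTED:
        # e.g. paypal.com.evil.net: the brand name appears, but it is not the real domain
        if brand in host and host != brand and not host.endswith("." + brand):
            flags.append("impersonates " + brand)
    return flags
```

For instance, `looks_suspicious("https://paypal.com.evil.net/login")` flags the lookalike domain, while the genuine `https://paypal.com` passes clean. Walking through each rule with a child turns "don't click strange links" from a vague warning into a skill.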
Holidays give you uninterrupted time to build digital discipline without school pressure. Holidays expose children to longer screen time. Boredom pushes them into riskier digital spaces. Peer challenges increase unsafe behaviours. Supervision reduces; curiosity increases. Let us think differently. Cybersecurity is not about fear; it is about empowerment. Spend one hour daily teaching them simple tasks: checking browser history, identifying fake URLs, and turning on two-factor authentication. Or you can enrol them in the Institute of Forensics & ICT Security holiday program. This small routine builds lifelong digital instincts.

The skills every child must have by the New Year

If you want your child to thrive in a future driven by AI, automation, and digital identity, these baseline skills are non-negotiable:
- How to identify suspicious links and apps.
- How to create strong, memorable passwords.
- How to configure privacy settings on every platform they use.
- How to recognise manipulation: emotional, social, and digital.

What else to know? These are the same skills required of cybersecurity professionals. Your child is building a career foundation without even realising it. Activity: ask your child to "teach you" how to protect your phone. When they teach, they learn twice.

We must face a hard truth. The school system is not built for the digital age. The curriculum is slow; the risks are fast. Teachers are overwhelmed; children are overexposed. Keep in mind that most schools lack cybersecurity programs, teachers often do not understand modern threats, ICT lessons focus on typing rather than self-defence, and parents wrongly assume schools provide safety. Parental leadership is the new frontline of cyber protection. I encourage parents to create a weekly "digital briefing" with their child: review their digital week the same way CEOs review operations.

A deeper look at Subject A and Subject B

Remember our two cases?
Subject A's (Peter's) curiosity led to a malware infiltration that compromised household contacts. Subject B's homework tool gave a stranger silent access to her home. The lesson? Cybersecurity failure rarely looks dangerous at first. It looks innocent. It looks harmless. It looks like "something small." Children do not understand digital consequences. Small digital mistakes have a real-world impact and leave lifelong pain. Attackers

