How to Win the Deepfake Arms Race

Omar Dapul, Founder and Chief Growth Officer - Deepfaic

Deepfake Fraud, AI Identity Risk and the Future of Trust in FinTech

In the latest episode of FinTech Focus TV, Toby welcomes back a long-time friend of the business, Omar Dapul, now CEO and Co-Founder of the Singapore-based deep-tech startup Deepfaic. While Omar appeared on the show in his CARDO AI days, this conversation marks the unveiling of a newly formed company tackling one of the most urgent issues facing global financial services: the rise of deepfake-enabled fraud, AI identity manipulation, and the erosion of digital trust.

Speaking from Singapore, Omar joins Toby to explore how deepfake technology has evolved, why financial institutions are increasingly exposed, and how Deepfaic’s mission is rapidly gaining traction across payments, cyber security, onboarding, KYC, and even talent acquisition. As FinTech recruitment specialists, Harrington Starr recognises just how critical trust, identity verification, and fraud prevention are to the future of the sector, and this episode provides a powerful insight into how the landscape is changing.

How Deepfaic Emerged: A New Frontier in AI Fraud Prevention and Digital Identity

Omar opens the conversation with the story behind Deepfaic, spelled DeepFAIC as a deliberate play on words, swapping “fake” for “FAIC” to reflect the company’s AI-driven approach. The startup is only nine months old, but the foundations were laid long before through Omar’s work at Black Mountain Systems, Allvue Systems, and CARDO AI.

After relocating to Singapore in 2022, Omar began consulting and investing while engaging with Singapore’s deep-tech ecosystem, particularly A*STAR, the national research agency comparable to DARPA in the US or GCHQ in the UK. Through A*STAR, Omar met the researcher who would become his co-founder. That researcher was developing advanced computer vision models, trained on more than seven million data samples, designed specifically to detect deepfake manipulation in images, video, and voice.

The topic resonated deeply with both founders, not only because of its commercial potential, but because of its real-world implications. As parents, they were witnessing the rise of deepfake misuse in schools, including revenge pornography and identity spoofing. They were also noticing a surge in deepfake-enabled fraud across the financial sector, accelerated by the generative AI boom.

These concerns, coupled with the strength of the technology being developed at A*STAR, inspired Omar and his co-founder to license the underlying models, build new capabilities around them, and launch Deepfaic with a mission to protect businesses from fraud, manipulation, and the growing inability to trust what, and who, we see online.

Why Deepfake Fraud Is Becoming a Major Threat to Financial Services and Global Payments

Throughout the episode, Toby and Omar emphasise that trust is the backbone of financial services, and deepfake technology strikes at the heart of that trust. The conversation makes clear that this is no longer a hypothetical concern: financial institutions are already losing millions to highly convincing AI-generated identity fraud.

One of the most alarming examples discussed is the now-famous Hong Kong deepfake fraud incident, in which a finance director was tricked into wiring US$25 million after attending what he believed was a legitimate video meeting with his senior leadership team. In reality, every person on that call (the CFO, the CEO, and other supposed executives) was a deepfake. The fraudster orchestrated the entire scenario using AI-generated video and voice.

A similar case occurred more recently in Malaysia, where a CFO authorised a US$500,000 transfer after receiving voice and video confirmation from what appeared to be a genuine counterpart. Only JP Morgan’s settlement checks prevented the loss.

These cases illustrate the core issue Omar describes as a “creeping degradation of the social contract online”. The growing sophistication of generative AI means that a video call, once considered one of the most secure forms of verification, can no longer be assumed trustworthy.

In an industry like FinTech, already grappling with cyber crime, AML, onboarding pressures, and digital transformation, deepfake fraud presents a new and urgent risk. Omar explains that Deepfaic’s aim is to become as essential as antivirus software, embedded into the everyday fabric of digital interactions to maintain trust as technology evolves.

eKYC, Customer Onboarding and Fraud Prevention: The Next Battleground for FinTech

When discussing the FinTech applications of Deepfaic’s technology, Omar highlights eKYC (electronic Know Your Customer) and customer onboarding as major areas of focus. These are critical processes in payments firms, challenger banks, trading platforms, and financial institutions that face escalating fraud attempts and rising regulatory scrutiny.

Omar breaks down the history of eKYC checks to demonstrate how each technological improvement eventually becomes vulnerable to misuse. Initially, customers uploaded a photo of their passport or driving licence. When fraudsters began bypassing this with printed images, companies introduced liveness checks, asking applicants to move their head or turn left and right. When AI learned to imitate that movement, firms added colour-flashing tests to detect blood vessel reflections, but advanced AI can now replicate that too.
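To make that escalation concrete, a layered onboarding flow of the kind Omar describes might look something like the sketch below. This is a minimal illustration only, not Deepfaic’s product or API; every function, name, and check here is a hypothetical placeholder.

```python
# Minimal sketch of a layered eKYC verification flow, illustrating how each
# generation of checks was bolted on after the previous one was bypassed.
# All functions here are hypothetical placeholders, not Deepfaic's API.
from typing import Callable, List, NamedTuple

class OnboardingSession(NamedTuple):
    id_document_image: bytes   # uploaded passport or driving licence photo
    selfie_video: bytes        # short clip captured during onboarding

def document_match_check(s: OnboardingSession) -> bool:
    """Gen 1: compare the selfie to the ID photo. Defeated by printed images."""
    return True  # placeholder

def motion_liveness_check(s: OnboardingSession) -> bool:
    """Gen 2: did the applicant turn their head on request? Now imitated by AI."""
    return True  # placeholder

def colour_flash_check(s: OnboardingSession) -> bool:
    """Gen 3: flash colours and look for plausible reflections. Also spoofable."""
    return True  # placeholder

def deepfake_detection_check(s: OnboardingSession) -> bool:
    """Gen 4: run a trained detector over the whole session for synthetic media."""
    return True  # placeholder

CHECKS: List[Callable[[OnboardingSession], bool]] = [
    document_match_check,
    motion_liveness_check,
    colour_flash_check,
    deepfake_detection_check,
]

def verify_applicant(session: OnboardingSession) -> bool:
    # An applicant must clear every layer; each layer exists because the one
    # before it was eventually defeated, which is the "arms race" in miniature.
    return all(check(session) for check in CHECKS)
```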

The industry is caught in what Omar calls an “arms race”, where every security enhancement is soon met with a new AI workaround.

Deepfaic’s technology sits as a protective layer that detects when AI manipulation is being used during onboarding, during follow-up verification calls, and during high-value transaction checks. For FinTech firms, particularly those processing hundreds of thousands of applications per day, this level of protection is becoming increasingly vital.

Why Call Centres, BPOs, and Voice Authentication Are at Risk

Another major concern Omar identifies is the rapid adoption of AI in call centres and outsourced customer service operations. While AI agents, transcription tools, and automation improve efficiency, they also introduce a new vulnerability: AI-assisted voice impersonation.

A human agent might instinctively detect a fake voice, but AI tools listening to AI voices do not necessarily recognise manipulation. As more customer service processes become digitised, the window for fraud through voice alone widens significantly.

Omar explains that Deepfaic is engaging with BPO providers and enterprise call centres to address exactly this risk. The technology doesn’t just analyse video; it can also detect audio-based manipulation in real time, alerting agents when deepfake voice technology is being used to bypass standard authentication checks.
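As a rough illustration of how such real-time screening could sit inside a call flow, the sketch below scores the caller’s audio chunk by chunk and alerts the agent once a synthetic-voice score crosses a threshold. The detector, threshold, and callback are hypothetical placeholders, not Deepfaic’s actual integration.

```python
# Hypothetical sketch of real-time voice-deepfake screening on a live call.
# The detector, threshold, and agent callback are placeholders, not a real API.
from typing import Callable, Iterable

ALERT_THRESHOLD = 0.8  # synthetic-voice probability above which the agent is warned

def score_voice_chunk(pcm_chunk: bytes) -> float:
    """Placeholder for a voice-clone detector: 0.0 = sounds genuine, 1.0 = synthetic."""
    return 0.0  # placeholder

def monitor_call_audio(chunks: Iterable[bytes],
                       notify_agent: Callable[[str], None]) -> None:
    # Score the caller's audio as the call proceeds, so a warning can reach the
    # human or AI agent before voice-based authentication completes.
    for index, chunk in enumerate(chunks):
        score = score_voice_chunk(chunk)
        if score >= ALERT_THRESHOLD:
            notify_agent(
                f"Possible synthetic voice on this call "
                f"(chunk {index}, score {score:.2f}); apply step-up verification."
            )
            break  # one alert is enough; escalate to stronger checks
```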

In a global FinTech market defined by digital payments, 24/7 support, and remote customer interactions, the ability to authenticate voice integrity in real time is rapidly becoming a competitive necessity.

Deepfake Risks in Recruitment: A Growing Threat to Talent Acquisition and the FinTech Workforce

One of the most fascinating and unexpected parts of the conversation is the discussion about deepfake misuse in hiring, especially within global technology and FinTech recruitment. As a FinTech recruitment business, Harrington Starr recognises that hiring the right people is one of the most critical drivers of organisational success, and also one of the most vulnerable areas to fraud.

Omar explains that as remote work and digital hiring processes expand, people become more willing to “bend the rules” online than they would in person. Candidates exaggerate CVs, outsource interviews to more qualified friends, or take advantage of camera-off technical tests. In some cases, individuals accept multiple remote jobs simultaneously, a phenomenon known as “job farming”, and use AI tools to participate in interviews on behalf of others.

To combat this, Deepfaic has created Trudy and Truman, meeting bots that run quietly in the background of video interviews. During the FinTech Focus TV recording, Trudy was active on the call, ensuring neither participant was using digital manipulation such as video overlays, facial swaps or AI voice alteration.

If manipulation is detected, Trudy can send a warning message or terminate the call entirely.
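That warn-or-terminate behaviour can be expressed as a simple piece of decision logic, sketched below under assumed names. The meeting interface, detector output, and thresholds are hypothetical, not Deepfaic’s actual bot.

```python
# Illustrative sketch of a meeting bot's warn-or-terminate decision logic.
# The Meeting interface, detector output, and thresholds are hypothetical.
from dataclasses import dataclass

WARN_LEVEL = 0.6       # confidence at which the bot posts a warning
TERMINATE_LEVEL = 0.9  # confidence at which the bot ends the call

@dataclass
class DetectionResult:
    kind: str          # e.g. "video overlay", "face swap", "AI voice alteration"
    confidence: float  # 0.0 (no manipulation) to 1.0 (certain manipulation)

class Meeting:
    def post_message(self, text: str) -> None:
        print(f"[bot] {text}")  # placeholder for the platform's chat API

    def end(self) -> None:
        print("[bot] call terminated")  # placeholder for ending the call

def handle_detection(meeting: Meeting, result: DetectionResult) -> None:
    # Below the warning threshold the bot stays silent; between the thresholds
    # it warns participants; above the terminate threshold it ends the call.
    if result.confidence >= TERMINATE_LEVEL:
        meeting.post_message(f"{result.kind} detected with high confidence.")
        meeting.end()
    elif result.confidence >= WARN_LEVEL:
        meeting.post_message(f"Warning: possible {result.kind} detected on this call.")
```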

This capability is powerful for FinTech employers who rely heavily on remote interviews to secure high-level engineering, product, cyber security, quantitative finance, and data talent. The financial and operational risk of hiring the wrong candidate, especially into highly technical or regulated roles, can be significant. Deepfaic’s tools allow firms to validate identity, reduce fraudulent behaviour, and maintain trust in talent acquisition processes.

For recruitment firms like Harrington Starr, this technology reinforces a core principle: businesses grow when they hire and retain the right people. Deepfake tools now threaten the integrity of that process, and Deepfaic is positioning itself at the forefront of ensuring fair, safe, and authentic recruitment.

The Founder Journey: Building a Deep Tech Startup in Singapore

The conversation shifts into Omar’s reflections on becoming a founder. Despite his vast experience building teams and scaling businesses at Black Mountain and Cardo AI, this is his first time starting a company at true “day zero”. He describes the journey as bloody hard, but incredibly rewarding, emphasising the importance of having a co-founder to share responsibility and challenge decisions.

In early-stage startups, Omar notes, there is “no one to hide behind”. Every win and setback is personal. But the validation Deepfaic is receiving from investors, potential clients, and industry stakeholders reinforces his confidence that they are solving a meaningful global problem.

He also praises Singapore as an exceptional launchpad for deep-tech ventures, citing its strong government support, innovation ecosystem, access to infrastructure, and proximity to key APAC financial markets. Although it may never rival Silicon Valley in scale, Singapore is increasingly becoming a hub for AI, data science, and cyber security, all of which align perfectly with Deepfaic’s mission.

Trust, Purpose and a Safer Digital Future for FinTech

By the end of the episode, Toby highlights something that resonates deeply with FinTech leaders, hiring managers, and technologists: purpose matters. Deepfaic’s mission is not just commercial. It’s about restoring trust in digital interactions at a time when AI makes it increasingly easy to deceive.

Financial crime, identity theft, talent manipulation, and online misinformation are accelerating. Deepfaic is building the tools that allow businesses, from global payment firms to recruitment agencies, to continue operating securely, transparently, and confidently.

Omar’s closing message reinforces this purpose: AI can just as easily erode trust as enhance it. Deepfaic is fighting to ensure that identity, authenticity, and integrity remain core pillars of the digital economy.

How to Learn More About Deepfaic

Omar encourages viewers to explore the website deepfaic.com, where firms can request demos, test the meeting bots, submit samples, and experiment with the technology. Deepfaic offers free trials and direct engagement with the founding team to help companies understand how deepfake detection fits into their digital risk strategy.

As Toby reflects, this technology is not just timely, it is becoming essential. Whether preventing financial fraud, securing onboarding processes, protecting customer service operations, or ensuring genuine talent acquisition, Deepfaic’s work is shaping the future of trust in FinTech.

 
