Artificial Intelligence (AI) & Cybercrime: Interrelated or Co-Assist?


Published on
December 19, 2025

The dual nature of AI

Artificial Intelligence (AI) has become a transformative force in financial services. Within the payments and remittance ecosystem, it enhances efficiency, strengthens fraud detection, and supports compliance. Yet, the same intelligence can also be misused. The technology that helps protect customers and institutions is equally capable of enabling new forms of crime.

This duality, where innovation and exploitation advance side by side, has captured the attention of regulators across the region. Both the Monetary Authority of Singapore (MAS) and Bank Negara Malaysia (BNM) now place strong emphasis on technology risk management and cybersecurity governance to ensure that the benefits of AI do not come at the cost of trust and integrity.

AI as an enabler of progress and protection

AI’s influence on the payments and remittance space has been overwhelmingly positive. It has transformed how financial institutions detect fraud, assess risk, and monitor transactions. Machine learning models analyse patterns that would otherwise go unnoticed, identifying irregularities before they escalate into losses.
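To make the idea concrete, the pattern analysis described above can be caricatured in a few lines. The sketch below flags transactions that deviate sharply from a customer's history using a simple z-score; the figures and threshold are illustrative only, and a production system would use far richer models and features.

```python
import statistics

def flag_outliers(amounts, threshold=3.0):
    """Flag transactions whose amount deviates strongly from the norm.

    Returns indices of amounts more than `threshold` standard
    deviations from the mean -- a crude stand-in for the pattern
    analysis a production ML model performs across many features.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mean) / stdev > threshold]

# A customer's usual remittances, plus one unusually large transfer.
history = [120, 135, 110, 140, 125, 130, 118, 5000]
print(flag_outliers(history, threshold=2.0))  # the 5000 transfer stands out
```

The value of the real-world equivalent is exactly this: irregularities are surfaced for review before they escalate into losses.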

Customer onboarding has also evolved. AI-driven systems verify identities, cross-check sanctions lists and authenticate documents with remarkable accuracy. Predictive analytics helps institutions anticipate suspicious activity, while automation improves the speed and consistency of compliance reviews.
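The sanctions cross-checking mentioned above relies heavily on fuzzy name matching, since criminals rarely spell a listed name exactly. The following is a minimal sketch using simple string similarity; the names and threshold are invented for illustration, and real screening engines use far more sophisticated phonetic and transliteration-aware matching.

```python
from difflib import SequenceMatcher

def screen_name(candidate, sanctions_list, threshold=0.85):
    """Return sanctioned names that closely match the candidate.

    String similarity here stands in for the fuzzy matching engines
    real screening systems employ; a near-match on "Jon A Smith"
    against "John A. Smith" should still surface for review.
    """
    candidate = candidate.lower().strip()
    hits = []
    for name in sanctions_list:
        score = SequenceMatcher(None, candidate, name.lower()).ratio()
        if score >= threshold:
            hits.append((name, round(score, 2)))
    return hits

watchlist = ["John A. Smith", "Maria Gonzales", "Li Wei Chen"]
print(screen_name("Jon A Smith", watchlist, threshold=0.8))
```

Tuning the threshold is the classic compliance trade-off: too low and analysts drown in false positives, too high and genuine matches slip through.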

Regulators have welcomed these developments but insist on strong safeguards. In Singapore, MAS requires all payment institutions to maintain cybersecurity resilience under the Notice on Cyber Hygiene and the Technology Risk Management (TRM) Guidelines. These frameworks mandate key controls, including multi-factor authentication, patch management, access security, and incident response, to ensure systems remain robust. The revised Notice PSN01 further reinforces these principles, linking data protection and operational resilience directly to compliance governance.
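Of the controls listed above, multi-factor authentication is the most visible to customers. Its second factor is often a time-based one-time password; the sketch below shows the core of that mechanism in the style of RFC 6238 (the TOTP standard), with an illustrative secret rather than a real provisioning key.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Generate a time-based one-time password (RFC 6238 style).

    A minimal sketch of the second factor behind many MFA schemes.
    The same shared secret and the same 30-second window yield the
    same code, which is how the server verifies what the customer's
    authenticator app displays.
    """
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                       # counter as big-endian 64-bit
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"shared-secret", at=1_700_000_000))
```

Because codes expire with each time step, a phished code is only useful to an attacker for seconds, which is precisely why regulators treat MFA as a baseline control.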

In Malaysia, BNM’s Risk Management in Technology (RMiT) Policy Document sets equally high expectations. It requires financial institutions and money service businesses to adopt layered cybersecurity controls, regular vulnerability testing, and clear board accountability for technology risks. BNM’s recent thematic review complements this by requiring institutions to demonstrate that they can withstand, adapt to, and recover from cyber disruptions in a timely manner.

Together, these frameworks create a regional standard where technological innovation is supported but always underpinned by security and governance.

AI as a weapon for cybercrime

As AI strengthens defences, it simultaneously equips criminals with more sophisticated tools. Cyber attackers now use AI to automate phishing campaigns, create deepfake impersonations, and produce convincing synthetic identities. Fraudulent transactions can be disguised through algorithms that mimic legitimate behaviour, making detection increasingly difficult.

Both MAS and BNM have highlighted this evolving risk. AI, when used maliciously, expands an organisation's attack surface, from data theft and system manipulation to social engineering exploits. MAS’s TRM Guidelines emphasise ongoing monitoring and validation of AI models, while BNM’s RMiT requires independent testing and governance over emerging technologies, including machine learning.
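The "ongoing monitoring and validation" that regulators call for often boils down to checking whether a model's behaviour today still resembles its behaviour at validation time. One common (and here deliberately simplified) approach is the population stability index, sketched below with invented score data; the ~0.25 alert level is a widely used rule of thumb, not a regulatory figure.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare a model's current score distribution with its baseline.

    A PSI above roughly 0.25 is a common illustrative signal that
    the model's inputs or outputs have drifted and revalidation is
    due; identical distributions score 0.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int((s - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.15, 0.3, 0.25, 0.2, 0.1, 0.35]   # scores at validation
today = [0.6, 0.7, 0.65, 0.8, 0.75, 0.7, 0.6, 0.85]       # scores have shifted
print(population_stability_index(baseline, today, bins=4))
```

Drift like this can mean the customer base has changed, or that adversaries have learned to game the model; either way, the check triggers human review rather than silent degradation.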

This constant escalation has turned AI into both a shield and a sword — advancing defences but also deepening the complexity of threats.

An interrelated and co-assist relationship

The connection between AI and cybercrime is not simply adversarial; it is interrelated. Every improvement in AI-based security triggers a corresponding evolution in AI-driven attack methods. Institutions within the payments and remittance ecosystem use AI to identify abnormal behaviour, while criminals employ AI to emulate legitimate customer profiles or test system vulnerabilities.

This dynamic creates a cycle of adaptation — an ongoing technological contest where innovation on one side drives innovation on the other. The outcome depends on governance: those who deploy AI responsibly and securely will stay ahead, while those who fail to manage its risks may find it turned against them.

Impact on the payments and remittance ecosystem

For payment service providers and remittance operators, this trend has several effects. First, cybersecurity and technology controls are no longer side issues — they are central to running the business and keeping a licence. Both MAS and BNM expect institutions to show clearly how they build cybersecurity into their work, from how they onboard customers to how their systems are designed and maintained.

Second, the human element remains indispensable. While AI automates decision making, human oversight ensures that judgment, ethics, and context are not lost. Regulators in both countries continue to stress that accountability cannot be delegated to an algorithm.

Finally, cross-border cooperation is becoming increasingly important. Many remittance providers operate in both Singapore and Malaysia, and maintaining consistent cybersecurity standards between jurisdictions is essential. A weak point in one market can compromise the entire payment chain.
