Here is our pick of the 3 most important XBRL news stories in this still-slow summertime week. 1. Digital Regulatory Reporting Rules. Have you been involved in a regulatory reporting project for OTC (ISDA) products? In my career I took responsibility for implementing EMIR reporting for a well-known UK CCP, so I have first […]
Healthcare cost and availability, and business interruption cover litigation: strange bedfellows with a common thread, an overarching need within economies for resolution of each multi-hundred-billion dollar/euro/rupee/pound issue. Ironies abound: the enormous need for healthcare due to COVID-19 should prompt revenue growth for providers, and the closure of businesses under government COVID mandates should prompt a rescuing […]
The post Curation of news pieces suggests no easy Cure for COVID Costs appeared first on Daily Fintech.
I landed in the UK about 14 years ago. I remember my initial months in the UK, when I struggled to get a credit card. This was because the previous tenant at my address had unpaid loans. As a result, credit agencies had somehow linked my address to credit defaults.
It took me some time to understand why my requests for a post-paid mobile, a decent bank account and a credit card were all rejected. It took me longer to turn around my credit score and build a decent credit file.
I wrote a letter to Barclays every month explaining the situation, until one fine day they rang my desk phone at work to tell me that my credit card had been approved. It was ironic, because I was a Barclays employee at the time. I started on the lowest rungs of the credit ladder through no fault of my own. Times (should) have changed.
Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks and a whole suite of methodologies to make clever use of customer data have been on the rise. Many of these techniques have been around for several decades. However, only in recent times have they become more mainstream.
The social media boom has created data at such an unprecedented scale and pace that algorithms have been able to identify patterns and get better at prediction. Without the vast amount of data we create daily, machines lack the intelligence to serve us. And machines rely on high-quality data to produce accurate results. As they say: garbage in, garbage out.
Several Fintechs these days are exploring ways to use AI to provide more contextual, relevant and quick services to consumers. Gone are the days when AI was considered emerging/deep tech. A strong data intelligence capability is nowadays a default feature of every company that pitches to VCs.
As AI investments in Fintech hit record highs, it’s time the regulators started thinking about the on-the-ground challenges of using AI for financial services. The UK’s FCA has partnered with the Alan Turing Institute to study explainability and transparency in the use of AI.
Three key scenarios come up when I think about what could go wrong in the marriage of humans and machines in financial services.
- First, when a customer wants a service from a Bank (say a loan), and a complex AI algorithm comes back with a “NO”, what happens?
- Will the bank need to explain to the customer why their loan application was not approved?
- Will the customer services person understand the algorithm enough to explain the rationale for the decision to the customer?
- What should banks do to train their staff to work with machines?
- If a machine’s decision in a critical scenario needs to be challenged, what is the exception process that the staff needs to use?
- How will such an exception process be reported to the regulators to prevent malpractice by bank staff?
- Second, as AI depends massively on data, what happens if the data used to train the machines is bad? By bad, I mean biased. Data used to train machines should be not only accurate, but also representative of real data. If a machine trained on bad data makes wrong decisions, who will be held accountable?
- Third, checks and controls need to be in place to ensure that regulators understand the complex algorithms used by banks. This understanding is essential to ensure technology doesn’t create systemic risks.
From a consumer’s perspective, the explainability of an algorithm deciding their creditworthiness is critical. For example, some banks are looking at simplifying the AI models used to make lending decisions. This would certainly help bank staff understand, and help consumers appreciate, decisions made by machines.
Some banks are also looking at reverse-engineering explainability when the AI algorithm is complex. The FCA and the Bank of England have tried this approach too: a complex model using several decision trees to identify high-risk mortgages had to be explained, and the solution was to create an explainability algorithm to present the decisions of the black-box machine.
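One common way to build such an explainability layer is a surrogate model: fit a simple, interpretable tree to mimic the predictions of the opaque model. The sketch below is illustrative only, using synthetic data and assumed model choices (a gradient-boosted ensemble as the "black box"), not the FCA's or any bank's actual approach:

```python
# Sketch of post-hoc explainability via a surrogate model.
# The data is synthetic and the model choices are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for loan application features
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The opaque model whose decisions need explaining
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train a shallow surrogate on the black box's *predictions*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple tree agrees with the black box
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")

# The surrogate's rules are human-readable, so staff can walk a customer through them
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

The fidelity score matters: a surrogate that agrees with the black box only some of the time gives explanations that may not reflect the actual decision.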
The pace at which startups are creating new solutions makes it harder for service providers to keep up. In recent times I have come across two firms who help banks with credit decisions. The first firm collected thousands of data points about the consumer requesting a loan.
One of those data points was the fonts installed on the borrower’s laptop. If fonts associated with gambling websites were found, the borrower’s creditworthiness took a hit: the installed fonts indicated gambling habits, which could lead to poor money management.
The second firm had a chatbot that held a conversation with the borrower and, using psychometric analysis, came up with a score indicating the customer’s “intention to repay”. This could be a big opportunity for banks in emerging markets.
Despite the opportunities at hand, the algorithms of both these firms are black boxes. Maybe it’s time regulators ruled that technology making critical financial decisions needs to follow some rules of simplicity or transparency. Having built a business on complex financial products, banks could now be creating complex machines that make unexplainable decisions. Can we keep the machines in check?
I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.
Subscribe by email to join Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).
The post FinServ in the age of AI – Can the FCA keep the machines under check? appeared first on Daily Fintech.
Too many TLAs (Three Letter Acronyms), I agree. Earlier this week the Financial Conduct Authority (FCA) published the results of a pilot programme called Digital Regulatory Reporting. It was an exploratory effort to understand the feasibility of using Distributed Ledger Technology (DLT) and Natural Language Processing (NLP) to automate regulatory reporting at scale.
Let me describe the regulatory reporting process that banks and regulators go through. That will help explain the challenges (and hence the opportunities) with regulatory reporting.
- Generally, on a pre-agreed date, the regulators release templates of the reports that banks need to provide them.
- Banks have an army of analysts going through these templates, documenting the data items required in the reports, and then mapping them to internal data systems.
- These analysts also work out how the bank’s internal data can be transformed to arrive at the report as the end result.
- These reports are then developed by the technology teams, and then submitted to the regulators after stringent testing of the infrastructure and the numbers.
- Every time the regulators change the structure of, or the data required on, the report, the analysis and build process have to be repeated.
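The mapping and transformation work the analysts do can be sketched as a lookup from each field in the regulator's template to internal data sources plus a transformation. Every field, system name and figure below is hypothetical, purely to illustrate the shape of the work:

```python
# Hypothetical sketch of the analyst's mapping work: each field in the
# regulator's template maps to internal data source(s) and a transformation.
# All field names, system names and numbers are invented for illustration.

internal_data = {
    "core_banking.gross_loans": 1_250_000_000,
    "core_banking.provisions": 48_000_000,
    "treasury.tier1_capital": 310_000_000,
}

# Regulator template field -> (internal sources, transformation)
template_mapping = {
    "NetLoans": (["core_banking.gross_loans", "core_banking.provisions"],
                 lambda gross, prov: gross - prov),
    "Tier1Capital": (["treasury.tier1_capital"], lambda t1: t1),
}

def build_report(mapping, data):
    """Apply each template field's transformation to its mapped internal values."""
    return {field: transform(*(data[src] for src in sources))
            for field, (sources, transform) in mapping.items()}

report = build_report(template_mapping, internal_data)
print(report)  # {'NetLoans': 1202000000, 'Tier1Capital': 310000000}
```

When the regulator changes a template, every affected entry in this mapping, and the pipeline built on it, has to be revisited, which is exactly the repeated cost described above.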
I have greatly simplified the process; even so, it helps to identify areas where things could go wrong.
- Regulatory reporting requirements are often quite generic and high level. Interpreting them and breaking them down into terms that a bank’s internal data experts and IT teams understand is quite a challenge, and often error prone.
- Even if the interpretation is right, data quality in banks is so poor that analysts and data experts struggle to identify the right internal data.
- Banks’ systems and processes carry so much legacy that even the smallest change to these reports, once developed, takes a long time.
- Regulatory projects invariably have time and budget constraints, which means they are built with one purpose: getting the reports out of the door. Functional scalability of the regulatory reporting system is not a priority for decision makers in banks. So when a new yet related reporting requirement comes in from the regulators, banks end up redoing the entire process.
- Manual involvement introduces errors, and firms often incur punitive regulatory fines if they get their reports wrong.
- From a regulator’s perspective, it is hard to make sure that the reports coming in from different banks have the right data. There is no inter-bank verification of the data quality of the reports.
Now, to the exciting bits. The FCA conducted a pilot called “Digital Regulatory Reporting” with six banks: Barclays, Credit Suisse, Lloyds, Nationwide, NatWest and Santander. The pilot involved the following:
- Developing a prototype of a machine executable reporting system – this would mitigate risks of manual involvement.
- A standardised set of financial data definitions across all banks, to ensure consistency and enable automation.
- Creating machine-executable regulation – a Domain Specific Language (DSL) was trialled to achieve this. This functionality was aimed at rewriting regulatory texts into stripped-down, structured, machine-readable formats. A small subset of the regulatory text was also converted into executable code based on this framework.
- Using NLP to parse through regulatory texts and automatically populate databases that regulatory reports run on.
If these streams of effort had been completely successful, we would have a world where regulators create regulations using DSL standards. These would be automatically converted into machine-executable code and executed as smart contracts on a blockchain. NLP algorithms would feed data into the reporting database, which would hold the data ready by the time the smart contracts executed. On execution, the reports would be sent from the banks to the regulators in a standardized format.
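To make "machine-executable regulation" concrete, the idea is that a rule lives as structured data rather than prose, so a machine can apply it without a human interpretation step. The toy sketch below is my own illustration of that concept; the rule, its thresholds and field names are all invented, and the pilot's actual DSL would be far richer:

```python
# Toy illustration of machine-executable regulation: a reporting rule
# expressed as structured data (a stand-in for a DSL) and executed directly.
# Rule id, field name and threshold are invented for illustration.

rule = {
    "id": "LIQ-01",
    "description": "Report if liquidity coverage ratio falls below 100%",
    "field": "lcr",
    "operator": "<",
    "threshold": 1.00,
}

OPERATORS = {"<": lambda a, b: a < b, ">=": lambda a, b: a >= b}

def evaluate(rule, bank_data):
    """Execute one structured rule against a bank's data point."""
    value = bank_data[rule["field"]]
    triggered = OPERATORS[rule["operator"]](value, rule["threshold"])
    return {"rule": rule["id"], "value": value, "report_required": triggered}

print(evaluate(rule, {"lcr": 0.93}))   # report_required: True
print(evaluate(rule, {"lcr": 1.12}))   # report_required: False
```

Because the rule is data, a regulator could change the threshold or operator centrally and every bank's reporting would update without a rebuild, which is the scalability promise of the approach.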
This would have meant a few billion pounds in savings for UK banks, who on average spend £5 Billion per year on regulatory programmes. However, like most pilots, only part of the programme could be termed successful. Banks didn’t have the resources to complete all the above aspects of the pilot. They identified the following drawbacks:
- Creating regulatory text in a DSL, so that machines can automatically create and execute code, may not be scalable enough for the regulators. Also, if the generated code is defective, it would be hard to hold someone accountable for error-prone reports.
- NLP required a lot of human oversight to reach the desired level of accuracy in understanding regulatory texts. So human intervention is still required to convert them into code.
- Standardising data elements specific to a single regulator was not a viable option, and the costs involved in doing so are prohibitive.
- While the pilot had quite a few positive outcomes and learnings, moving from pilot to production would be expensive.
The pilot demonstrated that:
- A system where regulators could just change some parameters at their end and re-purpose a report would enable automated regulatory reporting.
- Centralizing processes that banks currently carry out locally creates significant efficiencies.
- The time and cost of regulatory reporting change could be dramatically reduced.
- Using DLT could reduce the amount of data being transferred across parties and create a secure infrastructure.
- When data is standardised into machine-readable formats, it removes ambiguity and the need for human interpretation, effectively improving the quality of the data and the reports.
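The standardised data definitions point above can be sketched as a shared schema that every bank validates its report against before submission, so types and units are unambiguous across institutions. The schema contents below are hypothetical and deliberately minimal:

```python
# Sketch of a shared, machine-readable data definition: every bank validates
# its report against the same schema before submission, removing ambiguity
# about field names, types and units. The schema here is hypothetical.

SCHEMA = {
    "reporting_date": str,     # ISO 8601, e.g. "2019-10-31"
    "total_assets_gbp": int,   # whole pounds, so no units ambiguity
    "lcr": float,              # a ratio, not a percentage
}

def validate(report, schema):
    """Return a list of validation errors (empty means the report conforms)."""
    errors = [f"missing field: {f}" for f in schema if f not in report]
    errors += [f"{f}: expected {t.__name__}, got {type(report[f]).__name__}"
               for f, t in schema.items()
               if f in report and not isinstance(report[f], t)]
    return errors

good = {"reporting_date": "2019-10-31", "total_assets_gbp": 5_000_000_000, "lcr": 1.1}
bad = {"reporting_date": "2019-10-31", "total_assets_gbp": "5bn"}

print(validate(good, SCHEMA))  # [] -> conforms
print(validate(bad, SCHEMA))   # missing field and a type mismatch
```

In a real system the shared definitions would be far richer (formats, ranges, reference data), but even this shape shows why a common schema removes the per-bank interpretation step.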
In a recent article on Robo-Regulators, I highlighted the possibilities of AI taking over the job of a regulator. That was perhaps more radical blue-sky thinking. However, using NLP and DLT to create automated regulatory reporting definitely sounds achievable. Will banks and the regulators be willing to take the next steps in moving to such a system? Watch this space.
Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email
Regtech is a £50 Billion per year opportunity, and that is just in the UK. That figure reflects the hundreds of millions of pages of regulatory text that firms have to deal with to be compliant. It is critical that firms equip themselves with technology solutions that will help them navigate the complex world of regulation.
Please note that while Regtech covers regulations across industries, I am taking the liberty of using this term loosely to refer to FS based Regtech use cases.
During my time at PwC, I was involved in evaluating AI products for their Legal and Regulatory offerings. We were looking into IBM Watson, and had some interesting conversations on sending Watson to school to learn Legal and Regulatory language (in English). The AI engine (deep learning, NLP) would then be able to provide guidelines to firms in plain English on what was needed for regulatory compliance.
It has been almost five years since then, and we have seen various developments across the globe. Regtech has never been more relevant. The US and Europe have more than 200 Regtech firms between them, as these two regions are clearly seen as the pioneers of financial services regulation.
“The FCA is the most innovative regulator in the world in terms of using new technologies, and the other regulators look up to them.”
In my opinion, Europe, and in particular the UK’s FCA, are world leaders in working on innovative ways of achieving regulatory compliance. Be it payments, open banking or cryptocurrencies, they have taken a collaborative approach to nurturing the right firms. 37% of Regtech investments across the globe happen in the UK.
But it’s the happenings in Asia that I find more interesting from a Regtech standpoint.
Fintech in India has seen massive growth, with digital payments well backed by policy and technology infrastructure. The rise of Paytm, UPI and, more recently, Google Tez has helped bring the total transaction volume of digital payments to $50 Billion. But with growth comes greed, and regulation has to kick in. There were tens of P2P lending firms in India until the Reserve Bank of India (RBI) launched its regulatory framework for P2P lending in Q4 2017. There are now only a handful of well-capitalised P2P lending platforms.
There is a lot of work to be done around automation of transaction reporting. For example, the microfinance market in India is still largely cash based and reporting is manual. Startups are trying to disrupt this space with cloud-enabled smartphone apps that allow for real-time reporting of transactions while an agent is on the ground collecting money from a farmer. This delivers massive gains in operational efficiency and curbs corruption, but more importantly it makes transaction reporting so much easier.
I see India as a market, where Regtechs can help the RBI develop a regulatory framework across Financial Services.
China’s P2P lending market is worth about $200 Billion. Recent frauds like Ezubao, in which about a million investors lost $9 Billion, indicate that the market needs strong regulatory controls. The scam led to a collapse of the P2P lending market in China. A regulatory framework that brings credible players into this space, well supported by a bunch of top Regtechs, would improve the status quo.
Singapore is, without a doubt, the destination for Regtechs in Asia. After the US and the UK, Singapore attracts the most investment in Regtech firms. The support that the Monetary Authority of Singapore (MAS) provides to budding startups is the real differentiator that Singapore has over Hong Kong as a Fintech hub.
MAS has recently tied up with the CFTC (Commodity Futures Trading Commission) in the US to share the findings of its sandbox initiative. Such relationships between regulators help keep regulatory frameworks aligned across jurisdictions. So when a Fintech looks to expand beyond borders, it doesn’t have to rethink operational, strategic or technology aspects for the new jurisdiction, and can focus on what matters: the consumers.
As Fintech evolves over the next few years, there are several ways in which Banks, Insurance providers, asset managers and regulators can work in partnership with Regtech firms. In some areas, these firms will piggyback off what the incumbents have or haven’t done.
There is a rule of thumb in the top consulting firms: build propositions where there is fire. In other words, if a client has a major issue that could cost them money and/or reputation, come up with a solution for it. This is particularly true of Regtech firms, which focus on areas with a serious lack of control and governance.
However, in many parts of the world, there is a genuine opportunity for Regtechs to go a step further and define the controls in collaboration with the regulators, and perhaps ahead of the regulators.
Arunkumar Krishnakumar is a VC investor focusing on Inclusion, a writer and a speaker.