Flood risk AI is now a reliable tool- is there a desire to put it to work?

Flood insurance has been a wallflower at the coverage dance- an eager participant but unable to find a suitable partner. Innovation efforts have found suitable risk prediction partners for carriers- FloodMapp, HazardHub, and Previsco among others- but is the flood insurance market ready? Politics, inertia, customer preferences and regulation might keep the music […]

The post Flood risk AI is now a reliable tool- is there a desire to put it to work? appeared first on Daily Fintech.

FinServ in the age of AI – Can the FCA keep the machines under check?


Image Source

I landed in the UK about 14 years ago. I remember my initial months in the UK, when I struggled to get a credit card. This was because the previous tenant at my address had unpaid loans. As a result, credit agencies had somehow linked my address to credit defaults.

It took me some time to understand why my requests for a postpaid mobile, a decent bank account and a credit card were all rejected. It took me longer to turn my credit score around and build a decent credit file.

I wrote a letter to Barclays every month explaining the situation, until one fine day they rang my desk phone at work to tell me that my credit card had been approved. It was ironic because I was a Barclays employee at the time. I had started on the lowest rungs of the credit ladder through no fault of my own. Times (should) have changed.

Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks and a whole suite of methodologies to make clever use of customer data have been on the rise. Many of these techniques have been around for several decades. However, only in recent times have they become more mainstream.

The social media boom has created data at an unprecedented scale and pace, enabling algorithms to identify patterns and get better at prediction. Without the vast amount of data we create on a daily basis, machines lack the intelligence to serve us. However, machines rely on high-quality data to produce accurate results. As they say: garbage in, garbage out.

Several Fintechs these days are exploring ways to use AI to provide more contextual, relevant and quick services to consumers. Gone are the days when AI was considered emerging/deep tech. A strong data intelligence capability is nowadays a default feature of every company that pitches to VCs.

As AI investments in Fintech hit record highs, it's time the regulators started thinking about the on-the-ground challenges of using AI for financial services. The UK's FCA has partnered with the Alan Turing Institute to study explainability and transparency in the use of AI.

Three key scenarios come up when I think about what could go wrong in the marriage of humans and machines in financial services.

  • First, when a customer wants a service from a Bank (say a loan), and a complex AI algorithm comes back with a “NO”, what happens?
    • Will the bank need to explain to the customer why their loan application was not approved?
    • Will the customer services person understand the algorithm enough to explain the rationale for the decision to the customer?
    • What should banks do to train their staff to work with machines?
    • If a machine’s decision in a critical scenario needs to be challenged, what is the exception process that the staff needs to use?
    • How will such exception process be reported to the regulators to avoid malpractice from banks’ staff?
  • Second, as AI depends massively on data, what happens if the data used to train the machines is bad? By bad, I mean biased. Data used to train machines should not only be accurate, but also representative of the real world. If a machine that is trained on bad data makes wrong decisions, who will be held accountable?
  • Third, checks and controls need to be in place to ensure that regulators understand the complex algorithms used by banks. This understanding is absolutely essential to ensure technology doesn’t create systemic risks.

From a consumer’s perspective, the explainability of an algorithm deciding their creditworthiness is critical. For example, some banks are looking at simplifying the AI models used to make lending decisions. This would certainly help bank staff understand, and help consumers appreciate, decisions made by machines.

Some banks are also looking at reverse-engineering explainability when the AI algorithm is complex. The FCA and the Bank of England have tried this approach too: a complex model using several decision trees to identify high-risk mortgages had to be explained, and the solution was to build a separate explainability algorithm to present the decisions of the black-box machine.
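
To make that surrogate idea concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset standing in for mortgage features; it illustrates the general technique, not the FCA's or the Bank of England's actual implementation. An opaque ensemble is trained first, then a shallow, human-readable decision tree is fitted to the ensemble's own predictions so that staff have rules they can actually read out to a customer.

```python
# Sketch: explain a black-box model with a shallow surrogate decision tree.
# Assumptions: scikit-learn is installed; X, y stand in for a mortgage dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

black_box = GradientBoostingClassifier().fit(X, y)   # the opaque, accurate model
surrogate = DecisionTreeClassifier(max_depth=3)      # the explainable stand-in
surrogate.fit(X, black_box.predict(X))               # train it to mimic the black box

print(export_text(surrogate))  # human-readable rules approximating the black box
```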

The pace at which startups are creating new solutions makes it even harder for service providers. In recent times I have come across two firms that help banks with credit decisions. The first collected thousands of data points about the consumer requesting a loan.

One of those data points was the set of fonts installed on the borrower's laptop. If the fonts were ones used by gambling websites, the borrower's creditworthiness took a hit, the reasoning being that gambling habits could signal poor money management.

The second firm had a chatbot that held a conversation with the borrower and, using psychometric analysis, came up with a score indicating the customer's "intention to repay". This could be a big opportunity for banks in emerging markets.
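
As a purely hypothetical illustration of how such alternative-data signals might be folded into a score, here is a sketch; the font names, weights and psychometric scale are invented, and neither firm has published its model.

```python
# Hypothetical sketch: blend a traditional score with two alternative-data signals.
GAMBLING_FONTS = {"CasinoFlat", "LuckySevens"}   # invented font names for illustration

def adjusted_score(base_score: float,
                   installed_fonts: set[str],
                   intention_to_repay: float) -> float:
    """Adjust a base credit score using installed fonts and a psychometric signal."""
    score = base_score
    if installed_fonts & GAMBLING_FONTS:          # fonts associated with gambling sites
        score -= 50                               # illustrative penalty
    score += 100 * (intention_to_repay - 0.5)     # psychometric score in [0, 1]
    return score

print(adjusted_score(620, {"Arial", "CasinoFlat"}, 0.7))  # -> 590.0
```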

Despite the opportunities at hand, the algorithms of both these firms are black boxes. Maybe it's time regulators ruled that technology making critical financial decisions needs to follow some rules of simplicity or transparency. Having moved on from the business of creating complex financial products, banks could now be creating complex machines that make unexplainable decisions. Can we keep the machines under check?


Arunkumar Krishnakumar is a Venture Capital investor at Green Shores Capital focusing on Inclusion and a podcast host.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

Subscribe by email to join Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).


 

 

 

The post FinServ in the age of AI – Can the FCA keep the machines under check? appeared first on Daily Fintech.

Numerai a small cap AI Blockchain gem

Blockchain and AI are two of the most talked-about technologies, and Blockchain-for-Finance and AI-for-Finance ventures are on the rise. The combination is expected to fuel an autonomous financial infrastructure that will host all kinds of intelligent applications in capital and financial markets.


LiveTiles brought to my attention 20 AI Blockchain projects with a great infographic. As I had profiled a few of them in 2017 at the protocol layer and in the data-finance verticals, I decided to catch up with Numerai. They had grabbed my attention two years ago in this primer I wrote: The Big Hairy Audacious Goal of Numerai: network effects in Quant trading

Numerai is creating a meta-model from all the Machine Learning (ML) algorithms developed by “the crowd” on encrypted data. Numerai aims to offer a platform that generates alpha in a novel way. It wants to structure a rewarding mechanism for its traders that not only eliminates the typical competitive and adversarial behavior between them but actually penalizes it. – Efi Pylarinou

Numerai was and is a bleeding edge venture. It remains the only hedge fund built on blockchain and using ML and data science in a novel way. The novelty lies in changing the incentive and compensation structure of the fund manager.

Numerai launched no ICO. The NMR token was awarded to the thousands of data scientists for creating successful machine-learning based predictive models.  Once the data scientists are confident of the predictive ability of their model, they can stake their NMR and earn additional NMR if they are correct.

Numerai involves a staking mechanism.

In March, Numerai reported that $10 million had been paid out in rewards to date. NMR tokens were distributed via airdrops initially: at launch on 21st February 2017, 1 million Numeraire tokens (NMR) were distributed to 12,000 anonymous data scientists. Thereafter, NMR tokens were awarded as rewards to users of its platform. Bear in mind that if a participant stakes NMR and their model doesn’t perform, the staked tokens are burnt.
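
A simplified sketch of that staking loop is below; the amounts and the flat payout rule are illustrative stand-ins, not Numerai's actual tournament parameters.

```python
# Sketch of stake-and-burn: reward a performing model, burn the stake otherwise.
from dataclasses import dataclass

@dataclass
class Stake:
    scientist: str
    amount_nmr: float

def settle(stake: Stake, model_performed: bool, reward_rate: float = 0.1) -> float:
    """Return the change in the scientist's NMR balance for one round."""
    if model_performed:
        return stake.amount_nmr * reward_rate   # earn additional NMR
    return -stake.amount_nmr                    # staked tokens are burnt

print(settle(Stake("anon_42", 100.0), model_performed=True))   # +10.0
print(settle(Stake("anon_42", 100.0), model_performed=False))  # -100.0
```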

According to Numerai, the NMR token is one of the most used ERC20 tokens, with 25,000 stakes of NMR reported by the end of 2018.

Source

Almost 200,000 models have been submitted by data scientists around the world in a competition to crowdsource the best prediction models.

Source: Chris Burniske's talk at the Fluidity Summit in NYC.

In March, Numerai raised $11 million from investors led by the VCs Paradigm and Placeholder. Numerai is a very rare case because this fundraising was not for equity but for NMR tokens.

The NMR token is a utility token, and the investors simply bought $11 million of NMR.

The funds raised will primarily be used to drive the development of Erasure, a decentralized predictions marketplace that Numerai launched.

What does this mean in plain words?

Numerai was not a protocol but rather an application  – a hedge fund. Erasure will transform it into a protocol. This has several significant implications.

  • NMR becomes a token on the protocol and can be used to build all sorts of applications on top of Erasure.
  • Numerai becomes decentralized. The NMR smart contract will no longer be controlled or upgraded by Numerai but by NMR token holders. So, NMR becomes a governance token.
  • Numerai will have no authority on the supply of NMR tokens.

A protocol is born out of the Numerai app, and its name is Erasure. Erasure is much broader than a hedge fund, as all sorts of prediction and data markets can be built on the protocol. The vision is for NMR always to be a token that is actually used, which brings into the spotlight the lack of transparency around data measuring the actual use of protocol and Dapp tokens.

 Footnote: Numerai at launch was backed by Fred Ehrsam, Joey Krug, Juan Benet, Olaf Carlson-Wee and Union Square Ventures.

Efi Pylarinou is the founder of Efi Pylarinou Advisory and a Fintech/Blockchain influencer.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

 Subscribe by email to join Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).

Not so fast, InsurTech- long-tailed and unique claims are the Kryptonite to your innovation super power

Nothing to fear, InsurTech Man! It’s just a busy claim!

Artificial intelligence, machine learning, data analysis, ecosystem insurance purchases, online claim handling, application-based insurance policies, claim handling in seconds, and so on. There's even instant parametric travel cover that reimburses costs- immediately- when one's planned air flight is delayed. There are clever new risk assessment tools derived from black-box algorithms, but you know what? Those risk data are better than the industry has ever had! Super insurance, InsurTech heroes! But ask many insureds or claim handlers, and they'll tell you all about InsurTech's weakness, the kryptonite for insurance innovation's superheroes (I don't mean Insurance Nerd Tony Cañas)- those being long-tailed or unique claims.

If insurance was easy you wouldn't be reading this. That is simple; much of insurance is not. Determining risk profiles for thefts of bicycles in a metro area- easy. Same for auto/motor collision frequency and severity, water leaks, loss-of-use amounts, cost of chest x-rays, roof replacement costs, and burial costs in most jurisdictions. Really great fodder for clever adherents of InsurTech- high-frequency, low-cost cover and claims. Even more complex risks are becoming easier to assess, underwrite and price due to the huge volume of available data points and the burgeoning volume of analysis tools. I just read today that a clever group of UK-based InsurTech folks have found success providing comprehensive risk analysis profiles to some large insurance companies- Cytora, which continues to build its presence. A firm that didn't exist until 2014 is now seen as a market leader in risk data analysis, and its products are helping firms that have been around for a lot longer than five years (XL Catlin, QBE, and Starr Companies). Seemingly a perfect fit of innovation and incumbency, leveraging data for efficient operations. InsurTech.

But ask those who work behind the scenes at the firms, ask those who manage the claims, serve the customers, and address the many claim-servicing challenges at the carriers- is it possible that a risk that is analyzed and underwritten within a few minutes can become a five-or-more-year undertaking when a claim occurs? Yes, of course it is. The lion's share of auto/motor claim severity is not found within the settlement of auto damage; it's the bodily injury/casualty part of the claim. Direct auto damage assessment is the province of AI; personal injury protection and liability decisions belong for the most part to human interaction. Sure, the systems within which those actions are taken can be made efficient, but the decisions and negotiations remain outside of game theory and machine learning (at least for now). There have been (and continue to be) systems utilized by auto carriers in an attempt to make the more complex casualty portions of claims uniform (see for example Mitchell), but lingering 'burnt fingers' from class action suits in the 1980s and 1990s make these arm's-length tools trusted but, again, in need of verification.

Property insurance is not immune from the effects of innovation expectations; there are plenty of tools that have come to the market in the past few years- drones, risk data aggregators/scorers, and predictive algorithms that help assess and price risk and recovery. That's all good until the huge network of repair participants becomes involved, and John and Mary Doe GC price a rebuild using their experience-based, lump-sum pricing tool that does not match the carrier's measure-to-the-inch, component-based pricing tool with its 19% adjustment for supporting events. At that intersection of ideas, the customer is left as the primary and often frustrated arbiter of the claim resolution. Prudent carriers then revert to analog, human-interaction resolution. Is it possible that a $100K water loss can explode into a $500K-plus mishandled asbestos abatement nightmare? Yes, it's very possible. Will a homeowner's policy customer in Kent be disappointed because an emergency services provider that should be available per a system list is not, and the homeowner is left to fend for himself? The industry must consider these not as outlier cases, but as reminders that not all can be predicted, not all data are being considered, and, as intellectual capital exits the insurance world, not all claim staff will have the requisite experience to ensure that which was predicted is what happens.

The best data point analysis cannot fully anticipate how businesses operate, nor how unpredictable human actions can lead to claims that have long tails and large expense. Consider the recent tragedy in Paris, the fire at the Cathedral of Notre Dame. Certainly any carriers that may be involved with contractor coverage share the same concerns as all with the terrible loss, but they also must have concerns that not only are potential liability coverage limits at risk, but, unlike cover limits, there will be legal expenses associated with the claim investigation and defense that will most probably make the cover limits look small in comparison. How can data analysis predict that exposure disparity, when every claim case can be wildly unique?

It seems that as underwriting and pricing undergo continued adaptation to AI and improved data analysis, it is even more incumbent on companies (and analysis 'subcontractors') to be cognizant of the effects of unique claims' cycle times and ongoing costs. In addition, carriers must continue to work with service providers to recognize the need for uniform innovation, or at least an agreed common-denominator tech level.

The industry surely will continue to innovate and encourage those InsurTech superheroes who are flying high, analyzing, calculating and selling faster than a speeding bullet.  New methods are critical to the long-term growth needed in the industry and the expectation that previously underserved markets will benefit from the efforts of InsurTech firms.  The innovators cannot forget that there is situational kryptonite in the market that must be anticipated and planned for, including the continuing need for analog methods and analog skills. 

image source

Patrick Kelahan is a CX, engineering & insurance professional, working with Insurers, Attorneys & Owners. He also serves the insurance and Fintech world as the ‘Insurance Elephant’.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

Subscribe by email to join the 25,000 other Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).

How does One Consume an Ocean of Data? A Meaningful Sip at a Time

So many data, so many ways to use it, ignore it, misapply it, co-opt it, brag about it, and lament it. Data is the new oil, as suggested not long ago by data scientist Clive Humby, and as written about recently by authorities such as Bernard Marr in Forbes, where he discusses the apt and not-so-apt comparisons of data and oil. Data are, or data is? We can't even fully agree on that application of the plural (I'm in the 'are' camp). There's an ongoing and serious debate over who 'owns' data- is possession nine-tenths of the law? Not if one considers the regs of GDPR. And since few industries possess, use, leverage and monetize data more than the insurance industry, forward-thinking industry players need to have a well-considered plan for working with data, for, at the end of the day, it's not having the oil that matters, but having the refined byproduct of it, correct?

Tim Stack of technology solutions company Cisco has blogged that 5 quintillion bytes of data are produced daily by IoT devices. That's 5,000,000,000,000,000,000 bytes of data; if each byte were a gallon of oil, the daily volume would fill the Atlantic Ocean within a few weeks. Just IoT-generated bits and bytes. Yes, we have data; we are flush with it. One can't drink the ocean, but must deal with it, yes?

I was fortunate to be able to broach the topic of data availability with two smart technologists who are also involved with the insurance industry: Lakshan De Silva, CTO of Intellect SEEC, and Christopher Frankland, Head of Strategic Partnerships at ReSource Pro and Founder of InsurTech 360. It turns out there is so much to discuss that the volume of information would more than fill this column- not by an IoT quintillions factor, but by a lot.

With so much data to consider, the two agree that understanding the intended use of the data guides the pursuit. Machine Learning (ML) is a popular and meaningful application of data, and “can bring with it incredible opportunity around innovation and automation. It is however, indeed a Brave New World,” comments Mr. Frankland. Continuing, “Unless you have a deep grasp or working knowledge of the industry you are targeting and a thorough understanding of the end-to-end process, the risk and potential for hidden technical debt is real.”

What? Too much data, ML methods to help, but now there are 'hidden technical debt' issues? Oil is not that complicated- extract, refine, use. (Of course, as Bernard Marr reminds us, there are many other concerns with the use of natural resources.) Data- plug it into algorithms, get refined ML results. But as noted in Hidden Technical Debt in Machine Learning Systems, ML brings challenges of which data users and analyzers must be aware, including the compounding of complex issues. ML can't be allowed to play without adult supervision, else it will stray from the yard.

From a different perspective, Mr. De Silva notes that the explosion of data (and the availability of those data) is “another example of disruption within the insurance industry.” Traditional methods of data use (actuarial practices) are one form of analysis to solve risk problems, but there is now a tradeoff between “what risk you understand upfront” and “what you will understand through the life of a policy.” Those IoT (or IoE, Internet of Everything, per Mr. De Silva) data that accumulate in such volume can, if managed and assessed efficiently, open up 'pay as you go' insurance products and fraud-tool opportunities.

Another caution from Mr. De Silva: assume all data are wrong unless you prove otherwise. This isn't as threatening a challenge as it sounds. With the vast quantity and sourcing of data, triangulation methods can be applied to give the data tighter reliability, and (somewhat counterintuitively) analyzing unstructured data alongside structured data across multiple providers and data connectors can help achieve 'cleaner' (more reliable) data. Intellect SEEC's US data set alone has 10,000 connectors (most don't even agree with each other on material risk factors) with thousands of elements per connector; multiply that by up to 30-35 million companies, then by the locations per company and then the directors/officers of the company. That's just the start, before one considers the effects of IoE.
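
A minimal sketch of that triangulation idea follows, using a toy majority-vote rule across hypothetical connectors; Intellect SEEC's actual pipeline is of course far richer.

```python
# Sketch: trust a value only when several independent connectors agree on it.
from collections import Counter

def triangulate(values_by_source: dict[str, str], min_agreement: int = 2):
    """Return the consensus value for an attribute, or None if sources disagree."""
    value, votes = Counter(values_by_source.values()).most_common(1)[0]
    return value if votes >= min_agreement else None

employee_count = {
    "connector_a": "250",
    "connector_b": "250",
    "connector_c": "40",   # presumed stale or wrong
}
print(triangulate(employee_count))  # -> "250"
```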

In other words- existing linear modeling remains meaningful, but with the instant volume of data now available through less traditional sources carriers will remain competitive only with purposeful approaches to that volume of data.  Again, understand the challenge, and use it or your competition will.

So many data, so many applications for it. How's a company to know how to step next? If not an ocean of data, it sure is a delivery from a fire hose. The discussion with Messrs. De Silva and Frankland provided some insight.

Avoiding Hidden Debt and leveraging clean data is the path to a “Digital Transformation Journey”, per Mr. Frankland. He recommends a careful alignment of “People, Process, and Technology.” A carrier will be challenged to create an ML-based renewal process absent the involvement of human capital as a buffer to unexpected outcomes generated by AI tools. And 'innovating from the customer backwards' (the Insurance Elephant's favorite directive) will help lead the carrier in focusing tech efforts and data analysis on what end customers say they need from the carrier's products. (Additional depth on this topic can be found in Mr. Frankland's upcoming LinkedIn article, which will take a closer look at the challenges around ML, risk and technical debt.)

In similar thinking Mr. De Silva suggests a collaboration of business facets to unlearn, relearn, and deep learn (from data up instead of user domain down), fuel ML techniques with not just data, but proven data, and employ ‘Speed of Thought’ techniques in response to the need for carriers to build products/services their customers need.  Per Mr. De Silva:

“Any company not explicitly moving to Cloud-first ML in the next 12 months and  Cloud Only ML strategy in the next two years will simply not be able to compete.”

Those are pointed but supported words: all those data, and companies need to be able to take the crude and produce refined, actionable data for their operations and customer products.

In terms of tackling Hidden Debt and 'black box' outcomes, Mr. Frankland advises that points such as training for a digital workforce, customer journey mapping, organization-wide definition of data strategies, and careful application and integration of governance measures and process risk mitigation will collectively act as an antidote to the two unwelcome potential outcomes.

Data wrangling- doable, or not? Some examples in the market (and there are a lot more) suggest yes.

HazardHub

Consider the volume of hazard data available for consideration within a jurisdiction or for a property- flood exposure, wildfire risk, distance to fire response authorities, chance of sinkholes, blizzards, tornadoes, hurricanes or earthquakes. Huge pools of data from a wide variety of sources. Can those disparate sources and data points be managed, scored and provided to property owners, carriers, or municipalities? Yes, they can, per Bob Frady of HazardHub, provider of comprehensive risk data for property owners. And as for the volume of new data engulfing the industry? Bob suggests don't overlook 'old' data- it's there for the analyzing.

Lucep

How about the challenge sales organizations have in dealing with customer requests coming from the myriad of access points, including voice, smart phone, computer, referral, online, walk-in, whatever?  Can those many options be dealt with on an equal basis, promptly, predictably from omnichannel data sources?  Seems a data inundation challenge, but one that can be overcome effectively per Lucep, a global technology firm founded on the premise that data sources can be leveraged equally to serve a company’s sales needs, and respond to customers’ desires to have instant service.

Shepherd Network

As for the 5 quintillion daily IoT data points- can that volume become meaningful if a focused approach is taken by the tech provider, a perspective that can serve a previously underserved customer? Consider unique and/or older building structures or other assets that traditionally have been sources of unexpected structural, mechanical or equipment issues. Integrate IoT sensors within those assets, and build a risk analytics and property management system that business property owners can use to reduce maintenance and downtime costs for assets of almost any type. UK-based Shepherd Network has found a clever way to 'close the valve' on IoT data, applying monitoring, ML, and communication techniques that can provide a dynamic scorecard for a firm's assets.
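
As a hypothetical sketch of what such a dynamic scorecard might boil down to, here is a toy health score rolled up from normalised sensor readings; the sensor names, weights and scale are invented, as Shepherd Network's models are not public.

```python
# Sketch: roll normalised IoT sensor deviations up into one asset health score.
def asset_health(readings: dict[str, float]) -> float:
    """Score an asset 0-100 from sensor deviations (0 = nominal, 1 = at limit)."""
    weights = {"vibration": 0.4, "temperature": 0.35, "humidity": 0.25}
    penalty = sum(w * min(readings.get(name, 0.0), 1.0) for name, w in weights.items())
    return round(100 * (1 - penalty), 1)

boiler = {"vibration": 0.2, "temperature": 0.9, "humidity": 0.1}
print(asset_health(boiler))  # -> 58.0
```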

In each case the subject firms see the ocean of data, understand the customers' needs, and apply high-level analysis methods so the data become useful and/or actionable for the firms' customers. They aren't dealing with all the crude, just the refined parts that make sense.

In discussion I learned of petabytes, exabytes, zettabytes, and yottabytes of data. Unfathomable volumes of data, a universe full, all useful but inaccessible without a purpose for the data. Data use is the disruptor, as is the application of data analysis tools, and awareness of what one's customer needs. As Bernard Marr notes, oil is not an infinite resource, but data seemingly are. Data volume will continue to expand, but prudent firms/carriers will focus on those data that will serve their customers and the respective firm's business plans.

Image source

Patrick Kelahan is a CX, engineering & insurance professional, working with Insurers, Attorneys & Owners. He also serves the insurance and Fintech world as the ‘Insurance Elephant’.

I have no positions or commercial relationships with the companies or people mentioned. I am not receiving compensation for this post.

Subscribe by email to join the 25,000 other Fintech leaders who read our research daily to stay ahead of the curve. Check out our advisory services (how we pay for this free original research).

Food and Finance blurring through technology

As technology blurs business lines and `forces` incumbents to get rid of silos, Wealth Management & Capital Markets become broader.

Wealth Management & Capital Markets are being re-imagined as we speak.

Stay with me in this transformation.

My vision of Wealth Management is a holistic service that surely includes the future of Food and how we eat. We didn't touch on this topic with Paolo Sironi when we discussed the principles of the Theory of Financial Market Transparency (FMT) in 'Sustainable Banking Innovation'. Limited (and irreversible) time constraints were the reason I didn't raise the issue, which otherwise would be a very suitable topic to discuss with the Italian thought leader @thespironi.

My belief is that Food, Finance, and Fun are essential domains for our health and wealth. So, to budgeting, borrowing, insurance, investing and trading we will soon add all sorts of other non-conventional 'assets' and services.

Vivek Gopalakrishnan, head of brand & communications for Wealth Management at BNP Paribas in South East Asia, shared a Reimagine Food infographic about what and how we will be eating.

reimagine food

Source: DECODING THE FUTURE OF FOOD

The way I see wealth management broadening into Food is through AI algorithms that we will eventually trust, as we become convinced that they know us better than we know ourselves. Once this cultural shift happens, food AI advisory will become ubiquitous. We will entrust the mathematics, the algorithms, to advise us on diversification, risk management, and investing around food.

All this will naturally be 100% linked to our customized insurance policies. It will also affect our risk appetite in financial investing, as science puts us in more control of our life expectancy and immortality becomes 'in'. Aubrey de Grey, the renowned biomedical gerontologist, wants to increase human longevity to the point that death could become a thing of the past. Medical technology could soon be able to prevent us from falling sick. Yuval Noah Harari also talks about the 'Last Days of Death' in Homo Deus.

Even if this doesn't happen in the next 50 years, food AI advisory will happen, and the best way will be to integrate these services with the advisory around today's conventional assets in wealth management. My US dollars, my Canadian dollars, my euros, my Swiss francs[1], and my food consumption, risk management and diversification; all in one place.

In the US, the USDA issues a monthly report on what food should cost families nationwide, presented in four differently priced plans: thrifty, low cost, moderate cost, and liberal. Food costs as a % of income have been declining dramatically in the US (not the case in emerging markets). Whether food costs are 10% or 40% of household income, the point is that there is a huge opportunity to manage 'what and how I eat', and looking only at the food budget misses that opportunity.

My vision is that there is no distinction between PFM, robo advisors, private banking for HNW individuals, and health. Our wealth and health have to be managed in one place. Ideally, let's deploy blockchain technology to manage our data in a 'personal locker' fashion, and then outsource the processing and the insights from this data to the best algorithms that act in our interest and advise us on what to eat, what to buy, how to diversify, how to rebalance, what risks match our goals, etc. Whether it is food or money.

Tokenization can also unlock value in this context by creating communities, linked by incentives built into the tokens, that share similar food habits and/or financial goals.

Blockchain can protect us from the data monopoly slavery and enable us to unlock value.

Fintech can empower us as asset owners of these new values.

[1] These are my personal actual holdings since I have lived in each of these currency places and still hold accounts. Still looking for an aggregator to view and manage all these on a single dashboard. Fintech is not done.

Efi Pylarinou is the founder of Efi Pylarinou Advisory and a Fintech/Blockchain influencer.

Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email.

Regtech Rising – How far are we from Robo Regulators?

Since the AI boom, there have been several stories about people losing jobs. Repetitive jobs are the ones most suited for robots to take over. So will there come a time when we get to tell the regulators, "You are fired"?

Regtech had a phenomenal 2017, with global funding reaching $1.1 Billion across 81 deals, and the first half of 2018 alone saw funding go past $1.3 Billion across 36 investment deals (KPMG research), thanks to an avalanche of regulations that banks had to comply with: PSD2, GDPR, MiFID II.


Since the 2008 financial crisis, banks have paid $321 BILLION in fines

 CB Insights

The SEC allocated $1.78 Billion to employ 4,870 staff to make sure banks were compliant. Now, with the rise of AI across the regulatory value chain, the efficiencies to be had from intelligent automation are immense.

With an ocean of regulatory text to go through, and with several regulatory alerts to monitor on a regular basis, AI would be the way forward. I remember my Barclays days when there were several vendors claiming to make regulatory reporting easier through a workflow solution.

And why AI Can Help

When I was at PwC, we started exploring solutions like IBM Watson for regulatory and legal language parsing. Regtechs were getting more and more intelligent, and with the amount of capital flowing into this space, they had to. Thanks to those efforts, there are now several players that proactively identify and predict risks.

As more innovation happens in this space, ease of use moves on to automation, and automation to intelligent automation. We have also started to see regulation-specific solutions. Many of them existed in a simpler form before, but they now come with better technology. Open banking has had a few focused Regtech solution providers, like Railsbank. Droit provides post-trade reporting for OTC transactions under MiFID II.

The SEC’s proposed 2017 budget is $1.78 BILLION

 CB Insights

This trend can move further up the value chain: apart from serving banks, technology could serve regulators. Regulators have to parse through tonnes of data, using pattern recognition, NLP and predictive analytics to identify breaches proactively. Regulatory sandboxes help, and with more innovative firms looking at automating regulatory activities, robo-regulators are not far away.
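
A toy sketch of that proactive-breach idea: score a newly reported figure against a firm's own history and flag the ones that deviate sharply. The z-score rule, threshold and numbers are illustrative, not any regulator's actual method.

```python
# Sketch: flag a reported figure that deviates sharply from the firm's history.
import statistics

def is_anomalous(history: list[float], new_value: float, threshold: float = 3.0) -> bool:
    """True if new_value is more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid division by zero
    return abs(new_value - mean) / stdev > threshold

history = [1.02e6, 0.98e6, 1.05e6, 0.99e6, 1.01e6, 1.00e6]  # past daily reports
print(is_anomalous(history, 9.7e6))  # -> True, worth a supervisor's look
```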


Arunkumar Krishnakumar is a Venture Capital investor at Green Shores Capital focusing on Inclusion and a podcast host.

Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email


Insurtech Front Page Weekly CXO Briefing – Artificial Intelligence trends


The Theme last week was P&C InsurTech trends in the industry.

The Theme this week is Artificial Intelligence trends in InsurTech. AI has always been a critical subject, not only for InsurTech but for the whole digital age. Let's look at some AI-related InsurTech news from this week.

For more about the Front Page Weekly CXO Briefing, please click here.

For this week we bring you three stories illustrating the theme of Artificial Intelligence trends.

Story 1: German Insurer DFV Eyes IPO in Bid to Disrupt Allianz & Co.

Extract, read more on Bloomberg:

“With ambitions to challenge insurance giants like Allianz SE, newcomer Deutsche Familienversicherung AG needs 100 million euros ($116 million) in fresh funds to finance its expansion plan. An initial public offering is one path that Stefan Knoll, founder and chief executive officer, is considering.

DFV uses artificial intelligence to decide which insurance claims are legitimate and which are not. In partnership with Frankfurt-based startup Minds Medical GmbH, it developed an algorithm that can read so-called ICD-10-Codes, used by doctors and hospitals to categorize their bills.”

The news was from June; a recent interview with DFV founder Dr. Stefan M. Knoll was released on InsurTechnews. One of the biggest features of DFV is that they use AI to process claims.

Editor's Note: medical insurance claims have long been a hairball of complexity that causes a lot of pain for customers/patients. The most broken big market today is America, but the politics around health insurance are so divisive in America that it is possible the breakthrough will come from another market, like Germany.

Story 2: Insurers must think strategically about AI

Extract, read more on Digital Insurance:

“Much of executives’ enthusiasm is justified. AI is already being deployed in a range of arenas, from digital assistants and self-driving cars to predictive analytics software providing early detection of diseases or recommending consumer goods based on shopping habits. A recent Gartner study finds that AI will generate $1.2 trillion in business value in 2018—a striking 70 percent increase over last year. According to Gartner, the number could swell to close to $4 trillion by 2022.”

Despite the growth momentum, AI is unlikely to help insurers yield big results in the short term. The decision on when and where to adopt AI will be a key one for senior executives to make.

Story 3: Huge rise in insurtech patents

Extract, read more on ITIJ:

“According to analysis from global law firm Reynolds Porter Chamberlain (RPC), 2017 saw a 40-per-cent jump in the number of insurtech patents being filed worldwide. RPC found that 917 insurtech patents were filed globally last year, compared with 657 in 2016.

Telematics, artificial technology and machine learning, and P2P insurance were among the most frequent subjects of patent protection last year.”

Telematics, machine learning and the other patented technologies all involve a certain degree of AI, and the growth in patent numbers signifies positive momentum in InsurTech adoption.

AI application in insurance is still immature, but Rome was not built in a day; there will be a process. And it's good to see that insurers featuring AI have been well received by the capital markets. This can inspire more startups and insurers to adopt AI.

Image Source

Zarc Gin is an analyst for Warp Speed Fintech, a Fintech, especially InsurTech-focused Venture Capital based in China.

Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email.

 

 

AI algorithms – takeaways from Fintech+

The Fintech+ conference with its AI thread was unique. The morning sessions included presentations from Nvidia and Google, and use cases and learnings from Zurich Insurance, leading into the sustainability & Fintech panel that I moderated just before lunch.

Marc Stampfli, Swiss country manager at Nvidia, took us on a journey through AI's fall, winter and into spring. He explained neural network concepts borrowed from biology and the initial difficulties of neural network computations in outperforming statistical approaches. The first tipping point came with increased data availability through the internet; only then did we have evidence that neural networks could outperform statistical models.

After that point, we ran into the next problem, which was the lack of computing power to process all this data and the multi-layer neural networks. This is where the GPU, a kind of parallel computer first used for vector mathematics, came in; it is the technology behind Nvidia's processors.
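
For readers who want the intuition in code, here is a tiny sketch, assuming NumPy and arbitrary illustrative shapes: a neural-network layer is essentially one large matrix multiplication, exactly the kind of vector math a massively parallel processor accelerates.

```python
# Sketch: one dense layer's forward pass is a big matrix multiply plus a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
batch   = rng.standard_normal((1024, 512))   # 1,024 samples, 512 features
weights = rng.standard_normal((512, 256))    # one dense layer

hidden = np.maximum(batch @ weights, 0.0)    # matmul + ReLU
print(hidden.shape)                          # (1024, 256)
# On a GPU the same matmul is spread across thousands of cores in parallel,
# which is what made training multi-layer networks on internet-scale data practical.
```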

For me, this historical thread is another example of a solution designed for theoretical mathematics finding a real-world application that takes us to the next level of the fourth industrial revolution. I associate it with the zero-knowledge proof in cryptography, now used in some blockchain protocols, which allows data to be verified and validated without trading off privacy[1].

We are living in a world in which, more or less unconsciously, we increasingly "Trust in Math". After the GPU's adoption in business, we moved to new hardware that is not only faster but also smaller in size. We are basically reinventing how data rooms look. And this is the world from Nvidia's angle: they have facilitated growth and new value creation, all powered by #AI tools. The use cases in finance are immense. Fintech solutions for:

  • Operations: automating claims processing and underwriting in insurance
  • Customer service & engagement: alerting customer for fraud, chatbots, recommendations
  • Investing/Trading: automating research, trading signals, trading recommendations
  • Risk & Security: fraud detection, credit scoring, authentication, surveillance
  • Regulatory & Compliance: AML, KYC, automating compliance monitoring and auditing.

Evidently, the biggest but fundamental problem that incumbents face in adopting any of these potential use cases, is that they first need to find ways to integrate their data and then to upgrade their data rooms to be able to handle the required computing power.

Having said that, Zurich Insurance, one of the large Swiss insurers, shared with us their AI projects and research, which started as early as 2015. Gero Gunkel spoke about their very successful AI application in automating the review of medical records with the aim of arriving at a valuation, a process that entails reviewing reports ranging from 10 to 40 pages and that may take on average one hour. They used AI algorithms that reduced this to 5 seconds! That is nearly real time for a business process that is not low-hanging fruit.

Zurich Insurance has also been using AI to automate the time-consuming process of collecting publicly available information towards opening accounts for large corporates. This automated web search can not only offer efficiencies but also become a new service provided to the underwriters of these types of insurances.

“Don’t look for the Swiss army knife”, said Gero Gunkel as AI may seem so promising that one can think it can take care of everything.

Dr. Christian Spindler, IoT Lead and Data Scientist at PwC Digital Services, raised the important question of how to develop trust in AI. This is a tricky topic, as it begs questions about the limits of the technology. For now, it is recommended to develop AI algorithms that can also provide explanations for their "answers".

I would say that “In Math we Trust” to develop algorithms that Answer “What & Why”.

“Improving lives through AI” is Nvidia’s motto for their Corporate Social Responsibility. See their initiatives here.

[1] A zero-knowledge proof allows someone to assure a verifier that they have knowledge of a certain "secret" (data) without having to reveal the secret itself. Zcash is an example of a blockchain protocol using such proofs.

Efi Pylarinou is a Fintech thought-leader, consultant and investor. 

 Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email.

The £50 Billion opportunity and how the global stage is set for Regtech

Regtech is a £50 Billion per year opportunity, and that is just in the UK. The opportunity stems from the hundreds of millions of pages of regulatory text that firms have to deal with in order to be compliant. It is critical that firms equip themselves with technology solutions that will help them navigate the complex world of regulation.

Please note that while Regtech covers regulations across industries, I am taking the liberty of using this term loosely to refer to FS based Regtech use cases.

During my time at PwC, I was involved in evaluating AI products for their legal and regulatory offerings. We were looking into IBM Watson and had some interesting conversations about sending Watson to school to learn legal and regulatory language (in English). The AI engine (deep learning, NLP) would then be able to provide guidelines to firms, in plain English, on what was needed for regulatory compliance.
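
A deliberately naive sketch of that idea follows, using keyword rules rather than a trained NLP engine such as Watson; the marker words and sample text are invented for illustration.

```python
# Sketch: pull obligation-like sentences out of regulatory text as a first step
# towards plain-English compliance guidance.
import re

OBLIGATION_MARKERS = ("must", "shall", "is required to", "are required to")

def extract_obligations(regulation_text: str) -> list[str]:
    """Return sentences that look like compliance obligations."""
    sentences = re.split(r"(?<=[.!?])\s+", regulation_text)
    return [s.strip() for s in sentences
            if any(marker in s.lower() for marker in OBLIGATION_MARKERS)]

sample = ("Firms must report suspicious transactions within 24 hours. "
          "This chapter provides background. "
          "A firm shall maintain adequate capital at all times.")
print(extract_obligations(sample))
```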


It has been almost five years since then, and we have seen various developments across the globe. Regtech has never been more relevant. The US and Europe have more than 200 Regtech firms, as these two regions are clearly seen as the pioneers of financial services regulation.

"The FCA is the most innovative regulator in the world in terms of using new technologies and the other regulators look up to them."

– Philip Treleaven

In my opinion, Europe, and in particular the UK's FCA, are world leaders in working on innovative ways of achieving regulatory compliance. Be it payments, open banking or cryptocurrencies, they have taken a collaborative approach to nurturing the right firms. 37% of Regtech investments across the globe happen in the UK.

But it's the happenings in Asia that I find more interesting from a Regtech standpoint.

Fintech in India has seen massive growth, with digital payments well backed by policies and technology infrastructure. The rise of Paytm, UPI and, more recently, Google Tez have all helped bring the total transaction volume of digital payments to $50 Billion. But with growth comes greed, and regulations have to kick in. There were tens of P2P lending firms in India until the Reserve Bank of India (RBI) launched its regulatory framework for P2P lending in Q4 2017. There are now only a handful of well-capitalised P2P lending platforms.

There is a lot of work to be done around the automation of transaction reporting. For example, the microfinance market in India is still largely cash based, and reporting is manual. There are startups trying to disrupt this space with cloud-enabled smartphone apps that allow real-time reporting of transactions when an agent is on the ground collecting money from a farmer. This allows for massive gains in operational efficiency and curbs corruption, but more importantly it makes transaction reporting so much easier.
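
As a hypothetical sketch of such real-time field reporting, here is what the app-side call might look like; the endpoint URL and payload fields are invented, and no specific startup's API is implied.

```python
# Sketch: an agent's app reports a cash collection the moment it happens.
import json
import urllib.request

def report_collection(agent_id: str, borrower_id: str, amount_inr: float) -> None:
    payload = json.dumps({
        "agent_id": agent_id,
        "borrower_id": borrower_id,
        "amount_inr": amount_inr,
        "channel": "field_collection",
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.example-mfi.in/v1/transactions",   # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)   # the transaction is reported as it occurs

# report_collection("agent-17", "farmer-2041", 1500.0)
```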

I see India as a market, where Regtechs can help the RBI develop a regulatory framework across Financial Services.

China's P2P lending market is worth about $200 Billion. Recent frauds like Ezubao, where about a million investors lost $9 Billion, indicate that the market needs strong regulatory controls. The scam led to a collapse of the P2P lending market in China. A regulatory framework that brings credible players to this space, well supported by a bunch of top Regtechs, would help improve the status quo.

Singapore is, without a doubt, the destination for Regtechs in Asia. After the US and the UK, Singapore attracts the most investment into Regtech firms. The support that the Monetary Authority of Singapore (MAS) provides to budding startups is the real differentiation that Singapore has over Hong Kong as a Fintech hub.

MAS has recently tied up with the CFTC (Commodity Futures Trading Commission) in the US to share the findings of its sandbox initiative. Such relationships between regulators help keep regulatory frameworks aligned across jurisdictions. So, when a Fintech is looking to expand beyond borders, it doesn't have to rethink operational, strategic or technology aspects for the new jurisdiction and can focus on what matters: the consumers.

As Fintech evolves over the next few years, there are several ways in which Banks, Insurance providers, asset managers and regulators can work in partnership with Regtech firms. In some areas, these firms will piggyback off what the incumbents have or haven’t done.

There is often a rule of thumb in the top consulting firms: build propositions in an area where there is fire. In other words, if a client has a major issue that could cost them money and/or reputation, come up with a solution for that. This is particularly true of Regtech firms, which focus on areas that have a serious lack of control and governance.

However, in many parts of the world, there is a genuine opportunity for Regtechs to go a step further and define the controls in collaboration with the regulators, and perhaps ahead of the regulators.


Arunkumar Krishnakumar is a VC investor focusing on Inclusion, a writer and a speaker.

Get fresh daily insights from an amazing team of Fintech thought leaders around the world. Ride the Fintech wave by reading us daily in your email.