Cameron D'Ambrosi, Managing Director at Liminal
Cameron [00:00:03] Welcome everyone to State of Identity, I am your host, Cameron D’Ambrosi. Joining me this week is Simon Marchand, Chief Fraud Prevention Officer for Security and Biometrics at Nuance. Simon, welcome to State of Identity.
Simon [00:00:17] Hi, thanks for having me.
Cameron [00:00:19] It is my pleasure. So great to connect. And you know, I think this area of exploration for this conversation is top of mind for a lot of folks in this current moment in time. So excited to dive into what you’re building at Nuance and some of these applications, many of which I think maybe people are not necessarily thinking about at this time. But before we do all that, we have to start the conversation with a little bit about you. I always find it so fascinating to understand, you know, folks’ on-ramps, as it were, into the digital identity space: where they cut their teeth and what some of those formative early-career experiences were that kind of led them into the digital identity space. So would you mind walking us through a little bit of your background and what led you to this current moment, where you lead the fraud team at Nuance?
Simon [00:01:11] Yeah, sure. Well, actually, I started in fraud 12 years ago or something like that; I got a little job as a fraud manager in a small regional bank in Canada. And from there, I started learning, you know, as I was going; I started learning to build fraud detection rules, trying to understand the patterns, trying to understand the mentality that fraudsters had. And after a couple of years, having my own team and developing a bunch of projects, I moved to our biggest telecom operator in Canada, also managing a fraud and security team, and there we were, you know, much more focused on identity theft and synthetic identities. Because, you know, in the 10 years that I’ve actually spent managing fraud teams, we have seen that shift, that rapid change in how fraudsters were operating. When I started, it was mostly very artisanal fraudsters making a card skimmer, installing it at the gas pump, trying to withdraw some money. And 10 years later, it was mostly just identity theft, stolen identities, subscription fraud, synthetic identities, account takeovers, all the kinds of issues that now are top of mind for a lot of people. And that’s what brought me to Nuance. There was an opportunity here to work on that amazing technology that allowed me to move from what I had been doing for 10 years, you know, looking at transactions, looking at the new account being opened, and to start looking at who the person is, who’s that human being that’s trying to present me all of that information? And we do it with voice biometrics, you know, for the most part, but all sorts of biometric technology can be used for that. And I found it fascinating that we could bring everything one step closer to the fraudsters themselves instead of always running after stolen information. So, yeah, that’s what I did for the past 12 years.
That’s where I am today, now running the whole fraud and security team and leading product development for our Gatekeeper platform.
Cameron [00:03:11] That’s amazing. And I can’t say I disagree with you in any sense thinking about, you know, where this market is headed. It sounds, I guess, somewhat ridiculous to say on its face, but if we can get to functional digital identities enabled by many of these technologies that Nuance is bringing to market, you know, there is no such thing as fraud in many ways, if we can get to a super high level of assurance about who is behind a given transaction. You know, there are other problems that you certainly have to tackle there, but fraud in many cases fades into the background. And I think fraudsters, being the opportunistic lot that they are, are going to take their ball and, if not go home, you know, work on attacking other things. You know, rather than trying to intercept, you know, steal a credit card or someone’s identity at the originator, they’re just going to focus on, like, hey, let’s just rob your mailbox or, you know, hold up an armored car, things like that.
Simon [00:04:06] Yeah. Well, let’s go back to basically what they were doing before, right? And we see it sometimes; from time to time we’ll see fraudsters revert back to their old practices, because we tend to forget that it used to be an issue. So, you know, the controls just disappear over time and then they can go back to old vulnerabilities. I don’t think fraud is going to go away, but for sure, the kind of subscription fraud issues that we see and that we have seen, especially in the past 18 months, with government programs being hit very, very severely because of the very old, traditional way of doing authentication and identification of individuals, I think this is going to go away. I think we can push fraudsters to have to innovate and do something else. That said, they will innovate; they’re so good at that. But it really transforms how we see identity. It’s not just about, you know, a physical piece of ID. It’s not just what you’re presenting to someone, but it’s also what’s attached to it, you know. And when you start attaching biometric factors, that identity becomes much more difficult to falsify or re-appropriate if you’re not a perfect match for that particular identity.
Cameron [00:05:13] So look, we could go for a full hour straight just diving into, pardon my pun, the nuances of this fraud market. But I think we’d be remiss not to give our audience a little bit more context about, you know, the Nuance platform itself. For those who might not be familiar, you know, at a 15,000-foot level, can you walk us through what the Nuance platform is about and what those capabilities are that you’re bringing to bear in this biometric space?
Simon [00:05:41] Yeah, and I’ll focus on a very small subset of the Nuance portfolio; you know, obviously Nuance is known for its presence in the health care sector and in intelligent engagement, you know, enterprise services. I focus on what’s called Gatekeeper. Gatekeeper is a platform that allows us to do a lot of different things, but the core of the platform is voice biometrics. So as you speak, we can measure a thousand different factors of your voice and create a unique voiceprint, just as I would create a unique fingerprint for you. It’s all driven by a deep neural network, a fourth-generation one that we’ve created, and it allows us, with two seconds of audio, to recognize you, to match your voice with what’s expected on file and say, yes, this is the same voice, this is the same individual. Now, that focuses only on the voice, on the sound of your voice. So it’s how your, you know, your lungs, your throat, your teeth, your palate, your sinuses, all of this affects the sound that you make, which is unique to you. It is not tied to the words, it’s not tied to a device, it’s not tied to a channel. So it’s language-agnostic, device-agnostic, very transparent technology, very quick and effective. Now, another part of the platform uses conversational biometrics, which completely disregards the sound of your voice and focuses on the words you use. So in that case, you know, if I’m speaking in French or in English, I won’t use the same vocabulary, but I’ll have the same pacing through my sentences. So all of this can be used to create that unique conversation print for me. And then the third biggest part of the tech is behavioral biometrics. So that’s when you’re online, interacting with a device: how do you move your mouse? How do you type on your keyboard? If you’re holding a phone, how do you hold your phone? How do you swipe on the screen? And we create a profile for that.
Now, these three, let’s say, core elements of the platform are complemented with synthetic speech detection, playback detection, age identification, all sorts of other pieces of tech that help us increase our level of certainty that we’re speaking with the right person or interacting with the right person. And everything that I just said is used to authenticate a real individual, but it’s also used to watchlist known fraudsters. So if a fraudster tries to interact, regardless of what they’re saying, regardless of what information they’re providing, if we have their voice, we can tell with certainty that we’re speaking with a fraudster, trigger an alert, send it to a fraud agent, and then we can do that analysis even before a transaction is completed with that individual. So that’s what Gatekeeper does. It’s a platform that covers every single channel on which someone can interact with an organization and helps attach a biometric factor to each customer and to each fraudster.
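The two-stage check Simon describes, screening a caller's voice against a fraudster watchlist before matching it to the enrolled customer, can be sketched roughly as follows. This is a minimal illustration only, assuming voiceprints are fixed-length embeddings compared by cosine similarity; the function names and thresholds are hypothetical, not Nuance's actual Gatekeeper API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two fixed-length voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assess_caller(audio_embedding, enrolled_print, fraud_watchlist,
                  match_threshold=0.80, fraud_threshold=0.80):
    """Hypothetical risk check: screen against known-fraudster prints first,
    then verify against the customer's enrolled voiceprint."""
    # Step 1: watchlist screening -- flag a known fraudster's voice
    # regardless of what identity information they present.
    for fraud_print in fraud_watchlist:
        if cosine_similarity(audio_embedding, fraud_print) >= fraud_threshold:
            return "ALERT: known fraudster voice detected"
    # Step 2: authenticate against the legitimate customer's print.
    if cosine_similarity(audio_embedding, enrolled_print) >= match_threshold:
        return "authenticated"
    return "step-up: fall back to additional verification"
```

The real engine weighs many more signals (synthetic-speech detection, playback detection, and so on) in parallel; this only shows the authenticate-or-watchlist branching.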
Cameron [00:08:16] I love that. I guess a slight point of clarification: you know, not to toot my own horn, but I speak with a lot of folks in the space, and I think I have a pretty strong understanding of how many of these technologies, yours included, are deployed. The one question that I’ve had around the initiation of the deployment of, like, a voice biometric solution is how we get that kind of initial template. Obviously, when it’s something like face, most platforms are relying on, OK, at least have a, you know, a passport, a government-issued ID with my face on it, where the government was the one initially to say, hey, show me your birth certificate, now show me your face, we’ll bind those together in a trusted sense. What serves as that initial kernel of trust around that voice template?
Simon [00:08:58] Yeah. And before I answer that, a quick note on what you just mentioned, because we tend to think that because a photo is on a government ID, it’s a reliable way to match your face with a new digital identity. Truth is, there were a couple of stories, you know, two or three years back, of government officials issuing new IDs to European citizens but with the same photo on hundreds of different IDs. You know, they were colluding; they were being bribed to do it. So we tend to think that because it’s a face on a government ID, it’s reliable. Truth is, it might not be. Fraudsters will always find a way to get what seems to be a legitimate ID issued with the right photo on it. The way we do it, of course, is not foolproof, but we think that we have found the right set of best practices to make sure it’s solid. Now, the first thing that we would do in all cases is always make sure we know who our fraudsters are. Fraudsters are calling every day; we already know that they’re there. So we start by watchlisting the fraudsters first, making sure that if they call back, we will identify them. Now, when we start rolling it out to the legitimate customers, before we create a new voiceprint, we already listen for whether that person is a known fraudster, you know, an undesirable individual that we have heard before, which might be indicative of them trying to set up a new voiceprint for a legitimate customer. And then, once we have a level of assurance that it’s not a known fraudster, every bank, every telco, every insurance company or government will have their own process. Now, I can’t tell you that there is one process that fits all, because the truth is, depending on the country, depending on the size of the organization, it changes. Most of them will go through, you know, an extensive KBA approach; they might have two-factor authentication also in the loop; they might decide to send a letter also, you know, to confirm that something was created.
But the truth is, each organization determines how they make sure they’re talking to the right person. And once that’s done, it’s just part of the normal conversation. So, you know, you’re calling your bank, you’re saying, hey, I’m Simon, I’m calling because I need to change my address. Yes, sir, let me ask you a couple of security questions: this, this, this and that. And then once you’re sure, you say, hey, we have this new service, you know, we can use your voice to authenticate you next time; we won’t ask all the silly questions. Can we use your voice in the future? The person will say yes most of the time, right? Because you don’t want to answer all the questions; you don’t want to be stuck on a bus answering what your mother’s maiden name is and what your date of birth is. Click of a button, and then the conversation, as it is, is used to create the voiceprint. So it’s very, very transparent, right? You collect consent in some way, shape or form. You make sure the person is notified that this is going to happen. But then the conversation just has to take place. It doesn’t have to be very, you know, restrictive, repeat the same sentence or repeat x, y or z words. You just talk to the agent, and we just need a few seconds of that conversation to create the voiceprint.
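The enrollment gate described above (watchlist screening first, then the organization's own identity checks and explicit consent, then a print taken from the live conversation) might look something like this in outline. It is a sketch under assumptions: the function and parameter names are hypothetical, and `similarity_fn` stands in for whatever voice-matching the engine provides.

```python
def enroll_voiceprint(call_embedding, fraud_watchlist, similarity_fn,
                      identity_verified, consent_given, fraud_threshold=0.80):
    """Hypothetical enrollment gate: refuse if the caller's voice matches a
    known fraudster, require the organization's own identity checks (KBA,
    two-factor, a mailed letter) and explicit consent, then store the print
    captured from the normal conversation."""
    if any(similarity_fn(call_embedding, p) >= fraud_threshold
           for p in fraud_watchlist):
        return None, "refused: voice matches a known fraudster"
    if not identity_verified:
        return None, "refused: identity checks not passed"
    if not consent_given:
        return None, "refused: customer consent not collected"
    # The conversation audio itself becomes the enrolled voiceprint.
    return call_embedding, "enrolled"
```

The ordering matters: checking the watchlist before enrolling prevents a fraudster from binding their own voice to a legitimate customer's account.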
Cameron [00:11:55] That’s fantastic. And in terms of customer response, obviously it sounds like the deployments are extremely favorable, in the sense that voice, I think more so than maybe any other biometric modality, maybe save behavioral, which in some ways has externalities caused by, you know, battery drain and other computing considerations, the friction is so, so low with voice biometrics. And I think in this day and age that is the critical piece: you know, companies are wary of rising fraud rates, but they’re even more wary of anything that stands in the way of their ability to bring new customers on and satisfy their existing customers. And I think voice biometrics meets that challenge to a capital T.
Simon [00:12:42] So it is so low, right? The friction is almost unnoticeable, because it happens within seconds of you talking to the agent. So, of course, that drives a lot of the adoption rate. With 600 million voiceprints created with Nuance technology in the world, there’s clearly an appetite for it. We can see that customers want it. But the other thing that is really powerful is the high level of security that it provides to the organization, which allows them to move a lot of very high-risk operations somewhere else. So you can imagine, you know, you’re calling your bank, you want to transfer a million dollars, say you managed to find a million dollars, you want to transfer it from your account. You’re going to have to talk to someone. But what if you could just do it in the IVR? What if you could just say, hey, I’m calling today because I want to empty my bank account? Sure, let us do this right now, all in an automated system, just because, as you were speaking, we were able to not only understand your intent, which is, you know, what Nuance is really good at doing, understand your intent, make sure that we can trigger a transaction from it, but also collect your voice at the same time and do it right away, knowing that we’re doing it with the right person. So it’s not just reducing the friction; it’s also enabling a lot more self-service for high-risk transactions. And because you have that central voiceprint, you can reuse it across different channels. So you’re going in an app, for example; you want to make a high-risk transfer directly from an app on your phone. Doesn’t matter if you just changed your phone. Doesn’t matter if the SIM card was just changed. As soon as you speak, my voice confirms my identity. That’s enough for me to be able to allow the transaction.
So it also removes a lot of the friction that other services would put on a transaction, you know, services that will try to determine if it’s the same device, services that will try to determine if there was a SIM swap or anything suspicious of that nature. If you use voice everywhere, you’re making it easier, you’re allowing for more self-service, and you’re raising security and reducing friction. So, you know, everyone’s winning when you move to that kind of technology.
Cameron [00:14:39] Yeah, and I think this is all part of the broader conversation that we’ve really seen evolving across the space, which is, you know, this intersectionality of fraud, the user experience and growth, obviously, where you want to be up and to the right, which is lowest possible fraud, highest possible growth. And I think, you know, lowering friction while keeping that level of assurance is so, so critical to that. One of the areas we wanted to touch on here is accessibility. I think so many modalities that are more friction-filled are both good at, you know, frustrating good customers, as well as potentially not allowing people to even complete the process, right? And we saw this in the news with a company whose contract with the IRS was recently kind of blown up because of these issues. In your mind, are voice biometrics among the more accessible forms of biometric authentication in terms of, you know, the number of people who can be brought through a process?
Simon [00:15:44] So yeah, the main advantage is that it’s not device-dependent. You don’t require any particular hardware to execute on voice biometrics, and that’s what’s really powerful about it. You don’t need a fingerprint reader; you don’t need a camera. You know, even with very modern phones that use facial recognition, for example, that phone eventually will be obsolete; that camera system will be obsolete. So making sure that someone has access to the right hardware, to secure hardware, is extremely challenging, especially when you try to protect your most vulnerable populations, right? You talk about senior citizens that are targeted by fraudsters, unfortunately, quite heavily. They don’t have a fingerprint reader, they don’t have a camera, they might not even have a PC, right? So you really want to look for technology that can work on an old wireline, you know, copper-line phone, and that’s what voice biometrics allows you to do. So really, yes, accessibility is a huge, huge advantage of using it, because then even the less tech-savvy customers, the ones that don’t have access to the hardware, the ones that are remote and can’t come in person and have their photo taken in front of an agent, they can still leverage biometric technology, they can still be protected, they can still see their accounts and assets protected, and also get access to, you know, talk about government services, for example, right? They can get access to services that would normally require an in-person authentication or identification. Now you can do it remotely, as long as your voiceprint was tied to your identity. And I think, from an accessibility perspective, this facilitates the adoption of voice biometrics technology in particular; it just makes it so much easier for everyone.
Cameron [00:17:25] Yeah, I couldn’t agree more. Pivoting to, you know, the other side of the equation, which is the bad actors, the threat actors that are looking to cause harm to these platforms and make life, you know, generally more miserable for everyone. Voice and deepfakes, I think, is an area of intense fascination for me personally. I take it that you have developed your model with specific capabilities in mind to kind of identify and flag the use of deepfake technology as a threat to the integrity of the voice biometric.
Simon [00:18:02] Yes. Two things here, right? That technology we hear about a lot, and, you know, some even consumer-facing technology is quite impressive to the human ear. It’s good. That said, it’s not that easy to synthesize a voice, right? If you do it with consumer-facing tech, it’s going to take you 30 minutes to read through a text. If you try to do it by inferring someone’s voice, you know, based on speakers of different languages, nationalities, ages, genders, it’ll get you a voice with very little audio, but it might not be as convincing. The truth is, none of this technology is, as of today, good enough to break through our biometrics engine. It’s just not there yet, and our biometrics engines are updated quarterly, so you can imagine we’re always working on it. Now, you’re right to point out that we also have algorithms that look specifically for synthetic speech, for deepfakes, for, you know, devices that could try to mask your voice or alter your voice so you’re not matching the watchlist on which your voice has been put, and that runs in parallel to everything else. It’s part of the same transaction; it’s part of the same process of risk assessment on that particular interaction. So really, even though fraudsters might be tempted to use it, and we do see articles mentioning synthetic speech, the truth is, every article that covers that has very little to substantiate the claim that synthetic speech was used, and most likely it is different technology, not synthetic speech. With 600 million voiceprints created in the world, we haven’t had a single report of a fraudster successfully breaking into our biometric system with synthetic speech, so it gives you an idea of, you know, how far the technology is from being a real threat.
That said, and we have seen it, you know, two weeks ago with the situation in Ukraine, where a deepfake video of President Zelensky was created with a fake voice, which was, you know, to the human ear, not very good. You could hear it wasn’t the right person. But we still see that there are efforts by nefarious actors to start synthesizing voices and synthesizing people. So we need to be ready for it. I just don’t see it being a threat for the next two, three, even four years for the general public. I don’t think we’ll see fraudsters try to call you and synthesize your voice in the next couple of years, just because the technology is there to prevent them from succeeding at using a synthesized voice. Doesn’t mean we shouldn’t be ready, though, and that’s why we have 300 people working in research and development, making sure that every new piece of tech, every new white paper, every new technology that we should consider is considered, and that we get ready for it, ready to detect if such a tech is used by fraudsters.
Cameron [00:20:48] Are there other threat vectors that you guys are seeing? Obviously, you know, I certainly don’t want you to be tipping off the bad actors who may or may not be listening to the podcast. You know, I’m not aware of a large fraudster contingent in our listener base, but if you’re out there, we’ll see you guys on the battlefield. But, you know, obviously you guys need to stay abreast of the latest and greatest. I think deepfakes are exciting in the sense that it’s a sensational technology that I think freaks a lot of people out. But are you seeing other methods or attempts at bypassing, whether it’s replay attacks or other methodologies for kind of spoofing someone’s voice?
Simon [00:21:27] So we see replay attacks being used, but they fail. That’s usually the first thing that a fraudster will use against an organization that just deployed voice biometrics. What we tend to see is fraudsters trying to look for other channels, you know, other means to execute their fraud, so they’re trying to work around the biometric system. The truth is, it gets progressively more and more difficult, you know, as an organization deploys it on the first, biggest channel, and then they’ll expand and expand and expand. So what we see is a lot of attempts at corrupting and bribing agents, you know, trying to get them in, pay them for their collaboration, hoping that they themselves can work around the biometric system or the security systems. But you can imagine how complex that gets. And most of the time, in the vast majority of deployments, what we start seeing is that those fraudsters that we identified just go away. They move to the competition, because it’s a business for a fraudster, right? They still want to make money. They’re doing this 9:00 to 5:00. They’re doing their cost-benefit analysis. If they need to start recruiting a bunch of mules and a bunch of collaborators that will execute on their behalf and follow a script, well, not only will we pick up on scripts being used, or coaching, but you’re also taking a risk as a fraudster: you’re taking more people, you know, into your secret circle, you’re sharing information with a lot more people, you’re exposing yourself. So what we tend to see is fraudsters just finding another organization that isn’t doing this yet. You know, maybe they’re still relying on just KBA, or KBA paired with an SMS one-time password, so they know they can break through that. And that’s usually what we see happening. Fraudsters are not ready yet to change significantly how they operate.
They just change where they operate when they’re faced with biometrics.
Cameron [00:23:21] I love it. So what’s next? You know, I think you are obviously at the forefront of what buyers in the space are looking for. You know, do we expect to continue to see this focus on, you know, growth and passing good customers through as much as fraud? What technologies do you expect to continue to be on folks’ radars? And in general, where do we see this broader space headed, in your opinion?
Simon [00:23:49] I think the next biggest step will be, you know, a radical change in national identities. So, how do we make sure that a person coming to us for the first time, asking for an account to be opened remotely, how can we check that voice against a voice that would be in a central repository, the official voice of that individual? And I think that transformation of national identities is really what we’re looking for, and looking out for, in the next couple of years, because ultimately that also enables us, you know, as citizens, to limit the information we will be sharing with a lot of organizations. And I think that attribute sharing, tied to a biometric digital identity, will be the next thing that we see. So for example, just to illustrate what I mean here: when you go to a bar and you’re asked to provide your driver’s license, you’re doing so because a person needs to validate your age. Basically, they just need to know that you’re allowed to get in the bar. But when you provide your driver’s license, you’re providing your photo, your address, you’re providing your date of birth, you’re providing your ID. All of this does not need to be shared. If I was able to open a digital identity wallet with my voice, which would certify that this digital identity wallet belongs to me, all I have to show is a thumbs up that says I can get in. You don’t need to know my age; you don’t need to know my date of birth; you don’t need to know who I am. All you need to know is that the digital identity wallet I’m opening belongs to me, because I provided biometric information to open it, and I’m sharing only that one particular attribute. And we see it for everything, right? You’re making a purchase online for things that are age-restricted, you’re trying to access content: all of this can be tied to attribute sharing, you know, locked behind your biometric print.
I think this is going to be transformational in how we handle our own privacy and how we limit what we share with a lot of organizations. You know, so many organizations out there are still collecting way too much information for purposes that make no sense. I think that’s what we’re looking at. We’re looking at a huge transformation, and COVID has really accelerated how governments are looking at those projects of, you know, national digital identities. And we’re seeing more and more governments, you know, at either the federal or state level, looking at adding biometrics to enrich that information. And I think in the next couple of years, you know, it’ll transform how we interact with organizations and governments.
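The bar example can be made concrete with a small sketch of attribute sharing: the verifier receives only a yes/no answer, never the underlying date of birth. Everything here, the wallet structure, field names, and the `prove_of_age` function, is an assumption for illustration, not any real wallet standard, and `wallet_unlocked` stands in for the biometric check that proves the wallet belongs to the person presenting it.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WalletIdentity:
    """Hypothetical contents of a digital identity wallet."""
    name: str
    date_of_birth: date
    address: str

def prove_of_age(identity: WalletIdentity, wallet_unlocked: bool,
                 minimum_age: int = 18,
                 today: Optional[date] = None) -> dict:
    """Release a single boolean attribute ('old enough') and nothing else."""
    if not wallet_unlocked:  # e.g. the voice match failed
        raise PermissionError("wallet was not unlocked by its owner")
    today = today or date.today()
    age = today.year - identity.date_of_birth.year - (
        (today.month, today.day)
        < (identity.date_of_birth.month, identity.date_of_birth.day))
    # Name, address, and exact date of birth never leave the wallet.
    return {"over_minimum_age": age >= minimum_age}
```

In a production system the boolean would be a cryptographically signed claim rather than a bare value, but the privacy principle, sharing one attribute instead of the whole document, is the same.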
Cameron [00:26:19] Couldn’t agree more. I think you’re spot on. And look, we’re at such an interesting inflection point, and I’m hopeful that we can kind of seize this moment and really make some hay of it, because, you know, we can’t carry on like we are now. I think the negative externalities that are being created from our kind of fragmented and broken, for lack of a better word, digital identity systems as they stand now are just not going to carry us through to the next decade successfully. So I’m really hopeful that this is going to be, you know, looked back on. Obviously, you know, nothing good has come out of COVID in the sense of how destructive and horrific it’s been for the globe in general, and the human toll it’s taken. But I think as a call to action around, you know, digital identity, and really pushing past where we are now and realizing where we need to be, it has been quite illuminating. You know, removing the option for that in-person channel, I think, really opened a lot of eyes in the C-suite. Folks who previously we couldn’t maybe get a conversation going with about digital identity are now fully bought in. And I’m hopeful that that is going to lead to some major breakthroughs.
Simon [00:27:38] Oh, couldn’t agree more. And definitely, we’re starting to see it. We see those C-suite, you know, individuals launch the conversation, you know, ask questions about it; they’re not passive in that regard. And I think that’s a huge shift in how identity is considered in a lot of organizations.
Cameron [00:27:59] Before we wrap here, for listeners who are intrigued by the capabilities of the Nuance platform and want to get in touch with you or your team, what’s the best place for them to go?
Simon [00:28:10] So the best place, you know, is nuance.com/fraud; you’ll have tons of information, the latest ebooks, white papers, even some images of what the platform looks like. But if you want to talk about fraud, you can reach out to me on LinkedIn, just look me up, Simon Marchand, and I’m always happy to chat and get on a phone call if needed. So, yeah, feel free to reach out.
Cameron [00:28:31] Amazing. Simon, thank you so much. To our listeners: you know, if you reach out to Simon, please be nice; don’t reflect poorly on me. And thank you again so much for your time.