Copyright © 2023 Liminal Strategy Inc. All rights reserved
Join Yoti’s CEO and Chief Policy and Regulatory Officer as they unveil the remarkable journey of Yoti, transforming from a pioneering force in reusable digital identity to becoming a prominent frontrunner in the rapidly expanding realm of biometric age estimation. Discover the intricacies of this cutting-edge technology and its ever-increasing significance, especially with the emergence of new regulations governing age-appropriate design and online access to restricted goods and content. Explore how one company sets new standards for a secure, age-conscious digital landscape.
Cameron D'Ambrosi, Senior Principal at Liminal
Cameron D’Ambrosi [00:00:00] Welcome to State of Identity. I’m your host, Cameron D’Ambrosi. Joining me this week are Robin Tombs, co-founder and CEO, and Julie Dawson, Chief Policy and Regulatory Officer at Yoti. Robin and Julie, welcome back to State of Identity.
Robin Tombs [00:00:14] Nice to see you, Cameron.
Cameron D’Ambrosi [00:00:17] Well, so glad to have you both here. Yoti is featured on Liminal’s 2023 Companies to Watch list, so congratulations there! But this is a journey that has been going on for a number of years now. I’ll let you correct me on the exact timeline, but you’ve had the pleasure of joining us on State of Identity previously. But I know there’s probably a decent whack of our audience that maybe hasn’t heard those episodes. And I think the depth and breadth of what you’re doing with the Yoti platform really sits at the intersection of a number of trends in the digital identity space. So maybe as an entry point to the conversation, would you mind taking us through kind of the founding of Yoti and what that problem statement you identified out in the market was that needed solving and the birth of Yoti as a platform?
Robin Tombs [00:01:08] Sure, I’ll do that, Cameron. So I guess there are two elements to this. Back as long ago as 2002, my co-founder, Noel Hayden, and I had an online bingo business, and it wasn’t long after people started registering for that business that we got the first fraudsters, who came in with somebody’s name, date of birth and address and somebody else’s credit card. We would do a KYC check with a credit reference agency, which would say it’s a match. We would validate the person, and we then found that they were actually a fraudster who just knew somebody else’s date of birth, address and credit card. So ever since 2002, Noel and I have thought that identity online is broken and needs to be fixed at some point. But it’s probably a hard thing to fix, and for many years we were busy doing online bingo, so we thought somebody else would fix it. Then we got to 2014 and the second spark came to us. We went to a Spartan race in America, and lots of people had registered online, including myself. I got into a big queue and had to prove who I was with my I.D., then I had to go to another place to drop my bag, and I had to queue there too. The whole thing was very much, hey, you’ve got to keep proving your I.D., because we need to know you definitely signed your waiver, did your bag drop, and you are who you claimed to be on the ticket. And I just thought, hang on, this is a massive pain, and we’ve got to do online I.D. better than this in several ways. People are going to be queuing for ages. How do you solve this online? So that’s the backdrop to why we decided to set up Yoti back in 2014. Rather fondly and naively, we thought in five years we’d have this reusable digital identity sorted, everybody would be getting one, the world would be a much better place and fraudsters would have a big, big problem. It’s a much longer, bigger challenge to do identity really well.
But we are seeing promising signs with reusable identity. So we’re an unfinished project, but it’s going well.
Cameron D’Ambrosi [00:03:30] Amazing.
Julie Dawson [00:03:34] Robin has described the genesis of the reusable identity. A lot of people will be more familiar with transactional identity verification, which obviously we’ve also deployed in these intervening years. And then I think probably a big area that you’ve picked up on, including in your recent survey, is age assurance. Less well known is that we also work with e-signatures, also augmented with identity, and then there’s our wider work with facial authentication and liveness, those sorts of areas. So as you said at the beginning, a wide platform of solutions in the identity and age space.
Cameron D’Ambrosi [00:04:11] And I think that’s what’s been fascinating for me as I’ve watched the evolution of the company. To those outside of the digital identity space, they might see these capabilities and think, boy, these guys are a bit all over the map. But when you look at the business through the lens of, we have this vacuum of usable and trustworthy attributes that can be fed into the digital ecosystem regardless of use case, I think it makes a lot more sense. And for the layperson, consumer education, I think, remains one of the biggest barriers we have in the digital identity space. They might look at this product stack and say, how do all of these things fit together? But I think trust and data privacy are really at the foundation there. From that perspective, how have you thought about building compliance, data privacy, trust, and user centricity into the platform at a foundational level? Because I know maybe the biggest barrier to consumer adoption is this, I don’t want to say nebulous, but kind of vague, concern around where is my data going and will I have control of it.
Robin Tombs [00:05:35] For me, I think right at the beginning, Cameron, we had a set of seven principles, and they were effectively principles that we would always act in the interests of our users, of individuals, that we would attempt to minimize data, and several other things. That actually meant that when we came to build a reusable digital identity, and to prove ID and to prove age, we had to think about what is actually the best way to do this from a privacy point of view, with data minimization. We don’t want to be surveilling people, and how do we do it to make sure that people really do trust that this precious digital identity is within their control and that we’re effectively not selling them and their data down the river? And that really helped, because we obviously initially focused on the reusable digital identity, but even when we came to build the world’s leading age estimation, we were thinking immediately about how to do this in a privacy-preserving way. Julie, you can probably explain exactly what that meant for that product.
Julie Dawson [00:06:47] Absolutely. So, I mean, we’ve really always looked at this at several levels. How does the everyday person, how does civil society, how do the businesses we’re working with, and also governments, trust what we’re doing? So we’ve looked at that on several different layers: what are all the different accreditations around data responsibility and cybersecurity that an organization like ours needs to look at, be that SOC 2 in the security world, or HIPAA, for example, in health, through to all the different rules around data protection around the world and, more recently, on age-appropriate design. We’ve also spent a lot of time looking at the ethical framework of our business. So we’re a founding UK B Corporation, which looks at that whole triple bottom line. We’ve got an extensive governance framework, and we’ve given ourselves a hard task in that we have an internal group of people looking at ethics, and we also have an external audit with terms of reference that are published openly. And that is something that really makes us very thoughtful regarding the human rights issues, the consumer rights issues, the last-mile accessibility of the tech and all the online harms. We could spend the whole hour just on that, and I think we’ve got to move on to some other areas. But you’re absolutely right, Cameron: that trust element has, from the get-go, been something our team has looked at, and we try, before we launch products and on an ongoing basis, to look through those lenses. As we’ve heard in recent months with the AI developments, that is something that sadly too many people look at afterwards, and we all have to look at it continually.
Cameron D’Ambrosi [00:08:34] Congratulations on taking that approach. I think transparency in many ways is kind of the only path forward for proving out some of these notions of privacy because, you know, you can’t prove a negative, to some degree, right? So the best way that you can evidence to your customers, to your user base, that you’re respecting their privacy is to give them as much information as possible on which to base those decisions. And I think about age estimation in particular because of that touch point with more young users than other solutions. Certainly, if you’re doing age gating for something like an alcohol purchase, you’re going to have a decent number of children probably trying to get past the system to some degree, but that’s not the primary user base. Whereas with age estimation, where a user is allowed to access a platform but the decisions are about what they can see and what level of data privacy they have, obviously you’ll have many more child users coming through that age assurance process. So I guess this is a roundabout way of saying age estimation is top of mind for every tech platform globally right now. We’ve seen a slew of new regulations from the EU to California to U.S. states like Utah and Arkansas, and I know the U.K. has some initiatives as well. How did you come about recognizing this market opportunity? And what was the impetus, when you realized you had a good shot at leveraging the biometrics capabilities you had developed, to put them towards an age assurance solution?
Julie Dawson [00:10:20] Okay. So I’d say it’s really good to wind back several years. We’ve already described that Yoti had its app, and now over 13 million people have set up a reusable digital identity with it. People from over 190 countries worldwide can onboard in a three-to-five-minute process, and at the point of onboarding or subsequently, they can opt out of certain data elements. What we’ve done is think about how that data could be useful. We could already enable people with a document, once set up in the app, to share on a selective basis just an “over 18” or just an “under 18.” So, for instance, in the UK with the NSPCC’s Childline, a child could share just an “under 18” to apply to have a sexting or nude image removed from online. And in the 18-plus area, we could enable somebody to share just that they are over 18. But we thought, what about those people who either don’t own a document, can’t access it, honestly feel a bit uncomfortable using it in that area, or just want something lighter at that particular moment in time? Initially it was that inclusion angle: we thought, how could we use this data? We had a very unusual ground truth, very diverse across many countries worldwide, of faces with just month and year of birth. And through that we were able to build this age estimation algorithm. We worked with civil society ahead of time, the likes of CDT and the World Privacy Forum in the US. We held workshops on this. We worked with the ICO and others, and we got very good feedback that there was something very useful in an approach that was very privacy preserving: you would only be sharing a data-minimized element. So the algorithm is trained with lots of images.
When it sees a new image, it does a pixel-level analysis of that new image, detects whether it is a live face, and then gives the result back to the platform on a software-as-a-service basis. So the tech does the analysis, gives the result, and instantly deletes the image. Crucially, there’s no recognition, because it hasn’t been trained with any names or addresses. It doesn’t know that that’s a Cameron or that’s a Robin or a Julie. It just knows this is a new face, it’s a live face, and asks: what can I see in those pixels? In the pixels, it’s just patterns of ones and zeros. It works out that, you know, wrinkles on the forehead mean that’s probably not a ten-year-old; wrinkles on the forehead are going to be a different sort of age demographic. And it has gotten very, very accurate over time. But crucially, back to your point on transparency, what we’ve done is publish exactly how accurate it is, and you can see from the white papers we’ve published over the last several years how accurate it has become. We have it independently assessed. We have the fact that we delete the image instantly also independently assessed, and the bias independently assessed, and that gives great comfort to the platforms we’re working with around the world in many different sectors: social media, gaming, gambling, adult content, dating, even retail and e-commerce. Many sectors are using this, and I think the thing that gives them comfort is the fact that we’ve done all of this prior work on the privacy-preserving nature. It’s not recognizing anyone. So yes, it is a facial technology, but it’s not facial recognition, either one-to-one or one-to-many, because there’s no unique recognition or authentication, and it doesn’t class as a biometric under, for instance, the GDPR. It’s literally just doing detection and analysis.
So if anybody wanted to consider that more, we produced with the Future of Privacy Forum an infographic on this, which gives a really good visual explanation, and perhaps we might include that in the show materials, Cameron, in case people want to look at it more closely.
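The flow Julie describes, detect a live face, estimate age from the pixels, return only the minimal result, and delete the image immediately, can be sketched in a few lines. This is purely an illustrative sketch, not Yoti’s actual API: every name here (`is_live_face`, `estimate_age`, `check_age`) is hypothetical, and the liveness and estimation bodies are trivial stand-ins for trained models.

```python
def is_live_face(image_bytes: bytes) -> bool:
    # Stand-in for a real liveness model: here we only check the capture
    # is non-empty. A production system would run a trained
    # anti-spoofing model on the pixels.
    return len(image_bytes) > 0

def estimate_age(image_bytes: bytes) -> float:
    # Stand-in for the pixel-level estimator described above; a real
    # system would run a neural network trained on faces labeled with
    # month and year of birth. Fixed value for illustration only.
    return 25.0

def check_age(image_bytes: bytes, threshold: int) -> dict:
    """Analyze in memory and return only the minimal over/under result."""
    try:
        if not is_live_face(image_bytes):
            return {"result": "rejected", "reason": "no live face detected"}
        age = estimate_age(image_bytes)
        # Data minimization: the caller gets a pass/fail signal only,
        # never the face, a template, or an identity.
        return {"result": "pass" if age >= threshold else "fail"}
    finally:
        # The image is discarded as soon as the check completes;
        # nothing is written to disk or retained.
        del image_bytes
```

The point of the shape is that the relying platform only ever receives the over/under decision, which is what makes the approach data-minimizing rather than identifying.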
Robin Tombs [00:14:25] And Cameron, just one other point, a classic start-up issue. When we set up Yoti, we always thought that, to protect people, we would want to use some of the data, with consent. People put in the faces from their passports and their driving licenses, and we would effectively be able to develop good liveness technology and good one-to-one face matching, and we have done that. But what we didn’t realize, and it’s a classic kind of start-up pivot, is that our R&D team said, hey, you’re getting lots and lots of ground truth of month and year of birth; you’re getting many faces of people from around the world. We should be able to train up really good age estimation. That was back in 2018, and I said to them, go for it and let’s see whether it works. And of course it worked really, really well, and now a whole industry is forming around privacy-preserving facial age estimation.
Cameron D’Ambrosi [00:15:35] Certainly congratulations on that very prescient call to let that team run with the hunch that they had. In looking at the future state of biometric age estimation, I think this educational piece remains a critical component, both for users as well as for regulators. We’ve certainly seen the rollout of these new age mandates in the US, and legislators, you know, they’re not technological experts. In many cases, they are unaware of the technologies that can and should be used to solve these challenges. And I think we’re about to see some really interesting rulemaking processes go through in California, Utah, and Arkansas regarding exactly what technologies and standards will be acceptable to those state regulators. And then on the flip side, you have the education of consumers to convince them that this technology is trustworthy, which, if I can take a personal aside here, I’ve always found somewhat humorous. This is similar to the TikTok regulatory kerfuffle here in the US about the potential harvesting and capture of facial biometrics by China, for example. I’ve always thought it’s somewhat humorous that people are concerned about their facial biometrics being stolen on a platform that’s explicitly for publishing videos and images of your face to anyone on the Internet, which means anybody, myself included as a private actor, can just go to your TikTok profile, download a video and capture your facial biometrics there. And yet the notion of asking for a facial scan to do a biometric age estimation is a privacy concern, when once you get on the platform, you’re just going to use it to publish pictures of your face broadly. Maybe some disconnect there. But, you know, humans are maybe not known for our rationality as a species.
How are you handling this critical consumer education piece around, you know, specifically what these terms mean? Even the notion that you’re doing estimation but not templating, obviously, as someone with a decent amount of expertise in the space, all makes immediate sense to me. But trying to explain that to my grandmother might be a bit more of a challenge, you know? What techniques and strategies have you used to drive this rising tide of consumer education that I think will be critical to getting people comfortable with these systems?
Julie Dawson [00:18:12] So there was a really good piece of work that we were fortunate to do quite early on, Cameron. I don’t know if you’ve heard about the Information Commissioner’s Office sandbox in the U.K., which was probably back around 2021 now. We were fortunate to be part of that sandbox, which was looking at what technologies could help with the UK Age Appropriate Design Code ahead of it coming into force. And we did two things within that. One, we worked with them to extend our facial age estimation, which at the time worked just from 13 upwards, down to six upwards. And the second big part was exactly this, the education piece. It was looking at how you could explain this to a child as well as to a parent, to an educator, to someone in civil society, and also to regulators. And a humorous by-product of this, Cameron, was that the materials we developed for the 8-to-10-year-olds we actually found were the most useful for everybody, which is probably common sense to anyone who knows the reading age of an average broadsheet newspaper. But it was really helpful. And also, for example, with the photographs, we worked to look at all of the text to see how it was applicable for age-appropriate design, because the app is available from 13 and upwards. So for your 13-to-17-year-olds, we looked at how the language in that reads. And we were one of the first companies to go through a voluntary audit on our app with the UK Information Commissioner’s Office. So we’ve been working on this whole consumer education element for a long time. We worked with STEM (science, technology, engineering and maths) learning resources, the likes of Code Club and CoderDojo, and we worked with educational bodies, including a great group down in South Africa called Be In Touch that was doing work in schools.
And through all of this, we developed a set of materials in really plain language, plain English, but also frequently translated into other languages. Then subsequently, when we work with our clients or customers, they can adapt those materials as well. It’s not a finished symphony, you know, we keep on working at it, but we did great things like working with kids to see how kids would peer-explain this to kids of their own age. And all of those materials are ones we keep on investing in: simple videos, 90 seconds to a few minutes, that put these terms over in clear steps. And we’ve also done round tables, about four so far, with another one upcoming, where we go through all of the nuts and bolts in our white papers. We’ve checked that the language there is also straightforward and isn’t trying to blind people with science, we try to explain all of the elements underneath it straightforwardly, and we invite scrutiny through these roundtables to keep on improving them.
Robin Tombs [00:21:12] I think the other thing, Cameron, is that we’ve had to do a lot of education of people who are regulators, legislators, people who are in the privacy area. And that’s been a long process, because many of them have very good, sensible concerns and lots of questions. Some of them are really quite skeptical that science could allow a machine to estimate age better than a human, and they have to get their heads around that, even when it’s all independently certified and tested. But the other side of the coin is that I’ve been to a lot of supermarkets over those three years, and I look at the supermarkets who test our age estimation. They obviously have trustworthy people in the self-checkout areas, and the people who try the technology may initially wonder, is this a sensible thing to do at the touch of a button? But when you have somebody there who can help you through it and explain that the image is obviously deleted, even if that’s also on the screen, it’s very quick for people to go, okay, I’ll give it a go. You don’t have to do anything; you just basically look into the camera and it gives you a green light. And you see people of literally the age of 25, 46, even older, quite happy to do it because it’s a faster experience. And online, where people are given a choice of three or four methods and age estimation is one of them, between 65 and 90 percent of people would choose it, because it’s just so much easier. They don’t have to do anything; they don’t have to go and get their document. So although there’s an education piece to do for some, for many other people on the Internet it’s, okay, I just need to look into the camera and the image is deleted. That’s the easiest way for me to get through this and prove my age.
Julie Dawson [00:23:19] I think another thing we’ve been very fortunate to do is work with the German regulators. It’s a little-known fact out of Europe that the Germans have been looking at age assurance for about a decade. Over 100 approaches have been reviewed by the German body, the KJM, and a parallel body to it, the FSM. So we were very fortunate to go through a very in-depth process with them, where they looked at all the approaches and reviewed them, and we received a seal of approval. And I think quite a lot of other regulators, in the UK, Israel, France and elsewhere, have got their heads around this, have looked at those materials and understood how facial age estimation works. We hope, and perhaps your podcast might help with this, Cameron, that some of the states in the US might also have that curiosity, because it’s really key for inclusion. It’s key because we see that that’s what the public is choosing: as Robin said with those stats, when they’re given the choice, it’s low friction and the image is instantly deleted. And yes, it is important that that is reflected in what regulators require and that more companies look for that. The way the data set has been built has to be in accordance with GDPR and local privacy regulations, and the image should be instantly deleted. So we are participating in the standards schemes developing on this, the IEEE, the ISO, we have these elements audited, and we are encouraging regulators and civil society to ask for this and to look under the hood as other techniques come along, to set a benchmark of quality.
Cameron D’Ambrosi [00:24:56] That’s fantastic. Well, I certainly plan on loading this podcast onto a bunch of USB thumb drives and just sprinkling those around the Utah State House, and we’ll see who picks one up and puts it in their laptop. You know, I think Utah is certainly going to be a very interesting bellwether as far as the United States regulatory regime. Thankfully, the bill as written and the regulator they have assigned at the state level have signaled that they’re open to dialog with industry and with solutions providers. So I’m hopeful that this will give us, as an identity industry, the opportunity to really share as much as possible with those regulators and inform them about the state of the art, what is feasible and what can be done in a privacy-preserving way, so that we don’t end up with a mandate that you need to scan your driver’s license to open an Instagram account, because I don’t think that’s going to suit the needs of anybody. I don’t think consumers will like it, I don’t think the tech platforms will like it, and ultimately I think it will be a disappointment for regulators as well. And a very real risk, if you set the friction too high, is that there are ways to bypass this: you’ll just have 12-year-olds all downloading VPNs to make their IP addresses look like they’re not in Utah, and I think we’ll all be worse off for that. Right? I mean, there is certainly tremendous value in these types of mandates, and I do think children need to be protected from online harms. But we really need to figure out the best way to implement these solutions so that they actually get used and aren’t just, you know, a metal detector with no wall around it, so that if you want to bypass it, you just walk on past.
Julie Dawson [00:26:42] Absolutely. I would agree with that.
Cameron D’Ambrosi [00:26:45] So from that perspective, I think standards bodies in general have maybe not yet jumped into this game in terms of evaluating, scoring, and attesting to the quality of age estimation technologies. Do you expect future opportunities from bodies like NIST to get some sort of independent evaluation and testing of the quality of your algorithm?
Julie Dawson [00:27:14] We think that’s highly likely in the coming period. NIST is a tremendous body with great resources, and they’re world-respected in this area. So let’s watch this space, but I think it’s definitely one of the areas to watch. And it would be a great service both to the age assurance industry and to all these different sectors we’ve talked about, adult, gaming, gambling, e-commerce, retail, to have that sort of independent benchmarking. So fingers crossed that will come in the next period.
Cameron D’Ambrosi [00:27:46] I think I’m right there with you. Independent standards are a really important piece of any ecosystem, and, you know, “trust but verify” is a phrase for a reason, so I’m hopeful we’ll see that push speed the deployment of these technologies. Because, again, apologies for preaching to the converted, but it’s a personal point of frustration to me that technologies which, in my view, are inherently more privacy preserving than the status quo are the ones having privacy concerns raised about them. I don’t think we need to let the perfect be the enemy of the good in many regards. From my perspective, I would much rather share an image of my face that will be instantly deleted than use my Social Security number, knowledge-based authentication, or even a scan of my driver’s license when it comes to proving my age. The example I always use when I’m trying to explain verifiable credentials and anonymized attestations in general is going to a bar: most people to this day still don’t realize that when you go into a bar or nightclub and hand the bouncer your ID, the device they now often use in New York, and I believe certainly in London, to authenticate your document and ensure it’s not counterfeit is also doing a full scan of the entire face of your identity document. I intended to prove to that bar that I’m over the legal age of 21 to enter that establishment and consume alcohol. What I’ve really given them is my full name, my complete date of birth, my address, my height, my weight, my eye color, whether or not I use corrective lenses. Many states include additional health information, for example that I have epilepsy and am banned from driving, and all sorts of other deeply sensitive personal information.
And so, you know, I think, again, this education piece is really going to be critical. Not counter to what we’ve talked about, the onus is certainly on platforms like Yoti to prove out the benefits of this technology. But I also think it means raising awareness of the status quo, of what is, in my opinion, the ridiculous sharing of personal attributes for use cases completely unrelated to them, and why nearly any solution using age estimation is far superior from a privacy perspective.
Julie Dawson [00:30:24] I think I would definitely agree with that. But I would counter and say the choice element is really important. One of the reasons that companies like ours have a range of services through our age portal that customers can integrate is to offer the general public a range of options. And yes, we’re seeing the vast majority selecting age estimation. That could be because it takes just about a second, it could be because they don’t want to get off the chair, it could be a range of reasons; maybe they just don’t feel comfortable doing something else for that use case. But we still think it’s really important that there is choice. So we have a range of about seven or so other methods, according to which part of the world a service is being offered in. Some global platforms might need to offer age assurance methods in over a hundred countries, and in some of those, a mobile phone check might work, in others a bank option. So we keep looking at and curating more methods for that fallback number-two or number-three method, and I think that is important. So yes, on the 80/20, absolutely, we think this is the way people are going at the moment. But we’re always keeping an eye on what new methods a chunk of consumers in certain geographies might want.
Cameron D’Ambrosi [00:31:43] I love that. Well, we are just about at time here, but before we wrap, I wanted to give you an opportunity for what I like to call the shameless plug. So for our listeners who are realizing that they are in dire need of, whether it’s access to the ID platform that you’re spinning up or age estimation, what’s the best place for them to learn more about the platform and get in touch with you and the team?
Robin Tombs [00:32:09] Yes. Yoti.com is the place to go, Y-O-T-I dot com. And you know anybody is more than welcome to email me at Robin dot Tombs at Yoti dot com. We do proving of I.D. We do proving of age. We let you eSign and we also do authentication. And we’re really keen to talk to anybody who needs to offer that to their customers.
Julie Dawson [00:32:36] And send any policy questions my way. That’s Julie dot Dawson at Yoti dot com. And a huge thanks, Cameron. We’ve been delighted with your support for the industry over this period, and hats off to you for your work in this area.
Cameron D’Ambrosi [00:32:52] Well, thank you so much. You know, it takes a village, as they say. So excited to play my small role in hopefully illuminating some of these key issues and challenges and hopefully winning some converts along the way.
Robin Tombs [00:33:07] It's a marathon, not a sprint. But there's no doubt the race is warming up. There are a lot of good people creating and innovating in this space, and the regulators are going to be spoiled for choice on how people can prove their ID, prove their age, and solve some of these problems, which have been really tricky on the Internet for the first 20 or 30 years of its life.
Cameron D’Ambrosi [00:33:32] Amazing. Thank you both for your time. Greatly appreciate it. And I look forward to catching up with you again soon.
Robin Tombs [00:33:37] Excellent. Thanks, Cameron!