As we step into 2024, the digital identity technology landscape is poised for further change, presenting unique challenges and opportunities for businesses across industries. Trends such as publicly traded identity companies shifting toward privatization, the market's reception of reusable identity networks, and strategic mergers are reshaping the industry. Additionally, pressing privacy legal issues, the rise of deepfake technology, and the evolution of fraud prevention strategies underscore the complexity of the environment. Expert insights from our team shed light on what lies ahead, revealing a mix of trends and predictions that we believe will strongly influence the digital identity landscape.
Let’s see what the experts predict:
At least two publicly traded identity companies will choose to go private. As market dynamics shift and identity companies struggle to pivot toward stronger unit economics, we will see a strong push from publicly traded identity companies to go private. Private equity investors are sitting on $2.6 trillion in dry powder, up 11% since December 2022. Many investors have become well-versed in the industry over the past several years, have targets in mind, and are willing to take on hairy deals.
We will not see a reusable identity network successfully gain consumer adoption. While most market participants believe reusable identity schemes will eventually succeed, companies that could be building these solutions have shifted their focus to 'revenue today' products and services like fraud prevention. We expect verifiable credential companies to continue partnering with identity-proofing vendors to enable reusability, but relying parties will still be too expensive to attract in 2024.
At least three of the top 10 digital identity companies will merge or be acquired to create integrated identity platforms. Thoma Bravo's buyout and merger of identity giants ForgeRock and Ping showed that customers want better end-to-end digital identity customer lifecycle management. In 2023, we saw an increase in strategic M&A activity as large incumbents acquired product capabilities and revenue through inorganic growth. Entering 2024, many of the top digital identity companies are struggling to recapture their 2020–2022 valuations, and those with capital to deploy will use inorganic growth to acquire revenue, customers, and market share.
By the end of 2024, Okta and Ping will significantly expand into identity verification and fraud prevention, particularly in offering account opening solutions. This shift is driven by recent security breaches highlighting the need for Identity and Access Management (IAM) vendors to authenticate user identities. Okta's acquisition of Spera Security is a clear move toward enhancing fraud prevention capabilities, signaling a broader trend among IAM vendors to bolster platform security and integrate more deeply into the account opening process.
Organizations struggling to demonstrate their unique data advantage will likely encounter significant market criticism. Despite generative AI showing considerable promise, 2024 is poised to witness escalating debates surrounding AI and fair practices, particularly after The New York Times’ lawsuit against OpenAI. This legal action represents more than just a fleeting response to technological progress; it is set to establish a new standard for data management and governance. In light of these developments, the undeniable importance of proprietary data will come to the forefront in 2024, highlighting the evolving economic dynamics around data and intellectual property.
Major social media platforms will initiate voluntary age verification. This proactive stance is in response to an anticipated Supreme Court decision and the aim to circumvent possible fines. 2023 saw state-level age regulations become the subject of legal controversies, underscoring the necessity for improved age verification techniques. Additionally, this initiative is part of an effort to foster trust among users as platforms strive to show their commitment to responsibly managing their online communities. This move towards voluntary age verification signifies a strategic effort by social media companies to handle the changing regulatory environment and meet public expectations adeptly.
The Supreme Court will strike a blow against state-level attempts to regulate child access to online platforms. State lawmakers in Arkansas, Texas, Utah, California, and Louisiana passed new restrictions on children's online access to social media sites and age-restricted content. However, federal lawsuits filed by tech industry lobbying group NetChoice have blocked the Arkansas and California regulations from taking effect, with appeals expected to reach the Supreme Court at the urging of the U.S. Solicitor General. We predict NetChoice's lawsuits will be successful, with the Supreme Court striking down restrictions on underage access to internet content on First and Fourth Amendment grounds. With state-level action restricted by this ruling, attention will shift to pending federal legislation, including the proposed Kids Online Safety Act (KOSA) and an updated Children's Online Privacy Protection Act (COPPA 2.0). While action to increase protections for children online seemingly enjoys bipartisan support, we do not foresee national legislation passing in 2024, in the run-up to what is expected to be a polarizing and contentious presidential election battle.
Social media platforms will struggle to stem deepfakes during the global 2024 election cycle. With the technical skills and cost required to create convincing fake still images, videos, and audio clips reaching new lows, the 2024 elections will see a flurry of deepfake content posted to social media websites. Unfortunately, government and industry efforts to stanch the tide of deepfake content will see limited success. Political gridlock at the federal level and First Amendment concerns will hamstring efforts to make political deepfake content illegal. At the industry level, we foresee limited impact from the Adobe-led Content Authenticity Initiative's efforts to root out deepfake content through technical means such as cryptographic asset hashing. With social media platforms such as Meta and X showing limited willingness to proactively censor content suspected of being maliciously manipulated, expect the primary policing of deepfakes to come from users themselves and journalists identifying and highlighting deepfakes already in circulation.
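The idea behind cryptographic asset hashing is that a publisher binds a digest of an asset's bytes to signed provenance metadata, so any subsequent edit breaks the binding. As a rough illustration only: the sketch below uses a keyed hash with a made-up shared key, whereas the real C2PA/Content Authenticity Initiative scheme signs manifests with X.509 certificates.

```python
import hashlib
import hmac

# Hypothetical publisher key for illustration -- real C2PA manifests are
# signed with X.509 certificates, not a shared secret.
SIGNING_KEY = b"publisher-demo-key"

def sign_asset(asset_bytes: bytes) -> str:
    """Hash the asset, then bind that hash to the publisher via a keyed digest."""
    asset_hash = hashlib.sha256(asset_bytes).hexdigest()
    return hmac.new(SIGNING_KEY, asset_hash.encode(), hashlib.sha256).hexdigest()

def verify_asset(asset_bytes: bytes, claimed_sig: str) -> bool:
    """Recompute the signature; any edit to the asset bytes invalidates it."""
    return hmac.compare_digest(sign_asset(asset_bytes), claimed_sig)

original = b"original image bytes"
signature = sign_asset(original)
```

Even a one-byte edit to the asset produces a different digest, which is why detection hinges on platforms actually checking provenance at upload time.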
Traditional fraud vendors will increasingly partner with larger identity vendors to expand their capabilities across the customer journey. The growing demand for comprehensive fraud solutions drives this strategic move. A significant 68% of buyers recognize the effectiveness of identity verification in preventing transaction fraud, with only a tiny minority of 6% finding it ineffective. This trend pushes fraud providers to seek inorganic growth opportunities, aiming to integrate their services within a broader platform. The goal is to address various use cases throughout the entire customer lifecycle. Additionally, the demand for multi-functional platforms is evident, as 43% of fraud solution buyers prefer vendors with a robust platform capable of handling multiple use cases across the customer lifecycle. This shift reflects a market evolution towards integrated, holistic fraud prevention strategies.
Most large US financial institutions will begin to adopt FRAML (Fraud + AML) approaches to better understand user profiles and associated risk levels over the next 12 months. Integrating fraud and AML teams into a cohesive FRAML solution has been a topic under consideration across regulated industries for several years and is now beginning to materialize. Today, 62% of financial service solution buyers have consolidated their fraud and AML departments or indicated they want to do so over the next two years.
Continuous monitoring will become table stakes for Business & Entity Verification. An overwhelming 88% of financial institution buyers note that stale data impacts their Business & Entity Verification solutions. Continuous monitoring helps ensure the accuracy of business data, which in turn helps companies reduce regulatory risk. Only about 40% of the top 150 Business & Entity Verification vendors offer continuous monitoring today; by 2025, more than 70% will.
Chrome's cookie deprecation will significantly increase demand for bot detection. Google Chrome plans to deprecate all third-party cookies by Q3 2024. As companies shift toward first-party ad strategies, they will face a heightened need to verify that website impressions are legitimate. Only about one-quarter of top bot detection solution providers currently include programmatic ad fraud detection. By the end of 2024, the number of vendors providing this capability will double.
Passkeys will remain a secondary authentication modality for big tech. Though big tech adoption of passkey technology has been a significant development in the trajectory of passwordless authentication, the modality remains a voluntary opt-in for end-users. With just 5% of surveyed authentication practitioners indicating that passkeys are their preferred path to passwordless over the next two years, big tech companies will continue to evaluate the market readiness for a full-scale passkey roll-out before moving passkeys into the primary authenticator mix.
Organizations deploying generative AI will grapple with privacy concerns. Generative AI technologies heighten the need for careful consideration of privacy and security. Because AI systems operate on expansive datasets and exhibit opaque decision-making processes, concerns about encoding sensitive information become more pronounced. The lack of transparency into data sources, data flow, and the mechanisms behind these AI systems adds to the uncertainty around potential privacy risks.
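One common mitigation for the concern above is scrubbing obvious personal data before a prompt ever reaches a third-party model. The sketch below is a deliberately minimal illustration: the regex patterns are simplified assumptions, and production systems typically rely on dedicated PII-detection services rather than hand-rolled expressions.

```python
import re

# Simplified PII patterns for illustration only -- real deployments use
# dedicated PII-detection tooling with far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com, 555-867-5309, SSN 123-45-6789."
safe_prompt = redact(prompt)
```

Redaction at the prompt boundary does not address the deeper issue of sensitive data already encoded in training sets, but it narrows what an organization newly exposes.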
We will see new consortiums and frameworks emerge to tackle deepfakes. Regulatory evolution and global collaboration are anticipated in response to the rise of deepfakes and generative AI. Initiatives like the Adobe-led Content Authenticity Initiative aim to foster international cooperation and guidelines for detecting and verifying deepfakes. The growing recognition of this threat points to an emerging trend of unified frameworks and guidelines.