
Behavioral Intelligence & Cybersecurity Trends: A 2025 Outlook with Andres Andreu of Constella.ai

  • Writer: Juan Allan
  • Apr 24
  • 13 min read

The digital realm is not merely a technical battlefield—it is a profound contest of philosophies, values, and ideologies. In cybersecurity, every attack and defense maneuver reflects deeper questions about trust, control, and the nature of power in a hyperconnected world.



As we move further into 2025, the struggle between attackers and defenders is less a finite game of tools and tactics, and more an ongoing dialectic about the stewardship of information and the rights of individuals. This hidden war compels us to confront not only how we secure our systems, but why we value privacy, autonomy, and truth in the first place.


In this interview with Andres Andreu of Constella.ai, we explore these philosophical underpinnings and their practical consequences for organizations striving to protect digital identities and critical assets in an era of relentless and ever-evolving threats.


1. What are the primary cybersecurity threats you anticipate in 2025, and how is Constella.ai positioning itself to address them?


Looking ahead to the end of 2025, I anticipate several primary cybersecurity threats will dominate the landscape. These threats are largely driven by the increased sophistication of the technology that attackers have access to. Compounding this is a continuously expanding digital attack surface. These top threats necessitate a proactive and intelligence-driven defense strategy.


Key threats I foresee in 2025 include:


  • Artificial Intelligence (AI) powered attacks - adversaries will increasingly leverage AI to craft more strategic and convincing attack campaigns. These encompass phishing, the generation of synthetic disinformation (e.g. deepfake media for sophisticated social engineering), and the decentralized automation of the discovery and exploitation of vulnerabilities at speed and scale.

  • Persistent and evolving ransomware - ransomware will remain a major threat, with attackers employing more targeted approaches. Moreover, they will demand higher ransoms, and continue to utilize double or triple extortion tactics that involve not just encryption but also data exfiltration and harassment.

  • Escalating identity-based attacks - as digital identities become more central to accessing resources, we see a surge in attacks focused on compromising credentials through breaches, phishing, and malware (e.g. infostealers). These all lead to account takeovers and/or unauthorized access. The creation and use of synthetic identities for fraudulent purposes will also be a significant concern.

  • Supply chain vulnerabilities - attackers will continue to target weaker links in the software and service supply chain to impact multiple organizations simultaneously. Compounding this is the proliferation of low/no code solutions. Code is being generated without intimate knowledge of what it does or what libraries it pulls in.

  • Cloud security challenges - misconfigurations, Identity and Access Management (IAM) errors, and vulnerabilities in cloud-native applications will remain significant ingress pathways for attackers as cloud adoption deepens and widens.

  • Geopolitical-driven cyber operations - nation-state actors and their proxies will likely increase cyber espionage, sabotage, and disinformation campaigns tied to global political events and tensions.
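Credential-compromise checks of the kind described in the identity-based attacks bullet are often implemented with a k-anonymity pattern, so the full credential hash never leaves the client. The sketch below illustrates the idea in Python; the local breach index and its single entry are hypothetical stand-ins, not any vendor's data or API.

```python
import hashlib

# Hypothetical local sample of breach-corpus SHA-1 suffixes, keyed by a 5-char
# hex prefix. Real breach-lookup services expose the same shape over an API,
# so only the prefix is ever transmitted (k-anonymity).
BREACH_INDEX = {
    "5BAA6": {"1E4C9B93F3F0682250B6CF8331B7EE68FD8"},  # sha1("password")
}

def is_exposed(password: str) -> bool:
    """Check a password against the corpus: the first 5 hex chars of the
    SHA-1 digest select the bucket, the remainder is matched locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    return suffix in BREACH_INDEX.get(prefix, set())
```

In a client-server deployment, the client would fetch the whole bucket for the prefix and do the suffix match itself, which is why the pattern preserves privacy.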


Constella Intelligence (Constella) is uniquely positioned to address several of these critical threats for its customers, in particular those revolving around digital identity risk, which is increasingly a direct factor in many of the attacks mentioned. Constella's core strength lies in its ability to generate risk intelligence from its massive data lake. The data underneath that intelligence consists of over 230 billion curated identity attributes and compromised records from across the surface, deep, and dark web. This extensive intelligence forms the foundation of Constella’s customer-facing offerings.


Constella directly addresses some of these anticipated threats by:


  • Combating identity-based attacks - Constella's platform provides deep visibility into compromised credentials and identity information circulating on the dark web and other illicit sources. By offering real-time monitoring and alerts for exposed identities, our offerings empower organizations to detect potential account takeovers, credential stuffing attacks, and the use of stolen data for fraud, directly mitigating the escalating threat of identity-based attacks. Our strong focus on Identity Risk Intelligence allows for a more dynamic assessment of the risk associated with specific digital identities. Ultimately, Identity Risk Intelligence is what is necessary to empower a proactive stance and super-charge elements like Continuous Threat Exposure Management (CTEM) programs.

  • Countering AI-powered social engineering and fraud - with the rise of AI-generated deepfakes and sophisticated phishing, knowing when credentials or identity information has been compromised is vital. Constella's monitoring helps identify if the underlying data used for such attacks (e.g. leaked email addresses or phone numbers) has been exposed, providing an early warning signal.

  • Strengthening digital risk protection - Constella's offerings provide comprehensive digital risk protection by monitoring for things like fraudulent domains and executive/employee exposure online. This is crucial in an era where attackers leverage external digital footprints to facilitate targeted attacks, including those linked to session replays, ransomware, and espionage.

  • Providing actionable threat intelligence - by transforming raw breach data into actionable intelligence through our Identity Risk Intelligence approach, Constella enables security teams to move beyond simply knowing that some data has been exposed. Our data sets and offerings help connect dots, understand the context of some exposure, and prioritize focal efforts, which is essential for effective defense against the complex threats we are seeing in 2025.


In essence, while the threat landscape in 2025 is diverse, Constella has a strong focus on being the authoritative source for Identity Risk Intelligence, compromised identity, and external digital risk data. This positions the company to be a critical component in helping organizations proactively defend against the growing wave of attacks that leverage compromised identity data and exploit an organization's digital footprint.


2. Based on your experience, what are the biggest challenges organizations encounter when integrating AI into their cybersecurity strategies?


Integrating AI into cybersecurity strategies introduces several significant challenges for organizations. Based on my experience, one of the foremost difficulties lies in understanding and securing the expanded and complex attack surface that AI solutions create. It's not simply about the traditional web application, network, or endpoint vulnerabilities anymore; the AI layer presents entirely new vectors for adversaries.


Firstly, the AI models themselves, particularly powerful Large Language Models (LLMs), are prime targets and can be vulnerable in distinct ways. Beyond general manipulation, organizations face specific LLM attacks such as prompt injection, where malicious instructions embedded in user input can hijack the model's output or behavior. Furthermore, the threat of LLM data poisoning is a critical concern. Attackers can subtly introduce malicious data into model training datasets, compromising the model's integrity and potentially backdooring its responses or capabilities in subtle ways that are hard to detect once deployed. Ensuring integrity, trustworthiness, and resilience against these types of attacks throughout the AI model's life cycle is a challenging area.
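As a toy illustration of the prompt-injection problem, one first-line (and easily bypassed) defense is screening user input for known instruction-override phrasing before it reaches the model. The patterns below are illustrative assumptions only; real defenses layer input/output filtering, privilege separation, and model-side guardrails.

```python
import re

# Naive, illustrative screen for direct prompt-injection phrases. This is a
# sketch of the concept, not a robust defense: attackers routinely evade
# pattern matching with paraphrase, encoding, or indirect injection.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches any known override phrase (case-insensitive)."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```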


Secondly, the software libraries and APIs that enable communication with and between AI engines represent another critical dimension of a modern-day attack surface. These components are often complex. They may have their own vulnerabilities and misconfigurations. This dynamic can create pathways for attackers to compromise any solution that interacts with those conduits to AI engines. Securing these interfaces and the data flowing through them is paramount. But, the challenge really starts with intimate knowledge of what is within a given ecosystem and so things like accurate Software Bill Of Material (SBOM) manifests are very important.
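To make the SBOM point concrete, here is a minimal sketch of reading component names and versions out of a CycloneDX-style manifest so they can be cross-checked against a vulnerability feed. The SBOM fragment and its component entries are hypothetical examples.

```python
import json

# Minimal CycloneDX-style SBOM fragment (hypothetical components).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "requests", "version": "2.31.0"},
    {"type": "library", "name": "numpy", "version": "1.26.4"}
  ]
}
"""

def list_components(sbom_text: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs from an SBOM so each dependency can be
    checked against vulnerability advisories."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]
```

Knowing exactly which libraries an AI-enabled solution pulls in is the precondition for everything else; without an accurate manifest there is nothing to check.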


Furthermore, the cybersecurity end-user perspective is a frequently underestimated challenge. Cybersecurity end-users (analysts, engineers, etc.) are becoming dependent on the output of AI-powered solutions. The level of scrutiny applied to the output from an AI-powered solution becomes critical. But does this realistically take place? Or do cybersecurity end-users blindly trust AI-powered output because it comes from the product of vendor X, and vendor X is “trustworthy”? Educating all end-users and implementing safeguards around these AI touch points is crucial.


Finally, at the code level, the development and deployment of AI within cybersecurity tools, or as part of the broader infrastructure, introduce unique challenges. Ensuring the security of the AI code itself, managing dependencies, preventing code injection vulnerabilities specific to AI frameworks, and maintaining secure development practices throughout the AI lifecycle are complex tasks that require specialized expertise. In many cases an organization's cybersecurity resources don't really have the ability to ensure these measures are pursued in code written by a commercial vendor. And so a blind-trust model has to exist, and that is a challenge.


In addition to this expanded attack surface, organizations also grapple with challenges like the need for specialized AI security talent, the ethical implications of using AI in security, regulatory compliance, and the potential for AI to be used offensively by attackers. However, the intricate and multifaceted nature of the AI attack surface, spanning models (including specific LLM attack vectors like prompt injection and data poisoning), APIs, users, and code, is arguably one of the most immediate and significant hurdles organizations must overcome to effectively integrate AI into their cybersecurity defenses.


3. As digital identities become increasingly vulnerable, how do you see the role of behavioral intelligence evolving in safeguarding both personal and organizational data?


The escalating vulnerabilities surrounding digital identities are undoubtedly one of the most significant security challenges today. With data breaches becoming commonplace, credentials and active session objects (e.g. cookies) are being exposed. Given the ramifications of these exposures, relying solely on traditional techniques (e.g. passwords, MFA, etc.) for authentication is no longer sufficient to protect enterprises and their data. This is precisely why the concept of Identity Risk Intelligence is becoming paramount, a topic I have addressed multiple times in my personal subject matter writings. As an example, Identity Risk Intelligence is a critical component in disinformation security: https://andresandreu.tech/disinformation-security-identity-risk-intelligence/


The evolution of proactively safeguarding digital identities centers around the adoption of Identity Risk Intelligence, with behavioral intelligence playing a crucial, foundational role. Identity Risk Intelligence moves beyond simply verifying credentials at a single point in time. It's about continuously assessing the risk associated with a digital identity throughout its lifecycle and across all its interactions. It aggregates data from various sources, including behavioral patterns, but also context like location, device health, threat intelligence feeds, and even the sensitivity of the resource being accessed. All of these variables collectively paint a dynamic and nuanced picture of potential risk. Moreover, they set the foundation for learning patterns that can then be used for outlier detection over time.


Behavioral intelligence is a core engine driving Identity Risk Intelligence. By continuously monitoring and analyzing patterns in user behavior (e.g. typing speed, mouse movements, typical login times and locations, access patterns to resources, endpoint posture, etc.), it builds a dynamic profile of what constitutes a 'normal' baseline for each legitimate user. This behavioral baseline is a critical data point fed into the broader Identity Risk Intelligence system.
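A minimal sketch of such a baseline, assuming a single signal (login hour) and a simple z-score test; a production system would model many behavioral signals jointly and learn the baseline continuously.

```python
from statistics import mean, stdev

# Hypothetical per-user history of login hours (0-23). A real baseline would
# span many signals: typing cadence, locations, resource access patterns, etc.
login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]

def hour_anomaly_score(history: list[int], new_hour: int) -> float:
    """Z-score of a new login hour against the user's historical baseline;
    larger values indicate behavior further from 'normal'."""
    mu, sigma = mean(history), stdev(history)
    return abs(new_hour - mu) / sigma if sigma else float("inf")
```

A 3 a.m. login against the history above scores far outside the baseline, while a 9 a.m. login scores well within it, which is exactly the kind of outlier signal a risk engine can consume.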


For personal data, applying Identity Risk Intelligence means that security isn't just about a strong password or an MFA token at login. It means the system is constantly evaluating the risk of the ongoing session based on the user's behavior compared to their history. It means analyzing the reputation of a source IP address and the state of the device being used, amongst many other elements. A login might be permitted initially, but unusual behavior (e.g. attempting to transfer a large sum of money immediately after logging in from a new device in a foreign country) would increase an identity risk score on the fly, potentially triggering step-up authentication, a transaction hold, or alert triggers.


For organizational data, Identity Risk Intelligence, powered by behavioral insights, is crucial for detecting both external compromises and insider threats. By correlating behavioral anomalies (e.g. unusual data access patterns, activity at odd hours, etc) with other risk factors (e.g. known threats, user role changes, access to sensitive systems, etc), a system can identify high-risk situations in real-time. This allows security teams to move from reactive incident response to proactive risk mitigation. It provides the context needed to understand why a particular behavior is risky, rather than just flagging an isolated event after the fact.
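One common way to combine such signals is a weighted risk score that maps to graduated responses like step-up authentication. The sketch below uses illustrative weights and thresholds of my own choosing, not any vendor's actual policy.

```python
# Illustrative signal weights; a real system would tune these from data and
# incorporate many more inputs (device health, IP reputation, role changes...).
WEIGHTS = {
    "new_device": 0.25,
    "foreign_geo": 0.25,
    "behavior_anomaly": 0.35,
    "credentials_in_breach": 0.15,
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of all active risk signals."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def decide(signals: dict[str, bool]) -> str:
    """Map the score to a graduated response rather than a binary allow/deny."""
    score = risk_score(signals)
    if score >= 0.6:
        return "block_and_alert"
    if score >= 0.3:
        return "step_up_auth"
    return "allow"
```

The point of the graduated mapping is the one made above: a login can be permitted initially, with the response escalating on the fly as in-session signals accumulate.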


As I've emphasized in some of my work, Identity Risk Intelligence provides organizations and individuals with a more dynamic, data-driven, and ultimately more resilient defense against the increasing sophistication of identity-based attacks.


By integrating behavioral intelligence into a holistic risk assessment framework, we can move closer to truly safeguarding digital identities, and the sensitive personal and organizational data they have access to, in an increasingly vulnerable digital landscape.


4. In your view, what are the key factors driving investment in cybersecurity today—regulatory pressure, the evolving threat landscape, or customer demand for digital trust?


Regulatory pressure, the continuously evolving threat landscape, customer demand for digital trust, and enterprise demand for protective solutions are all significant drivers of cybersecurity investment. In my view, it's the convergence and amplification of those four factors, viewed through the lens of market dynamics and investment theses from firms like Forgepoint Capital, Merlin Ventures, and Team8, that truly defines the current trends in spending. It's not a single driver, but rather a powerful feedback loop where each element reinforces the others, creating a compelling case for continuous and increasing investment.


The evolving threat landscape remains a primary and undeniable catalyst. As firms like Forgepoint Capital, Merlin Ventures and Team8 have highlighted based on their investments, the sophistication and diversity of attacks are constantly increasing. They encompass everything from advanced ransomware and supply chain compromises to AI-powered fraud, disinformation security, and state-sponsored espionage. The tangible and ever-present danger, and potentially crippling financial and operational losses, are steering organizations to invest proactively in the pursuit of resilience and the ability to mitigate risk. Most entities in the industry, including investors, understand that breaches are basically inevitable. The goal is to make an attacker's work factor high enough that your organization becomes unappealing.


This heightened threat landscape exists within an increasingly stringent regulatory environment. Governments worldwide are imposing stricter data protection and cyber resilience requirements. While compliance can be viewed as a cost center, when done properly it can fundamentally drive a baseline level of strategic security investment. Moreover, as Team8's reports have indicated, evolving regulations are actively driving innovation in areas like data security and privacy. The increasing scrutiny and potential personal liability faced by CISOs, as also noted in industry discussions involving firms like Team8, further underscore the seriousness of regulatory mandates and the need for robust security postures that require significant investment in technology and processes.


Crucially, both the threat landscape and regulatory demands are contributing to heightened customer demand for digital trust along with enterprises demanding protective solutions. Consumers and business partners are more aware of cyber risks and are increasingly making decisions based on an organization's ability to protect their data and ensure service availability.


Forgepoint Capital explicitly recognizes continuous trust and identity management as a key investment theme, underscoring the market's need for solutions that build and maintain trust in digital interactions. Cybersecurity is no longer just a back-office function; it's becoming a fundamental business component and a critical factor in maintaining customer loyalty and competitive advantage. This demand for trust necessitates investment in visible, effective security measures and a demonstrated commitment to protecting digital assets. This includes addressing the unique needs of various market segments, such as the underserved SMBs that Forgepoint also focuses on.


Firms like Forgepoint Capital, Team8, and Merlin Ventures, through their investment strategies, provide a clear market validation of these drivers. Their focus on innovative companies in areas like identity security, cloud security, proactive risk management, and leveraging AI for defense directly reflects the urgent needs created by the evolving threat landscape, regulatory imperatives, and the market-wide demand for digital trust and resilience.


Therefore, while it's possible to analyze each factor individually, the significant and sustained investment in cybersecurity today is best understood as a response to the combined, reinforcing pressure from sophisticated threats, mandatory regulatory requirements, and the critical business need to build and maintain digital trust with customers and partners. This is a reality clearly reflected in the investment priorities of leading cybersecurity venture firms.


5. Looking ahead, how do you envision the collaboration between human expertise and AI systems in strengthening cybersecurity defenses?


Looking ahead, I firmly believe the future of strengthening cybersecurity defenses lies not in the replacement of human expertise by AI, but in a powerful and increasingly sophisticated collaboration between human security professionals and advanced AI systems. This synergy will be essential to effectively combat the escalating scale and complexity of cyber threats, and a key development in this collaboration will be the rise of decentralized agentic AI. I envision AI systems acting as highly capable force multipliers and intelligent assistants for human analysts and defenders.


AI, particularly Swarm AI, is uniquely positioned to handle the immense volume of fragmented data generated across networks, endpoints, and web applications. The volume and fragmentation we see in the industry far exceed human capacity for real-time monitoring and analysis. Even many SIEM products have proven ineffective at this scale. AI fields like Machine Learning (ML) can excel at identifying subtle patterns, anomalies, and indicators of compromise buried within this fragmented and distributed sea of noise.


The evolution towards decentralized agentic AI marks a significant shift in the industry. Instead of relying on a single, monolithic security system, we will see a vast network of specialized, autonomous AI agents working collaboratively. Each agent could be designed with specific expertise. One can focus on endpoint anomalies, another on network traffic patterns, a third on identity behavior, and so on. These agents will have the ability to act autonomously to achieve specific security goals with minimal human intervention. But they will also be able to seamlessly share data and act in unison.


Think of this decentralized agentic AI as a highly distributed and intelligent security team. Individual agents can:


  • Operate autonomously - make real-time decisions and take actions within the domain they have been programmed for. This could be isolating a suspicious process on an endpoint or blocking traffic from a malicious IP that was detected by a peer agent, without waiting for human or central command.

  • Collaborate and share intelligence - agents can communicate and share findings with each other, correlating information across different security layers to identify more complex, multi-stage attacks that might be missed by isolated or slow systems.

  • Adapt and learn independently - decentralized agents can continuously learn from new data and adapt their detection and response strategies in their specific domain, contributing to the resilience of the overall defense system.
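The cooperating-agent pattern described above can be sketched as a toy Python example, with a shared bus standing in for the agents' communication layer; the agent types, thresholds, and blocklist action are illustrative assumptions, not a real product architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Bus:
    """Minimal publish/subscribe channel the agents share findings over."""
    findings: list = field(default_factory=list)
    subscribers: list = field(default_factory=list)

    def publish(self, finding: dict) -> None:
        self.findings.append(finding)
        for agent in self.subscribers:
            agent.on_finding(finding)

@dataclass
class NetworkAgent:
    bus: Bus

    def observe(self, src_ip: str, bytes_out: int) -> None:
        # Autonomous local decision: flag unusually large egress traffic.
        if bytes_out > 10_000_000:
            self.bus.publish({"type": "exfil_suspect", "ip": src_ip})

@dataclass
class EndpointAgent:
    blocked: set = field(default_factory=set)

    def on_finding(self, finding: dict) -> None:
        # Act on a peer agent's finding without waiting for central command.
        if finding["type"] == "exfil_suspect":
            self.blocked.add(finding["ip"])
```

Here the network agent detects in its own domain and the endpoint agent responds in its domain, which is the "share data and act in unison" behavior described above in miniature.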


This decentralized, agentic approach offers many advantages. It enhances the speed and scalability of security operations, allowing defenses to keep pace with fast-moving threats. It can reduce alert fatigue by handling low-to-medium severity incidents autonomously. Furthermore, a decentralized architecture can offer greater resilience; the compromise of one agent doesn't necessarily cripple the entire system.


However, human expertise will remain absolutely indispensable and will evolve to focus on higher-level cognitive tasks. After all, someone has to program and build those autonomous agents and the network they create. Humans possess the critical thinking, intuition, and contextual understanding that AI currently lacks. The roles of human professionals will shift towards:


  • Strategic oversight and management of AI agents - designing the overall strategy for the agent network, setting goals for the agents, monitoring their performance, and ensuring their actions align with organizational risk tolerance and compliance requirements.

  • Validating and tuning AI models - providing feedback to improve agent accuracy, address biases, and ensure the AI is not generating false positives or negatives.

  • Threat intelligence and proactive defense - analyzing the broader threat landscape, understanding attacker motivations, and developing new defensive strategies that can then be implemented or learned by some set of AI agents.

  • Handling ethical and legal considerations - navigating the complex ethical and/or legal implications of autonomous security actions and ensuring compliance with regulations.


The most effective cybersecurity teams in the future will be those that seamlessly integrate decentralized agentic AI into their strategies and workflows. This collaboration will leverage strengths on both sides. AI's speed, scalability, and analytical power will be leveraged for autonomous defense actions and threat detection. Human experts will focus on the strategic management of the AI collective, complex problem-solving, threat intelligence, and the crucial human elements of cybersecurity that require judgment, creativity, and ethical reasoning.


This partnership, where decentralized AI agents act as intelligent, autonomous extensions of the human team, is, in my view, the most realistic and effective path to strengthening our cybersecurity defenses against the challenges ahead.
