
Can Europe Compete in AI? Insights from Hacktron AI’s Zeyu Zhang

  • Writer: Juan Allan
  • Jun 30
  • 4 min read

Zeyu Zhang, CEO of Hacktron AI, discusses Europe’s evolving LLM landscape, regulatory challenges, and how Hacktron leverages AI for cybersecurity, offering insights into Europe’s unique approach to AI innovation.



As the European Union accelerates its regulatory and infrastructural support for artificial intelligence, European companies are poised to carve out a unique and competitive niche in the global LLM (Large Language Model) ecosystem, especially in sectors like financial services and cybersecurity.


However, regulatory hurdles, data sovereignty, and infrastructure gaps remain significant challenges. In this interview, Zeyu (Zayne) Zhang, Co-Founder and CEO of Hacktron AI, offers expert insights into how Europe’s LLM landscape is evolving, the impact of the EU AI Act, and how Hacktron is leveraging AI to redefine cybersecurity.

How would you describe the current landscape of LLM adoption across industries in Europe? Are certain sectors emerging as frontrunners?

There is a rapid diffusion of LLM adoption across diverse sectors in Europe, with a notable acceleration in the last 12-18 months. However, maturity remains uneven—Europe’s largest enterprises have largely adopted LLMs at scale, while mid-cap companies still lag behind their American counterparts. The financial services sector is emerging as a frontrunner, closely followed by cybersecurity, which is now Europe’s fastest-moving GenAI use case after financial services, supported heavily by the European Cybersecurity Competence Centre (ECCC).

What are the main regulatory or ethical challenges European AI startups face when developing and deploying LLMs?

The EU’s GDPR presents a significant regulatory challenge for AI startups developing and deploying LLMs. Since LLM development relies on vast datasets, startups must ensure careful data governance, employ anonymization or pseudonymization techniques, and maintain robust mechanisms for data subject rights requests (such as the right to be forgotten). There is also increasing emphasis on compute sovereignty, with some public tenders requiring that training and inference remain within the EU. This trend is expected to continue as Europe seeks to secure its digital autonomy.

How is the European Union’s AI Act expected to influence the pace of LLM innovation and deployment compared to the U.S. and Asia?

The AI Act is ex-ante, raising entry costs but sharply differentiating “trustworthy-by-design” vendors. This results in a higher barrier to entry but encourages a more thorough design process. In the U.S., the absence of a federal AI law and reliance on ex-post oversight by the FTC means time-to-market may be faster. In China, the pace of LLM innovation remains high, with regulations focusing mainly on content security.

What role do public-private partnerships and EU-funded initiatives play in supporting the growth of the AI/LLM ecosystem in Europe?

The EU has established a dense web of public–private programs that now serve as the primary scaffolding for its LLM economy. Vendor-led alliances, such as the NVIDIA Blackwell partnership, play a key role in securing robust domestic AI infrastructure.

Do European AI companies have enough access to high-quality data and computing infrastructure to compete globally in LLM development?

Europe’s chronic shortages of frontier-class GPUs and curated datasets are easing thanks to developments like the NVIDIA Blackwell clusters, though those shortages remain the region’s principal competitive drag. On the data side, the Data Governance Act is creating sector-specific “data spaces” (health, mobility, finance) that standardize licensing and technical interfaces, making it legally and technically simpler for startups to train on high-quality, domain-rich corpora.

How are European LLM providers differentiating themselves from U.S. tech giants—through language specialization, privacy, open-source models, or something else?

European providers differentiate less on raw model scale and more on trust, transparency, and linguistic depth. For example, Mistral AI emphasizes the open-weight movement: its Mixtral 8×7B and subsequent models are released under an Apache 2.0 license, allowing enterprises to run them entirely on-premise and avoid U.S. export-control or API-throttling risks.

Can you explain how Hacktron.ai leverages LLMs to enhance threat detection and incident response in cybersecurity?

Hacktron is building a fully autonomous security researcher. The company leverages LLMs to detect vulnerabilities before they reach production and produces industry-leading offensive security research and penetration testing with AI assistance. As models improve, state actors will soon have access to “AI hackers” that scale far beyond what was previously possible with human capital. Hacktron acts as an equal and opposite force, working on behalf of the good guys to keep customers ahead of the curve.

What sets Hacktron.ai apart from other AI-driven cybersecurity solutions in terms of innovation and practical deployment in Europe?

Hacktron approaches the security problem with the understanding that the scaling laws of AI agents are constrained by knowledge. The ability of AI solutions to find and exploit novel vulnerabilities correlates directly with the amount of domain-specific knowledge they access. Hacktron’s team consists of seasoned security researchers, CTF champions, and bug-bounty leaders, offering a depth of expertise rarely matched by competitors. The company is focused on distilling the most relevant industry expertise into its AI agents, aiming for tight integration into the software engineering lifecycle and full-stack operation across the development cycle. For practical deployment in Europe, robust data protection is paramount—Hacktron’s current offering ensures customer data is not retained on cloud servers by model providers.
