
Generative AI: From Innovative Tool to Vector for Data Leaks

  • Writer: Juan Allan
  • Sep 11
  • 3 min read

As José Amado, Cybersecurity Outsourcing Manager at SISAP, explains: "The risk lies not in artificial intelligence as a technology, but in the way organizations implement it"



Over the past two years, the adoption of generative artificial intelligence has skyrocketed in companies around the world. Tools such as ChatGPT, Gemini, and Copilot are no longer technological curiosities but have become everyday work tools: they are used to write reports, analyze data, create code, and even design strategies.


But enthusiasm for innovation has opened up a critical gap: the inadvertent leakage of sensitive information. Verizon's 2025 Data Breach Investigations Report (DBIR), compiled with data provided by leading cybersecurity organizations such as SISAP, shows an alarming trend: 15% of the companies surveyed allow their employees to access generative AI from personal accounts, without clear controls or policies, and in 72% of those cases, workers share confidential data without being aware of the risks.


The double edge of innovation


Generative AI is an engine of efficiency. It speeds up repetitive tasks, generates content in seconds, and democratizes access to advanced insights. However, that same power carries an invisible threat:


  • Data entered into AI platforms can be stored, used to train models, or even leaked.

  • The absence of corporate policies encourages improvisation: employees copy and paste financial reports, source code, or customer information without considering the consequences.

  • Attackers are already exploiting AI: from more sophisticated phishing to malware written in natural language.


As José Amado, Cybersecurity Outsourcing Manager at SISAP, explains:


"The risk lies not in artificial intelligence as a technology, but in the way organizations implement it. Without clear rules, AI becomes a back door through which a company's most sensitive information can leak out.".


The real threat: data leaks and corporate espionage


The DBIR 2025 also highlights that digital espionage motivations grew by 163% in the last year. In this context, unregulated generative AI is the perfect scenario for leaking strategic information, either accidentally or as part of advanced social engineering campaigns.


A financial executive who asks an external AI to project cash flow, an IT analyst who analyzes logs in a public chat, or a lawyer who uses ChatGPT to polish a contract may be exposing data that then circulates outside the organization's control.


Towards a culture of cybersecurity with AI


The challenge for business leaders is not to ban AI, but to govern it. Organizations that manage to harness its potential with clear policies will be the ones that make the difference between safe innovation and strategic vulnerability.


José Amado offers these key recommendations:


  1. Define corporate policies for the use of generative AI. Establish what data can and cannot be shared.

  2. Provide internally approved tools. Set up private or secure AI instances for business use.

  3. Train employees. The human factor is still present in 60% of breaches; training is key to reducing risks.

  4. Monitor and audit interactions. Periodically review how and with what data AI is being used in the organization (see the sketch after this list).

  5. Include AI in the risk management plan. Do not treat it as a "technological novelty," but as an integral part of the cybersecurity strategy.
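To make recommendation 4 more concrete, here is a minimal, illustrative sketch of how an organization might screen prompts for obviously sensitive content before they reach an external AI service. The function names, categories, and patterns below are assumptions chosen for illustration; they are not drawn from the DBIR, from SISAP's services, or from any particular product.

    import re

    # Illustrative patterns a company might flag before a prompt reaches an external AI service.
    # Both the categories and the regular expressions are assumptions, not an established standard.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "confidentiality marker": re.compile(r"\b(confidential|internal use only|nda)\b", re.IGNORECASE),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the categories of sensitive data detected in a prompt, if any."""
        return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

    # Example: block (or redact and log) a prompt that contains flagged content.
    findings = screen_prompt("Summarize this CONFIDENTIAL report and email it to ana@example.com")
    if findings:
        print("Prompt blocked for review; detected:", ", ".join(findings))
    else:
        print("Prompt allowed")

In practice, a check like this would typically sit in a gateway or proxy in front of the internally approved AI tools mentioned in recommendation 2, with every match logged so interactions can be audited later.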


"Companies that incorporate artificial intelligence responsibly will be one step ahead. Those that don't will see their most valuable asset—information—become the weakest point in their digital strategy." – José Amado, Cybersecurity Outsourcing Manager at SISAP:
